Cricket as a state machine

I'm creating a cricket manager stats game. I need to create a ball-by-ball simulation of the game. The game/ball outcome will be influenced by player stats and other external factors like weather or chosen tactics.
I've been reading that most games can be implemented as state machines, which sounds appealing to me, but because I'm a newbie at cricket I'm failing to envision this game as a state machine.
Should the ball be a state machine, or the match, or the player, or all three? I'm also not sure how I will orchestrate these state machines (through events?).
I'm also having a hard time identifying the states and transitions. Any help would be greatly appreciated.

So here's what I understand from your question: your cricket manager game will simulate a match ball by ball, depending on player stats (bowler's skill/experience, batsman's skill/experience, fielding/wicketkeeping stats, and so on) and other related variables. From my understanding this will be more of an algorithmic engine than a visual representation of a cricket game.
Now, answering your question: first of all, I don't believe you're looking at FSMs the right way. An FSM is a piece of code designed such that at any point in its lifetime it is in one of many possible states of execution. Each state can have, and usually has (that's the point of it), a different update routine. Also, each state can transition to another state upon predefined triggers/events. What you need to understand is that states implement different behaviour for the same entity.
Now, "most of the games can be implemented as a state machine" - Not "a" state machine but rather a whole nest of state machines. Several manager classes in a game, the renderer, gameplay objects, menu systems, more or less everything works off a state machine of its own. Imagine a game character, say a boxer, for the purpose of this example. Some states you'll find in the 'CBoxer'(?) class will be 'Blocking', 'TakingHit', 'Dodge', RightUpper', 'LeftHook' and so on.
Keep in mind, though, that FSMs are more of a design construct - a way to envision the solution to the problem at hand. You don't HAVE to use them. You could make a complete game without a state machine (I think :) ). But FSMs make your code design really intuitive and straightforward, and it's frankly difficult not to find one in any decent-sized project.
I suggest you take a look at some code samples of FSMs at work. Once you get the idea behind it, you'll find yourself using them everywhere :)

As a first step you should go through the rules of cricket and your model for the ball outcome, and summarise how previous balls affect a given ball.
Then identify what you need to keep track of, and whether a state machine is a convenient representation for it. Statistics, for example, are usually not very convenient to model as FSMs.
With that information in mind, you should be able to build a model. Each piece of information you track might be its own state machine, or just an internal value of a particular state. The interactions between balls will dictate the transitions and the events circulating from one machine to another.
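For illustration only, here is one possible shape for that in Python. Every state name, outcome and weight below is an assumption, not a prescribed design:

import random

BALL_OUTCOMES = ["dot", "single", "boundary", "wicket"]

def simulate_ball(bowler_skill, batsman_skill):
    # Skills are assumed to be floats in 0.0-1.0; the weights are
    # purely illustrative and would come from your real model
    # (weather, tactics, fatigue, ...).
    edge = batsman_skill - bowler_skill
    weights = [4.0, 3.0 + edge, 1.0 + max(edge, 0.0), 1.0 + max(-edge, 0.0)]
    return random.choices(BALL_OUTCOMES, weights=weights)[0]

def simulate_innings(overs, bowler_skill, batsman_skill):
    # The innings is the state machine: each ball outcome is an event
    # that may transition it from "in_progress" to "all_out".
    state, runs, wickets = "in_progress", 0, 0
    for _ in range(overs * 6):
        if state != "in_progress":
            break
        outcome = simulate_ball(bowler_skill, batsman_skill)
        if outcome == "wicket":
            wickets += 1
            if wickets == 10:
                state = "all_out"
        elif outcome == "single":
            runs += 1
        elif outcome == "boundary":
            runs += 4
    return runs, wickets

Note how statistics like the run total live as plain values inside the machine, while the interesting transitions (all out, end of overs) stay explicit.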

Related

How to model UNO as a POMDP

I am trying to model the UNO card game as a Partially Observable Markov Decision Process (POMDP). I did a little bit of research and came to the conclusion that the states will be the number of cards, and the actions will be either to play a card or to pick one from the unseen deck. I am facing difficulty in formulating the state transition and observation model. I think the observation model will depend on past actions and observations (the history), but for that I need to relax the Markov assumption. I want to know whether relaxing the Markov assumption is the better choice or not. Additionally, how exactly should I form the state and observation model? Thanks in advance.
I think in a POMDP the states should still be the "full truth" (position of all the cards) and the transitions are simply the rules of the game (including the strategy of the other players?!). The observations should certainly not depend on any history, only on the state, or else you're violating the Markov assumption. The point of a POMDP is that the agent can gain information about the current state by analyzing history. I'm not really sure if or how this applies to UNO, though. If you know which cards have been played and their order, can you still gain information by using the history? Probably not. Not sure, but maybe it does not make sense to think of this game as a POMDP, even if you use a solution that was designed for a POMDP.
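For what it's worth, a minimal sketch of that framing in Python (the fields are my guesses at what the "full truth" and a state-only observation could look like for UNO):

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class UnoState:
    # The "full truth": every hand plus the hidden deck order.
    hands: Tuple[Tuple[str, ...], ...]
    deck: Tuple[str, ...]
    discard_top: str

def observe(state, player):
    # The observation is a function of the current state only,
    # which is exactly what keeps the Markov assumption intact.
    return (state.hands[player],                 # my own cards
            state.discard_top,                   # visible top card
            tuple(len(h) for h in state.hands))  # everyone's card counts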

Top down Game AI

I'm creating a game that requires the units onscreen to fight each other, based on teams and designated enemies for each team. The player doesn't control any of the tanks or teams.
The issue is that the battle between the units (tanks at the moment) should be interesting enough to the player that they can watch and have fun without doing anything.
I currently have the tanks moving around totally randomly and shooting at each other when in range, but I'm looking for something smarter.
What types of ai and ai algorithms should I look into? All ideas are welcome, I simply want to make every battle interesting.
For strategies and tactics, your AI probably needs to do some rational decision making to make it look smarter. There are many ways to do this; the simplest is to write down a couple of condition-action rules for your tanks and implement them as a finite state machine. FSMs are simple to implement and easy to debug, but they get tedious later when you want to revise the condition rules or add/remove states.
You can also use utility agents: the AI periodically performs a utility check on each potential goal (e.g. engage, retreat, reload/refuel, take cover, repair) based on current stats (ammo, health, enemy counts and locations) and then chooses the most preferable goal (see the sketch below). This takes more time to implement than an FSM, but it's more flexible in that you don't need to change the decision flow when you add or remove behaviors. It makes the AI look like it follows a general rule while not always being predictable. A utility agent is also harder to debug and control, because when your AI goes crazy you don't have rigid condition-action rules to trace like you do with an FSM.
Another popular method is the behavior tree, where action sequences are implemented as a tree structure. It requires more code upfront, but usually gives you a better balance between control and flexibility than FSMs and utility agents. These decision making processes are not mutually exclusive - you can use one method for top-level strategy and a different one for low-level tactics.
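A bare-bones sketch of that utility check in Python; the goals and scoring formulas are invented for illustration and would need real tuning:

def choose_goal(ammo, health, enemies_in_range):
    # All stats assumed normalized to 0.0-1.0; the weights are made up.
    scores = {
        "engage":  ammo * 0.5 + min(enemies_in_range, 3) * 0.3,
        "retreat": (1.0 - health) * 1.5,
        "reload":  (1.0 - ammo) * 0.8,
        "repair":  (1.0 - health) * 0.6 if enemies_in_range == 0 else 0.0,
    }
    return max(scores, key=scores.get)

# Low health with enemies nearby outweighs engaging:
print(choose_goal(ammo=0.9, health=0.2, enemies_in_range=2))  # -> retreat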
Whatever decision making process you choose, you need some input to feed to your AI. You can use an influence map to help the AI determine which parts of the battlefield are considered hostile and which are safe. The influence map is shared within a team, so it can also help with group tactics. When your AI engages multiple enemies, selecting the right target is important. If your AIs pick a target that most human players wouldn't, the player is going to feel the AI is "stupid", even if the chosen target is sometimes actually the best one. You can run a distance check on the enemy units and filter/prioritize targets by line of sight, current weapon range, threat level, etc. Some tests are more expensive than others (a line of sight check is usually one of the worst offenders), so if you have a lot of enemy units in range you want to run the slower tests last.
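Something like this, where has_line_of_sight stands in for whatever your engine actually provides and enemies are assumed to carry x, y and threat fields:

import math

def distance(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def pick_target(me, enemies, weapon_range, has_line_of_sight):
    # Cheap test first: drop everything out of weapon range.
    candidates = [e for e in enemies if distance(me, e) <= weapon_range]
    # Check the most threatening candidates first so the expensive
    # line-of-sight test runs as few times as possible.
    candidates.sort(key=lambda e: e.threat, reverse=True)
    for enemy in candidates:
        if has_line_of_sight(me, enemy):  # expensive test, done last
            return enemy
    return None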
For the tanks' movement, look into steering behaviors. They cover a lot of vehicle movement patterns, but pursue and evade are the ones you need most. Also look into A* for pathfinding if your tanks need to navigate complex terrain. There are other good pathing solutions that give you the shortest/fastest path, but in a game the shortest/fastest path is not always the optimal path. If your shortest path is open but runs too close to the enemy line, you want to give your tank some heuristic to take a different route. You can easily configure that kind of path preference with A*.
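Pursue, for instance, boils down to steering at the target's predicted position rather than its current one. A minimal sketch (2D tuples, with a crude time-to-intercept guess):

import math

def pursue(pos, target_pos, target_vel, max_speed):
    # Predict where the target will be, then steer toward that point.
    dist = math.hypot(target_pos[0] - pos[0], target_pos[1] - pos[1])
    t = dist / max_speed  # rough time to intercept
    future = (target_pos[0] + target_vel[0] * t,
              target_pos[1] + target_vel[1] * t)
    desired = (future[0] - pos[0], future[1] - pos[1])
    length = math.hypot(*desired) or 1.0
    return (desired[0] / length * max_speed,
            desired[1] / length * max_speed)

# Evade is the same calculation with the resulting vector negated.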
Things to look into: finite state machines, utility-based agents, behavior trees, steering behaviors, the A* search algorithm, navigation waypoints or navigation meshes, influence maps.
The simplest thing would be to have them drive in a random direction, and when there is an enemy tank within range, start shooting until one of them is destroyed. You could also have them randomly retreat when their health gets too low, and try adding group tactics where any tank that is not engaged joins (with some probability, so that maybe it will, maybe it won't - just to keep things interesting) its nearest neighbour in combat.
If you're looking for algorithms, A* ("A-Star") is a generic path-finding algorithm that could help your tanks move around, but I don't know of any generic algorithms to control the battles.

How to imitate a player in an online game

I'd like to write an application, which would imitate a player in an online game.
About the game: it is a strategy, where you can:
train your army (you have to have enough resources, then click on a unit, click train)
build buildings (mines, armories, houses,...)
attack enemies (select a unit, select an enemy, click attack)
transport resources between buildings
make researches (economics, military, technologic,...)
This is a simplified list and is just an example. Main thing is, that you have to do a lot of clicking, if you want to advance...
I already have the 'navigational' part of the application (I used the WatiN library - http://watin.sourceforge.net/). That means I can use high-level objects and manipulate them, for example:
Soldiers soldiers = Navigator.GetAllSoldiers();
soldiers.Move(someLocation);
Now I'd like to take the next step - write a kind of AI, which would simulate my gaming style. For this I have two ideas (and I don't like either of them):
login to the game and then follow a bunch of if statements in a loop (check if someone is attacking me, check if I can build something, check if I can attack somebody, loop)
design a kind of scripting language and write a compiler for it. This way I could write simple scripts and run them (Login(); CheckForAnAttack(); BuildSomething(); ...)
Any other ideas?
PS: some might take this as cheating and it probably is, but I look on this as a learning project and it will never be published or reselled.
A bunch of if statements is the best option if the strategy is not too complicated. However, this solution does not scale very well.
Making a scripting language (or domain-specific language, as one would call it nowadays) does not buy you much. You are not going to have other people create AI agents, are you? You can better use your programming language for that.
If the strategy gets more involved, you may want to look at Bayesian Belief Networks or Decision Graphs. These are good at looking for the best action in an uncertain environment in a structured and explicit way. If you google on these terms you'll find plenty of information and libraries to use.
Sounds like you want a finite state machine. I've used them with various degrees of success in coding bots. Depending on the game you're botting, you could be better off coding an AI that learns, but it sounds like yours is simple enough not to need that complexity.
Don't make a new language, just make a library of functions you can call from your state machine.
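Something along these lines, say (the game object and its action functions are placeholders for whatever your WatiN navigation layer already exposes):

def run_bot(game):
    # Tiny state machine driving library calls; no custom language needed.
    state = "idle"
    while game.logged_in():
        if game.under_attack():       # events can interrupt any state
            state = "defending"
        elif state == "idle" and game.can_build():
            state = "building"
        if state == "defending":
            game.recall_soldiers()
            state = "idle"
        elif state == "building":
            game.build_cheapest()
            state = "idle"
        game.wait(seconds=30)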
Most strategy game AIs use a "hierarchical" approach, much in the same way you've already described: define relatively separate domains of action (i.e. deciding what to research is mostly independent from pathfinding), and then create an AI layer to handle just that domain. Then have a "top-level" AI layer that directs the intermediate layers to perform tasks.
How each of those intermediate layers works (and how your "general" layer works) can each be determined separately. You might come up with something rather rigid and straightforward for the "What To Research" layer (based on your preferences), but you may need a more complicated approach for the "General" layer (which is likely directing and responding to inputs from the other layers).
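As a sketch of that structure (all the class and method names here are invented):

class ResearchLayer:
    def next_task(self, world):
        # A rigid preference list is often good enough for this domain.
        return "research economics"

class MilitaryLayer:
    def next_task(self, world):
        return "train soldiers" if world.get("threat", 0) > 0 else None

class GeneralLayer:
    # The top-level layer just directs the domain layers.
    def __init__(self):
        self.layers = [MilitaryLayer(), ResearchLayer()]

    def tick(self, world):
        for layer in self.layers:  # priority order: military first
            task = layer.next_task(world)
            if task is not None:
                return task

# e.g. GeneralLayer().tick({"threat": 1}) -> "train soldiers"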
Do you have the source code behind the game? If not, it's going to be kind of hard tracing the positions of each CPU you (your computer, in your case) are battling against. You'll have to develop some sort of plugin that can do it, because from the sound of it you're dealing with some sort of RTS, which requires evaluating a lot of different position scenarios between a lot of different CPUs.
If you want to simulate your movements, you could trace your mouse using some WinAPI calls quite easily. You can also record your screen as you play (which probably won't help much, but might be of assistance if you're determined enough).
To be brutally honest, what you're trying to do is damn near impossible for the type of game you're playing with. You don't seem to have thought this through yet. Programming is a useful skill, but it's not magic.
Check out some stuff (if you can find any) on MIT Battlecode. It might be up your alley in terms of programming for this sort of thing.
First of all I must point out that this project (which only serves educational purposes) is too large for a single person to complete within a reasonable amount of time. But if you want the AI to imitate your personal playing style, another alternative that comes to mind is neural networks: you play the game a lot (really a lot), record all the moves you make, and feed that data to such a network; if all goes well, the AI should play roughly the same as you do. But I'm afraid this is just a third idea you won't like, because it would take a tremendous amount of time to get it right.

Need suggestions for an Applied AI project

I have a course in my current semester in which I'm required to do a project on an application of AI. I have decided to do this on game AI. I have two basic ideas: implementing an FPS bot (or bots) or implementing soccer AI.
I'm quite a noob at AI right now. I've implemented basic pathfinding algorithms (A*, etc.), and have studied finite state machines, some first-order logic, basic neural network stuff (the backpropagation algorithm), and am currently doing a course on genetic algorithms.
Our main focus is on the bot right now. Our plans include:
Each 'bot' would be implemented using a finite state machine (FSM), which would contain the possible states the bot could have and the rules for the action/state changes that take place when it receives an input.
In bot group movement, each bot would decide whether to strike, and how to strike, based on range, number of bots, and existing fights, using neural networks.
By using genetic algorithms, the opponent's next move could be anticipated based on repetitive moves.
Although I've programmed a few 2D games in my free time (like Pacman, Tetris, etc.), I've never really gone into the 3D area. We will most probably be using a 3D engine.
We want to concentrate most of our energy on the AI part, and would rather not be bothered with unnecessary details about the animation, 3D models, etc. For example, if we could find a framework with functions like Moveright() which just moves the bot to the right, it would be really awesome.
My basic question is: is it too ambitious to go about it in the way we have planned, considering the duration of the project is about 3 months? Should we go 3D and use a 3D game engine? Is it easy to use such engines if you have no prior experience with them? If yes, what kind of engine would be suitable for our project?
I came across another idea, given in the book AI Game Programming by Example, where the player has a top-down view of the bots. Would that approach be more appropriate?
Thanks - sorry about the length of the question; it's just that my problem is a bit too specific.
My basic question is: is it too ambitious to go about it in the way we have planned, considering the duration of the project is about 3 months?
Yes -- but that's not necessarily a bad thing :)
Should we go 3D and use a 3D game engine?
No. Mainly because you said:
We want to concentrate most of our energy on the AI part.
Here's what I'd do, based on my experience (and knowing that, as a student, I often bit off way more than I could chew, too):
Make your simulation function independently of any graphical component. Have it publish "updates" to another layer that consist of player and ball vectors. By doing so you'll be keeping your AI tasks separate from everything else, which means you have fewer bugs to worry about, and you can also unit test your underlying simulation much more easily.
Take those "updates" and create your first "visualization" layer - make it the simplest 2D representation possible. It could just be a stream of text lines: "Player 1 has the ball / Player 1 kicked ball at (30,40) with speed 20kph". That will be hard enough for your first pass, since you'll be figuring out how to take data published by the simulation and do something with it.
Your next visualization might add a 2D grid of ANSI graphics (think roguelike) to actually show players and the ball moving. The one after that might use sprites. And so on. Note how you incrementally increase the complexity of your visualization... don't make your first step a jump to a technology (a 3D graphics engine) you've never used before. (You'll never finish your project in that case.)
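A skeleton of that separation, assuming a simple callback-based publish mechanism (the update fields are invented):

class Simulation:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def step(self):
        # ... run one tick of AI and physics here, then publish the result.
        update = {"player": 1, "event": "kicked ball",
                  "pos": (30, 40), "speed_kph": 20}
        for callback in self.subscribers:
            callback(update)

# The first "visualization" is just a stream of text lines:
sim = Simulation()
sim.subscribe(lambda u: print(
    "Player %d %s at %s with speed %dkph"
    % (u["player"], u["event"], u["pos"], u["speed_kph"])))
sim.step()

The ANSI-grid and sprite visualizations later become just more subscribers; the simulation itself never changes.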
As for your questions about which route to take - FSMs, NNs, GAs, top-down design - you should rank your interest in them from most to least (along with the rest of your group) and then tackle them in that order. You might consider using one style for one team and a different design for the other team. You might want to make your FSM team play against an FSM team that's had an additional tweak done to it, in order to compare and contrast whether your changes are actually beneficial (you might be surprised and find out they make the team worse). Actually, that's where unit testing and splitting the simulation from the visualization come in very, very handy - you should be able to "sim" as many games as you need to get experimental results, without worrying about graphics. You might even do it in batches overnight with scripts.
In general, my advice to you is this: break down your project into the tiniest pieces you can, and tackle them one at a time, so no matter where you're at when time runs out, you'll have something interesting to show off.
You could have a look at guntactyx; that's what I had to use when I did my AI unit at uni.
It takes care of all the display, physics, sound, etc. for you; all you have to do is program your team of bots.
The API includes functions to make a bot move left or right, shoot, hear sounds (like gun shots), etc., and it comes with a few sample bots so you don't start from scratch.
Also, it's quite fun to watch your bots battling your friends' bots :)

Building a NetHack bot: is Bayesian Analysis a good strategy?

A friend of mine is beginning to build a NetHack bot (a bot that plays the Roguelike game: NetHack). There is a very good working bot for the similar game Angband, but it works partially because of the ease in going back to the town and always being able to scum low levels to gain items.
In NetHack, the problem is much more difficult, because the game rewards ballsy experimentation and is built basically as 1,000 edge cases.
Recently I suggested using some kind of naive Bayesian analysis, in very much the same way spam is detected.
Basically the bot would at first build a corpus by trying every possible action with every item or creature it finds, and storing that information with, for instance, how close to death, injury, or a negative effect it was. Over time it seems like you could generate a reasonably playable model.
Can anyone point us in the right direction of what a good start would be? Am I barking up the wrong tree or misunderstanding the idea of Bayesian analysis?
Edit: My friend put up a github repo of his NetHack patch that allows python bindings. It's still in a pretty primitive state but if anyone's interested...
Although Bayesian analysis encompasses much more, the Naive Bayes algorithm well known from spam filters is based on one very fundamental assumption: all variables are essentially independent of each other. For instance, in spam filtering each word is usually treated as a variable, so this means assuming that if the email contains the word 'viagra', that knowledge does not affect the probability that it will also contain the word 'medicine' (or 'foo' or 'spam' or anything else). The interesting thing is that this assumption is quite obviously false when it comes to natural language, but it still manages to produce reasonable results.
Now, one way people sometimes get around the independence assumption is to define variables that are technically combinations of things (like searching for the token 'buy viagra'). That can work if you know specific cases to look for, but in general, in a game environment, it means that you can't remember anything. So each time you have to move, perform an action, etc., it's completely independent of anything else you've done so far. I would say that even for the simplest games, this is a very inefficient way to go about learning the game.
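To make the assumption concrete, here's a toy spam scorer in Python: each token contributes evidence independently, and the evidence is simply summed (the numbers are invented, standing in for per-token scores learned from labelled mail):

# Per-token evidence; a real filter learns these from labelled data.
TOKEN_SCORE = {"viagra": 2.0, "buy": 0.7, "meeting": -1.5}

def spam_score(tokens):
    # The independence assumption at work: no token influences
    # another, so the evidence just adds up.
    return sum(TOKEN_SCORE.get(t, 0.0) for t in tokens)

print(spam_score(["buy", "viagra"]))     # 2.7  -> probably spam
print(spam_score(["meeting", "notes"]))  # -1.5 -> probably ham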
I would suggest looking into Q-learning instead. Most of the examples you'll find are usually just simple games anyway (like learning to navigate a map while avoiding walls, traps, monsters, etc.). Reinforcement learning is a type of online learning that does really well in situations that can be modeled as an agent interacting with an environment, like a game (or robots). It tries to figure out what the optimal action is at each state in the environment (where each state can include as many variables as needed, much more than just 'where am I'). The trick then is to maintain just enough state to help the bot make good decisions, without having a distinct point in your state 'space' for every possible combination of previous actions.
To put that in more concrete terms: if you were to build a chess bot, you would probably have trouble if you tried to create a decision policy that made decisions based on all previous moves, since the set of all possible combinations of chess moves grows really quickly. Even a simpler model of where every piece is on the board is still a very large state space, so you have to find a way to simplify what you keep track of. But notice that you do get to keep track of some state, so that your bot doesn't just keep trying to make a left turn into a wall over and over again.
The Wikipedia article is pretty jargon-heavy, but this tutorial does a much better job of translating the concepts into real-world examples.
The one catch is that you do need to be able to define rewards to provide as the positive 'reinforcement'. That is, you need to be able to define the states that the bot is trying to get to, otherwise it will just continue forever.
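For reference, the core of tabular Q-learning is only a few lines. This is a generic sketch (alpha, gamma and epsilon are the usual learning-rate, discount and exploration knobs), not anything NetHack-specific:

import random
from collections import defaultdict

Q = defaultdict(float)  # (state, action) -> estimated long-term value

def choose_action(state, actions, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(actions)                 # explore
    return max(actions, key=lambda a: Q[(state, a)])  # exploit

def learn(state, action, reward, next_state, actions,
          alpha=0.5, gamma=0.9):
    # The standard Q-learning update rule.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next
                                   - Q[(state, action)])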
There is precedent: the monstrous rog-o-matic program succeeded in playing Rogue and even returned with the Amulet of Yendor a few times. Unfortunately, Rogue was only released as a binary, not source, so it has died (unless you can set up a 4.3BSD system on a MicroVAX), leaving rog-o-matic unable to play any of the clones. It just hangs because they're not close enough emulations.
However, rog-o-matic is, I think, my favourite program of all time, not only because of what it achieved but because of the readability of the code and the comprehensible intelligence of its algorithms. It used "genetic inheritance": a new player would inherit a combination of preferences from a previous pair of successful players, with some random offset, then be pitted against the machine. More successful preferences would migrate up in the gene pool and less successful ones down.
The source can be hard to find these days, but searching "rogomatic" will set you on the path.
I doubt Bayesian analysis will get you far, because most of NetHack is highly contextual. There are very few actions which are always a bad idea; most are also life-savers in the "right" situation (an extreme example is eating a cockatrice: that's bad, unless you are starving and currently polymorphed into a stone-resistant monster, in which case eating the cockatrice is the right thing to do). Some of those "almost bad" actions are required to win the game (e.g. coming up the stairs on level 1, or deliberately falling into traps to reach Gehennom).
What you could try is working at the "meta" level. Design the bot to choose randomly among a variety of "elementary behaviors", then try to measure how these bots fare, and extract the combinations of behaviors which seem to promote survival; Bayesian analysis could do that over a wide corpus of games along with their "success level". For instance, if there are behaviors "pick up daggers" and "avoid engaging monsters in melee", I would assume the analysis would show that those two behaviors fit well together: bots which pick daggers up without using them, and bots which try to throw missiles at monsters without gathering such missiles, will probably fare worse.
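A crude sketch of that experiment, with run_game standing in for an actual instrumented NetHack session and the behavior names invented:

import random
from collections import Counter

BEHAVIORS = ["pick up daggers", "throw missiles", "avoid melee", "dig down"]

def run_game(behaviors):
    # Placeholder: a real run would return depth reached, turns
    # survived, or whatever success measure you settle on.
    return random.random()

results = []
for _ in range(1000):
    combo = frozenset(random.sample(BEHAVIORS, 2))
    results.append((combo, run_game(combo)))

# Which behavior pairs show up most in the best quarter of games?
results.sort(key=lambda r: r[1], reverse=True)
print(Counter(combo for combo, _ in results[:250]).most_common(3))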
This somehow mimics what learning gamers often ask for in rec.games.roguelike.nethack. Most questions are similar to: "should I drink unknown potions to identify them ?" or "what level should be my character before going that deep in the dungeon ?". Answers to those questions heavily depend on what else the player is doing, and there is no good absolute answer.
A difficult point here is how to measure the success at survival. If you simply try to maximize the time spent before dying, then you will favor bots which never leave the first levels; those may live long but will never win the game. If you measure success by how deep the character goes before dying then the best bots will be archeologists (who start with a pick-axe) in a digging frenzy.
Apparently there are a good number of NetHack bots out there. Check out this listing:
In NetHack, unknown actions usually have a boolean effect - either you gain or you lose. Bayesian networks are based around "fuzzy logic" values - an action may give a gain with a given probability. Hence, you don't need a Bayesian network, just a list of "discovered effects" and whether they are good or bad.
No need to eat the Cockatrice again, is there?
All in all it depends how much "knowledge" you want to give the bot to start with. Do you want him to learn everything "the hard way", or will you feed him spoilers 'till he's stuffed?
