Developing an AI system to pick a fantasy football team - artificial-intelligence

I'm looking to build an AI system to "pick" a fantasy football team. I have only basic knowledge of AI techniques (especially when it comes to game theory), so I am looking for advice on what techniques could be used to accomplish this and pointers to some reading materials.
I am aware that this may be a very difficult or maybe even impossible task for AI to accurately complete; however, I am not too concerned about accuracy. Rather, I am interested in learning some AI, and this seems like a fun way to apply it.
Some basic facts about the game:
A team of 14 players must be picked
There is a limit on the total cost of players picked
The players picked must adhere to a certain configuration (there must always be one goalkeeper, at least two defenders, one midfielder and one forward)
The team may be altered on a weekly basis, but removing/adding more than one player a week will incur a penalty
P.S. I have stats on every match played last season; could these be used to train the AI system?

This is interesting.
So if you didn't really care about accuracy at all, you could just come up with some heuristic for the quality of a team. For instance, assign a point value to each player and then try to maximize it using dynamic programming. Something like: http://www.cse.unl.edu/~goddard/Courses/CSCE310J/Lectures/Lecture8-DynamicProgramming.pdf
This would be similar to the knapsack problem.
Technically this is AI since a computer is deciding something but maybe not what you had in mind.
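To make the knapsack analogy concrete, here is a minimal sketch of that kind of dynamic program in Java. Everything in it is invented for illustration (player names, costs, projected-points values, the budget, a tiny team size), and it ignores the position constraints; those could be added as extra dimensions of the DP state or handled by picking each position separately.

import java.util.*;

public class FantasyKnapsack {
    public static void main(String[] args) {
        // Invented example data: cost in arbitrary budget units, value = projected points.
        String[] name = {"GK A", "DF B", "DF C", "MF D", "FW E", "MF F"};
        int[] cost    = {   40,     55,     50,     70,     90,     60};
        int[] value   = {   90,    110,    100,    150,    180,    120};
        int budget = 250;
        int teamSize = 4;   // tiny team for the toy example; the real game wants 14

        int n = name.length;
        final int NEG = Integer.MIN_VALUE / 2;
        // dp[i][k][b] = best points using players 0..i-1, exactly k of them picked, total cost <= b
        int[][][] dp = new int[n + 1][teamSize + 1][budget + 1];
        for (int[][] a : dp) for (int[] row : a) Arrays.fill(row, NEG);
        for (int b = 0; b <= budget; b++) dp[0][0][b] = 0;

        for (int i = 0; i < n; i++)
            for (int k = 0; k <= teamSize; k++)
                for (int b = 0; b <= budget; b++) {
                    dp[i + 1][k][b] = dp[i][k][b];                         // skip player i
                    if (k > 0 && b >= cost[i] && dp[i][k - 1][b - cost[i]] != NEG)
                        dp[i + 1][k][b] = Math.max(dp[i + 1][k][b],        // or take player i
                                dp[i][k - 1][b - cost[i]] + value[i]);
                }

        // Walk the table backwards to recover which players were picked.
        int k = teamSize, b = budget;
        List<String> picked = new ArrayList<>();
        for (int i = n; i > 0 && k > 0; i--)
            if (dp[i][k][b] != dp[i - 1][k][b]) {                          // player i-1 was taken
                picked.add(name[i - 1]);
                b -= cost[i - 1];
                k--;
            }

        System.out.println("Best projected points: " + dp[n][teamSize][budget]);
        System.out.println("Picked: " + picked);
    }
}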
You sound like you want a learning AI (http://en.wikipedia.org/wiki/Machine_learning) which is an interesting field. Here's how you can approach the problem.
Define your inputs. Right now you have last year's data. You'll probably want data on many years. Also, you might be able to include the rankings of pundits; maybe a bunch of magazines rank players or something, which seems useful as well.
Take your inputs and feed them into some machine learning algorithm for each season. Wikipedia will help you out there.
Essentially, for each season you'll want to feed in your data, have your AI pick a team, and then rate the performance of the team based on that season's results.
Keep doing this and maybe your bot will get better at picking teams, and you can apply it to this year's data.
(If you only have last year's data, it's okay to train the algorithm with just that, but your AI will probably be overtrained on that one set and won't be as accurate.)
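As a very rough sketch of what "feed in your data and learn" could look like at the simplest end, here is a tiny linear model fit by gradient descent on invented player features (last season's points, minutes played) to predict next-season points; its predictions could then feed a team picker like the knapsack sketch above. A real system would use far more data and features, or a library, rather than hand-rolled SGD.

import java.util.*;

public class PlayerPointsModel {
    public static void main(String[] args) {
        // Invented training rows: {last season's points, minutes played / 100} -> points this season
        double[][] x = {{120, 30}, {80, 25}, {150, 34}, {40, 12}, {95, 28}};
        double[] y   = { 115,       90,       140,       35,       100 };

        double[] w = new double[x[0].length];
        double bias = 0.0, lr = 1e-5;

        for (int epoch = 0; epoch < 20000; epoch++)
            for (int i = 0; i < x.length; i++) {
                double pred = bias;
                for (int j = 0; j < w.length; j++) pred += w[j] * x[i][j];
                double err = pred - y[i];
                for (int j = 0; j < w.length; j++) w[j] -= lr * err * x[i][j];  // SGD step on squared error
                bias -= lr * err;
            }

        System.out.println("weights = " + Arrays.toString(w) + ", bias = " + bias);
        double[] newPlayer = {110, 29};   // a hypothetical player to score
        System.out.println("predicted points: " + (bias + w[0] * newPlayer[0] + w[1] * newPlayer[1]));
    }
}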
This was just a sketch of how it might look. For a romp into AI, this problem is probably pretty hard so don't feel disheartened if it seems overwhelming at first.

Related

Can "Monte-Carlo Tree Search" be applied on a "two player game with imperfect information" like Stratego?

I want to develop a two player game with imperfect information - "Stratego".
The game is "somewhat" like chess but initially we don't know anything about the ranks of the opponent's pieces. When a piece attacks or is attacked by some opponent's piece, their ranks are revealed and the higher rank piece kills/captures the lower rank piece.
More detail on the game can be found here.
I did a little research. I read "Opponent Modeling in Stratego" by J.A. Stankiewicz. But I couldn't find a complete tutorial on how to develop the game. I have previously developed a two-player game, "Othello" a.k.a. Reversi, and I'm familiar with the MINIMAX algorithm and alpha-beta pruning.
I found somewhere that Monte-Carlo Tree Search is also used in developing zero-sum two-player games. Can it be used for games like Stratego? Can I get a complete tutorial for the same?
Any other tutorial not involving Monte-Carlo Tree Search would also be useful :)
I think MCTS would have a difficult time in Stratego since the space of possible initial setups is so large, while the best play is very dependent on the ground truth of the game. That is to say, MCTS would, in the best case, give you a play that's statistically good amongst all the possible variations of your opponent's pieces, but the best next move is highly dependent on which particular variation they've chosen.
I'm still developing a solid understanding of MCTS, but it seems to me that MCTS does not do well in games where multi-round deceptive play involving hidden information is important (poker, canonically, but stratego, I would say, also). In such games, you really need to develop a model of the other player(s) situation/strategy and MCTS by its nature is going to give you an answer that is statistically related to all trees, not just the ground-truth tree.
MCTS works fine with games involving large amounts of chance (backgammon and other board games involving dice and many card games) and seems to me an excellent general-purpose solution that could be rapidly adapted to a large number of modern "European-style" board games. (The interesting thing with those is that although they involve "deceptive strategy" they generally involve relatively little hidden information.)
I don't know of any MCTS for incomplete information off the top of my head, and it seems like it would take substantial modification to the algorithm to get it to work.
Even in a very restricted type of Stratego where there are only ten pieces on each side, only two types of piece, and only one of the "stronger" piece, you're still playing one of ten possible actual games. In a full game of Stratego there is far more uncertainty than that because of the large number of combinations of starting positions, which all look alike.
It seems like you would also have to augment the algorithm to capture "revealed knowledge," as it happens, e.g., in our toy example, every encounter between pieces reveals some information about the enemy position.
It seems like it would be interesting to try, but only for a very restricted Stratego-like problem at first, and with the understanding that off the shelf MCTS is not sufficient and that you'd have to think carefully and deeply about the right extensions to the algorithm.
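One common way to bolt hidden information onto a sampling search is determinization: repeatedly sample a concrete assignment of the opponent's unknown pieces that is consistent with everything revealed so far, evaluate candidate moves against each sample, and average. The sketch below illustrates only that sampling-and-averaging idea on the restricted ten-piece toy game described above (one hidden "strong" piece, invented position values); it is not full MCTS and not an off-the-shelf algorithm from any library.

import java.util.*;

public class DeterminizationSketch {
    public static void main(String[] args) {
        int n = 10;
        double[] positionValue = {1, 2, 3, 4, 5, 5, 4, 3, 2, 1};   // invented value of attacking each position
        boolean[] knownWeak = new boolean[n];
        knownWeak[4] = true;    // "revealed knowledge": earlier encounters showed these are weak
        knownWeak[7] = true;

        List<Integer> possiblyStrong = new ArrayList<>();
        for (int i = 0; i < n; i++) if (!knownWeak[i]) possiblyStrong.add(i);

        Random rng = new Random(42);
        int samples = 10000;
        double[] total = new double[n];
        for (int s = 0; s < samples; s++) {
            // One determinization: pick a concrete location for the hidden strong piece,
            // consistent with what has been revealed so far.
            int strong = possiblyStrong.get(rng.nextInt(possiblyStrong.size()));
            for (int move = 0; move < n; move++)
                total[move] += (move == strong) ? -10 : positionValue[move];
        }

        int best = 0;
        for (int move = 1; move < n; move++)
            if (total[move] > total[best]) best = move;
        System.out.printf("Best attack target: %d (average value %.2f)%n", best, total[best] / samples);
    }
}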

How do I pick a good representation for a board game tactic for a genetic algorithm?

For my bachelor's thesis I want to write a genetic algorithm that learns to play the game of Stratego (if you don't know this game, it's probably safe to assume I said chess). I haven't ever before done actual AI projects, so it's an eye-opener to see how little I actually know of implementing things.
The thing I'm stuck with is coming up with a good representation for an actual strategy. I'm probably making some thinking error, but some problems I encounter:
I don't assume you would have a representation containing a lot of transitions between board positions, since that would just be brute-forcing it, right?
What could branches of a decision tree look like? Any representation I come up with doesn't have interchangeable branches... If I were to use a bit string, which is apparently also common, what would the bits represent?
Do I assign scores to the distance between certain pieces? How would I represent that?
I think I ought to know these things after three-plus years of study, so I feel pretty stupid - this must look like I have no clue at all. Still, any help or tips on what to Google would be appreciated!
I think you could define a decision model and then try to optimize the parameters of that model. You can also create multi-stage decision models. I once did something similar for solving a dynamic dial-a-ride problem (paper here) by modeling it as a two-stage linear decision problem. To give you an example, you could:
For each of your pieces, decide which one is to move next. Each piece is characterized by certain features derived from its position on the board, e.g. ability to make a capture, danger, protecting x other pieces, and so on. Each of these features can be combined (e.g. in a linear model, through a neural network, through a symbolic expression tree, a decision tree, ...) and give you a ranking of which piece to act with next.
Act with the piece you selected. Again there are a certain number of actions that can be taken, each with certain features. Again you can combine and rank them, and one action will have the highest priority. This is the one you choose to perform.
The features you extract can be very simple or insanely complex; it's a trade-off between what you think will work best and how long it takes to compute.
To evaluate and improve the quality of your decision model you can then simulate these decisions in several games against opponents and train the parameters of the model that combines these features to rank the moves (e.g. using a GA). This way you tune the model to win as many games as possible against the specified opponents. You can test the generality of that model by playing against opponents it has not seen before.
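A minimal sketch of the first stage of such a decision model, with invented feature names: each piece gets a feature vector, a linear model scores it, and the highest-scoring piece is the one to act with. The weight vector is exactly the kind of parameter set a GA would tune by simulating games.

public class PieceRanker {
    // In a GA, the weight vector would be the genome being evolved.
    static double score(double[] features, double[] weights) {
        double s = 0;
        for (int i = 0; i < features.length; i++) s += features[i] * weights[i];
        return s;
    }

    public static void main(String[] args) {
        // Invented features per piece: {attack opportunity, danger, friendly pieces protected}
        double[][] pieces = {
            {0.8, 0.2, 1.0},   // piece 0
            {0.1, 0.9, 3.0},   // piece 1
            {0.5, 0.5, 0.0},   // piece 2
        };
        double[] weights = {1.5, -2.0, 0.4};   // one candidate genome

        int best = 0;
        for (int p = 1; p < pieces.length; p++)
            if (score(pieces[p], weights) > score(pieces[best], weights)) best = p;

        System.out.println("Act with piece " + best + " (score " + score(pieces[best], weights) + ")");
        // A second, identical ranking step over that piece's legal actions would then pick the action.
    }
}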
As Mathew Hall just said, you can use GP for this (if your model is a complex rule), but this is just one kind of model. In my case a linear combination of the weights did very well.
Btw, if you're interested, we've also got software for heuristic optimization which provides GA, GP and related algorithms. It's called HeuristicLab. It's GPL and open source, and comes with a GUI (Windows). We have a how-to on evaluating the fitness function in an external program (data exchange using protocol buffers), so you can work on your simulation and your decision model and let the algorithms in HeuristicLab optimize your parameters.
Vincent,
First, don't feel stupid. You've been (I infer) studying basic computer science for three years; now you're applying those basic techniques to something pretty specialized: a particular application (Stratego) in a narrow field (artificial intelligence).
Second, make sure your advisor fully understands the rules of Stratego. Stratego is played on a larger board, with more pieces (and more types of pieces) than chess. This gives it a vastly larger space of legal positions, and a vastly larger space of legal moves. It is also a game of hidden information, increasing the difficulty yet again. Your advisor may want to limit the scope of the project, e.g., concentrate on a variant with full observation. I don't know why you think this is simpler, except that the moves of the pieces are a little simpler.
Third, I think the right thing to do at first is to take a look at how games in general are handled in the field of AI. Russell and Norvig, chapters 3 (for general background) and 5 (for two player games) are pretty accessible and well-written. You'll see two basic ideas: One, that you're basically performing a huge search in a tree looking for a win, and two, that for any non-trivial game, the trees are too large, so you search to a certain depth and then cop out with a "board evaluation function" and look for one of those. I think your third bullet point is in this vein.
The board evaluation function is the magic, and probably a good candidate for using either a genetic algorithm, or a genetic program, either of which might be used in conjunction with a neural network. The basic idea is that you are trying to design (or evolve, actually) a function that takes as input a board position, and outputs a single number. Large numbers correspond to strong positions, and small numbers to weak positions. There is a famous paper by Chellapilla and Fogel showing how to do this for a game of Checkers:
http://library.natural-selection.com/Library/1999/Evolving_NN_Checkers.pdf
I think that's a great paper, tying three great strands of AI together: Adversarial search, genetic algorithms, and neural networks. It should give you some inspiration about how to represent your board, how to think about board evaluations, etc.
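For a feel of what "board position in, single number out" means at its most basic, here is a toy evaluation function: a weighted material count over an invented board encoding. It is far simpler than the evolved neural network in the paper, but the role is the same; the weights (or a network replacing the sum) are what a GA would evolve, and the function is what a depth-limited minimax search would call at its leaves.

public class BoardEval {
    // Invented encoding: positive = my pieces, negative = opponent's, 0 = empty;
    // the magnitude is the piece type (1 = weak ... 3 = strong).
    static double evaluate(int[] board, double[] pieceWeight) {
        double score = 0;
        for (int sq : board) {
            if (sq > 0) score += pieceWeight[sq];        // my material
            else if (sq < 0) score -= pieceWeight[-sq];  // opponent's material
        }
        return score;   // large = good for me, small = good for the opponent
    }

    public static void main(String[] args) {
        int[] board = {1, 0, 2, -1, 0, -3, 0, 1};        // toy 8-square "board"
        double[] pieceWeight = {0, 1.0, 3.0, 5.0};       // indexed by piece type; a GA would evolve these
        System.out.println("evaluation = " + evaluate(board, pieceWeight));
    }
}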
Be warned, though, that what you're trying to do is substantially more complex than Chellapilla and Fogel's work. That's okay; it's 13 years later, after all, and you'll be at this for a while. You're still going to have a problem representing the board, because the AI player has imperfect knowledge of its opponent's state; initially, nothing is known but positions, but eventually, as pieces are eliminated in conflict, one can start using first-order logic or related techniques to narrow down individual pieces, and possibly even probabilistic methods to infer information about the whole set. (Some of these may be beyond the scope of an undergrad project.)
The fact you are having problems coming up with a representation for an actual strategy is not that surprising. In fact I would argue that it is the most challenging part of what you are attempting. Unfortunately, I haven't heard of Stratego so being a bit lazy I am going to assume you said chess.
The trouble is that a chess strategy is rather a complex thing. You suggest in your question a representation containing lots of transitions between board positions in the GA, but a chess board has more possible positions than there are atoms in the universe, so this is clearly not going to work very well. What you will likely need to do is encode in the GA a series of weights/parameters that are attached to something that takes in the board position and fires out a move; I believe this is what you are hinting at in your second suggestion.
Probably the simplest suggestion would be to use some sort of generic function approximator like a neural network; perceptrons and radial basis functions are two possibilities. You can encode weights for the various nodes into the GA, although there are other fairly sound ways to train a neural network; see backpropagation. You could perhaps encode the network structure instead or as well; this also has the advantage that a fair amount of research has been done into evolving neural networks with genetic algorithms, so you wouldn't be starting completely from scratch.
You still need to come up with how you are going to present the board to the neural network and interpret the result from it. In particular, with chess you have to take note that a lot of moves will be illegal. It would be very beneficial if you could encode the board and interpret the result such that only legal moves are presented. I would suggest implementing the mechanics of the system and then playing around with different board representations to see what gives good results. A few top-of-the-head ideas to get you started, although I am not really convinced any of them are especially great ways to do this (a rough sketch of the first one follows the list below):
A bit string with all 64 squares one after another, with a number representing what is present in each square. Most obvious, but probably a rather bad representation, as a lot of work will be required to filter out illegal moves.
A bit string with all 64 squares one after another, with a number representing what can move to each square. This has the advantage of embodying the covering concept of chess, where you want to gain as much coverage of the board with your pieces as possible, but it still has problems with illegal moves and dealing with friendly/enemy pieces.
A bit string with all 32 pieces one after another, with a number representing the location of each piece on the board.
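As a rough sketch of the first idea in the list above: flatten the 64 squares into a numeric input vector, one-hot encoded per square so that empty, friendly-piece-of-type-t and enemy-piece-of-type-t map to distinct inputs. The piece codes are invented, and nothing here filters illegal moves, which (as noted) is the hard part.

public class BoardEncoding {
    static final int TYPES = 6;   // assumed number of piece types, e.g. pawn..king
    // board[r][c]: 0 = empty, +t = friendly piece of type t, -t = enemy piece of type t (invented encoding)
    static double[] encode(int[][] board) {
        double[] input = new double[64 * (2 * TYPES + 1)];
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++) {
                int sq = r * 8 + c, piece = board[r][c];
                int slot;
                if (piece == 0) slot = 0;                 // empty
                else if (piece > 0) slot = piece;         // friendly: slots 1..TYPES
                else slot = TYPES - piece;                // enemy: slots TYPES+1..2*TYPES
                input[sq * (2 * TYPES + 1) + slot] = 1.0;
            }
        return input;
    }

    public static void main(String[] args) {
        int[][] board = new int[8][8];
        board[0][4] = 6;      // a friendly type-6 piece
        board[7][4] = -6;     // the enemy's counterpart
        System.out.println("network input length = " + encode(board).length);   // 64 * 13 = 832
    }
}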
In general, though, I would suggest that chess is rather a complex game to start with; I think it will be rather hard to get something playing to a standard which is noticeably better than random. I don't know if Stratego is any simpler, but I would strongly suggest you opt for a fairly simple game. This will let you focus on getting the mechanics of the implementation correct and the representation of the game state.
Anyway hope that is of some help to you.
EDIT: As a quick addition, it is worth looking into how standard chess AIs work; I believe most use some sort of minimax search.
When you say "tactic", do you mean you want the GA to give you a general algorithm to play the game (i.e. evolve an AI) or do you want the game to use a GA to search the space of possible moves to generate a move at each turn?
If you want to do the former, then look into using Genetic programming (GP). You could try to use it to produce the best AI you can for a fixed tree size. JGAP already comes with support for GP as well. See the JGAP Robocode example for an instance of this. This approach does mean you need a domain specific language for a Stratego AI, so you'll need to think carefully how you expose the board and pieces to it.
Using GP means your fitness function can just be how well the AI does at a fixed number of pre-programmed games, but that requires a good AI player to start with (or a very patient human).
@DonAndre's answer is absolutely correct for movement. In general, problems involving state-based decisions are hard to model with GAs, requiring some form of GP (either explicit or, as @DonAndre suggested, trees that are essentially declarative programs).
A general Stratego player seems to me quite challenging, but if you have a reasonable Stratego playing program, "Setting up your Stratego board" would be an excellent GA problem. The initial positions of your pieces would be the phenotype and the outcome of the external Stratego-playing code would be the fitness. It is intuitively likely that random setups would be disadvantaged versus setups that have a few "good ideas" and that small "good ideas" could be combined into fitter-and-fitter setups.
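A compact sketch of that setup-evolution idea: the genome is the assignment of pieces to starting squares, mutation swaps two squares, and fitness would come from handing the setup to the external Stratego-playing program and counting wins. No such program exists here, so the fitness below is a deliberately dumb stand-in (it only rewards keeping the flag in the "back row") purely to make the loop run; the loop itself is a bare (1+1)-style hill climber rather than a full GA with a population and crossover.

import java.util.*;

public class SetupEvolver {
    static final Random RNG = new Random(1);
    static final String[] PIECES =     // toy piece list, not the real Stratego inventory
        {"FLAG", "BOMB", "BOMB", "SPY", "10", "9", "8", "7", "SCOUT", "SCOUT"};

    // Placeholder fitness; in reality: play N games with this setup via the external player and return the win rate.
    static double fitness(String[] setup) {
        int flagIndex = Arrays.asList(setup).indexOf("FLAG");
        return (flagIndex < 5) ? 1.0 : 0.0;   // "back row" = first 5 slots in this toy layout
    }

    static String[] mutate(String[] setup) {
        String[] child = setup.clone();
        int a = RNG.nextInt(child.length), b = RNG.nextInt(child.length);
        String tmp = child[a]; child[a] = child[b]; child[b] = tmp;   // swap two squares
        return child;
    }

    public static void main(String[] args) {
        String[] best = PIECES.clone();
        Collections.shuffle(Arrays.asList(best), RNG);     // random initial setup
        double bestFitness = fitness(best);
        for (int gen = 0; gen < 1000; gen++) {             // simple (1+1) hill climb, not a full GA
            String[] child = mutate(best);
            double f = fitness(child);
            if (f >= bestFitness) { best = child; bestFitness = f; }
        }
        System.out.println("best setup: " + Arrays.toString(best) + "  fitness = " + bestFitness);
    }
}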
...
On the general problem of what a decision tree might look like: even trying to come up with a simple example, I kept finding it hard to keep it small enough, but maybe consider the case where you are evaluating whether to attack a same-ranked piece (which, IIRC, destroys both your piece and the other piece?):
double locationNeed = aVeryComplexDecisionTree();
if (thatRank == thisRank) {
    double sacrificeWillingness = SACRIFICE_GENETIC_BASE;   // genetically tuned, assume range 0.0 - 1.0
    double sacrificeNeed = anotherComplexTree();             // 0.0 - 1.0
    double sacrificeInContext = sacrificeNeed * SACRIFICE_NEED_GENETIC_DISCOUNT; // 0.0 - 1.0
    if (sacrificeInContext > 1.0 - sacrificeWillingness) {   // need outweighs this piece's reluctance
        // ...OK, this piece is "willing" to sacrifice itself
    }
}
One way or the other, the basic idea is that you'd still have a lot of coding of Stratego-play, you'd just be seeking places where you could insert parameters that would change the outcome. Here I had the idea of a "base" disposition to sacrifice itself (presumably higher in common pieces) and a "discount" genetically-determined parameter that would weight whether the piece would "accept or reject" the need for a sacrifice.

Artificial Intelligence - Intelligence Agent that cleans and paints

I remember when I was in college we went over some problem where there was a smart agent that was on a grid of squares and it had to clean the squares. It was awarded points for cleaning. It also was deducted points for moving. It had to refuel every now and then and at the end it got a final score based on how many squares on the grid were dirty or clean.
I'm trying to study that problem since it was very interesting when I saw it in college, however I cannot find anything on wikipedia or anywhere online. Is there a specific name for that problem that you know about? Or maybe it was just something my teacher came up with for the class.
I'm searching for "AI cleaning agent" and similar things, but I can't find anything. I don't know; I'm thinking maybe it has some other name.
If you know where I can find more information about this problem I would appreciate it. Thanks.
Perhaps a "stigmergy" approach is closely related to your problem. There is a starting point here, and you can find something by searching for "dead ants" and "robots" on google scholar.
Basically: instead of modelling a precise strategy you work toward a probabilistic approach. Ants (probably) collect their deads by piling up according to a simple rule such as "if there is a pile of dead ants there, I bring this corpse hither; otherwise, I'll make a new pile". You can start by simplifying your 'cleaning' situation with that, and see where you go.
Also, I think (another?) suitable approach could be modelled with a Genetic Algorithm using a carefully chosen combination of fitness functions such as:
the end number of 'clean' tiles
the number of steps made by the robot
of course, if the robot 'dies' of starvation it automatically removes itself from the gene pool, à la Darwin Awards :)
You could start by modelling a very, very simple genotype that will be 'computed' into a behaviour. Consider using a simple GA such as this one by Inman Harvey, then to each gene assign either a part of the strategy or a complete behaviour. E.g.: if gene A is turned to 1 then the robot will try to wander randomly; if gene B is also turned to 1, then it will give priority to self-charging unless there are dirty tiles at distance X. Or use floats and model probability. Your mileage may vary, but I can assure you it will be fun :)
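A toy sketch of that genotype-to-behaviour mapping: a few genes (booleans plus one gene-controlled threshold, all invented) decode into a simple policy. A GA would evolve the genome, with fitness taken from simulating the robot on a grid and scoring clean tiles, steps taken and starvation as described above.

public class CleaningGenome {
    // Invented genome layout: [0] wander when no dirt nearby, [1] prefer recharging, [2] high recharge threshold
    static String decide(boolean[] genes, double battery, boolean dirtHere, int nearestDirtDistance) {
        double rechargeThreshold = genes[2] ? 0.5 : 0.2;          // gene-controlled threshold
        if (dirtHere) return "CLEAN";
        if (genes[1] && battery < rechargeThreshold) return "GO_TO_CHARGER";
        if (genes[0] && nearestDirtDistance > 3) return "WANDER";
        return "MOVE_TOWARD_DIRT";
    }

    public static void main(String[] args) {
        boolean[] genome = {true, true, false};                   // one candidate from the GA
        System.out.println(decide(genome, 0.15, false, 5));       // GO_TO_CHARGER
        System.out.println(decide(genome, 0.90, false, 5));       // WANDER
        System.out.println(decide(genome, 0.90, true, 0));        // CLEAN
    }
}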
The problem is reminiscent of Shakey, although there's cleaning involved (which is like the Roomba, a device that can also be programmed to perform these very tasks).
If the "problem space" (or room) is small enough, you can solve for an optimal solution using a simple A*-based search, but likely it won't be, since small spaces don't make for very interesting problems.
The machine learning approach suggested here, using genetic algorithms, is interesting. Given the problem domain you would only have one "rule" (a move-to action, since cleaning could be handled implicitly by cleaning any dirty square you move to), so your learner would essentially be learning how to move around an environment. The problem there would be to build a learner that would be adaptable to any given floor plan, instead of just becoming proficient at cleaning a very specific space.
Whatever approach you take, I'd also consider doing a further meta-reasoning step if the problem sets are big enough, and use a partition approach to divide the floor up into separate areas, then conquer them one at a time.
Can you use techniques to create data to use "offline"? In that case, I'd even consider creating a "database" of optimal routes to take to clean certain floor spaces (1x1 up to, say, 5x5) that include all possible start and end squares. This is similar to "endgame databases" that game AIs use to effectively "solve" games once they reach a certain depth (cf. Chinook).
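The "solve small floor plans exactly" idea can be sketched as a breadth-first search over states of (robot position, set of still-dirty tiles), where moving onto a dirty tile implicitly cleans it. For grids up to roughly 5x5 with a handful of dirty tiles the state space is tiny, so optimal step counts could be precomputed and stored, much like an endgame database. The grid size and dirt layout below are invented.

import java.util.*;

public class OptimalCleaning {
    public static void main(String[] args) {
        int w = 4, h = 3;                         // invented toy grid
        int[] dirty = {1, 6, 10};                 // tile indices (row * w + col) that start dirty
        int start = 0;

        Map<Integer, Integer> dirtBit = new HashMap<>();
        for (int i = 0; i < dirty.length; i++) dirtBit.put(dirty[i], i);
        int fullMask = (1 << dirty.length) - 1;

        // state = position * (fullMask + 1) + bitmask of still-dirty tiles
        int startMask = dirtBit.containsKey(start) ? fullMask & ~(1 << dirtBit.get(start)) : fullMask;
        int[] dist = new int[w * h * (fullMask + 1)];
        Arrays.fill(dist, -1);
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        int s0 = start * (fullMask + 1) + startMask;
        dist[s0] = 0;
        queue.add(s0);
        int[] dr = {1, -1, 0, 0}, dc = {0, 0, 1, -1};

        while (!queue.isEmpty()) {
            int state = queue.poll();
            int pos = state / (fullMask + 1), mask = state % (fullMask + 1);
            if (mask == 0) {                      // everything cleaned; BFS guarantees this is optimal
                System.out.println("All tiles cleaned in " + dist[state] + " moves.");
                return;
            }
            int r = pos / w, c = pos % w;
            for (int d = 0; d < 4; d++) {
                int nr = r + dr[d], nc = c + dc[d];
                if (nr < 0 || nr >= h || nc < 0 || nc >= w) continue;
                int npos = nr * w + nc;
                int nmask = dirtBit.containsKey(npos) ? mask & ~(1 << dirtBit.get(npos)) : mask;
                int next = npos * (fullMask + 1) + nmask;
                if (dist[next] == -1) {
                    dist[next] = dist[state] + 1;
                    queue.add(next);
                }
            }
        }
        System.out.println("Some dirt is unreachable.");
    }
}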
This problem reminds me of this. A similar problem is briefly mentioned in the book Complexity as an example of a genetic algorithm. These versions are simplified, though; they don't take fuel consumption into account.

Detecting an online poker cheat

It recently emerged on a large poker site that some players were possibly able to see all opponents' cards as they played, through exploiting a security vulnerability that was discovered.
A naïve cheater would win at an incredibly fast rate; such cheats are usually caught very quickly, and if not caught quickly they are easy to detect through a quick scan of their hand histories.
The more difficult problem occurs when the cheater exhibits intelligence: bluffing in spots where they are bound to be called, calling river bets with the worst hands, and so on. The basic premise is that they lose pots on purpose to disguise their ability to see other players' cards, and they win at a reasonably realistic rate.
Given:
A data set of millions of verified and complete information hand histories
Theoretical unlimited computer power
Assume the game No Limit Hold'em, although suggestions on Omaha or limit poker may be beneficial
How could we reasonably accurately classify these cheaters? The original 2+2 thread appeals for ideas, and I thought that the SO community might have some useful suggestions.
It's an interesting problem also because it is current and has real application in bettering the world if someone finds a creative solution, as there is a good chance genuine players will have funds refunded to them once cheaters are identified.
Plot VPIP versus win rate for all players with a statistically significant number of hands played. You should see outliers with the naked eye. I think that's the basic thing to do first.
Then you can plot WTSD vs win rate, win rate at showdown vs win rate without showdown, % won at showdown vs VPIP.
The stats you choose must be statistically significant. If you know poker, the above choices make sense.
This is not a job for a machine; outliers are detected by eye.
EDIT: Omaha is much tougher, since it has really high variance. There are cases of unbelievable streaks made by weak players who were not cheating.
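A sketch of the crude first pass this suggests: restrict to players with a meaningful number of hands, then flag win rates that sit well above the population. The numbers, the minimum-hands cutoff and the z-score threshold are all invented; in practice you would plot VPIP against win rate and eyeball it, as the answer says.

import java.util.*;

public class WinRateOutliers {
    public static void main(String[] args) {
        // Invented sample: {hands played, win rate in bb/100}
        double[][] players = {{120000, 2.1}, {90000, -1.4}, {200000, 4.0},
                              {150000, 1.0}, {110000, 25.0}, {3000, 40.0}};
        int minHands = 50000;                     // ignore small, noisy samples

        List<double[]> eligible = new ArrayList<>();
        for (double[] p : players) if (p[0] >= minHands) eligible.add(p);

        double mean = 0;
        for (double[] p : eligible) mean += p[1];
        mean /= eligible.size();
        double variance = 0;
        for (double[] p : eligible) variance += (p[1] - mean) * (p[1] - mean);
        double sd = Math.sqrt(variance / eligible.size());

        for (double[] p : eligible) {
            double z = (p[1] - mean) / sd;
            if (z > 1.5)                          // arbitrary cutoff for "well above the pack"
                System.out.printf("Suspicious: %.0f hands at %.1f bb/100 (z = %.1f)%n", p[0], p[1], z);
        }
    }
}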
I hate to be so blunt, but all the answers on this page with the exception of @Erwin Smout's are worthless.
Statistical analysis is a joke for identifying poker cheats
I realize the question allows there to be millions of hands' worth of history available to the system. I'm sure there are players with hand histories this large; hell, I've probably played this many online hands. But I've also been playing online for over 10 years. That's not a small amount of time, and it is my understanding that two conflicting things are true when it comes to identifying online poker cheaters: it needs to happen in a small amount of time, and, like any good thief, an online poker cheat is going to take his stash elsewhere immediately after the taking.
There was a great example of the variance in poker in this paper, which was generated by matching an always-raise player versus an always-call player (page 13 of the PDF). Over the course of 100,000 hands, way more than I think most people would be willing to play against someone who could see their cards, the always-call player won on average .026 small blinds per hand. I know this does not sound like much, but assuming stakes of $5-10, that comes out to $6,500. Maybe someone can help me find the link, but the measured professional win rate is not too much larger than this. Please note, NEITHER of these players was cheating, and the statistically expected difference over this number of hands is significantly less than what actually transpired.
What online poker players need to understand
Poker is gambling. It is a game of skill, because some players are able to elicit more information from their opponents than their opponents are able to gather, and that extra information is often as useful as seeing other people's cards. Even players who are better than their typical opponents can end up long-term losers. If you do not understand this, you're just searching for witches with statistics in the arbitrarily small number of hands you'll be playing against any opponent.
What can be done?
Keeping in mind the question states that cheaters are able to see the other players' cards, you don't need statistical analysis to identify them. There are only three ways in which that is possible.
First is that the server is sending the information intentionally to clients, which is an obvious security issue and should not be implemented (IMO, even for moderators). If a site was found allowing this to happen, it is the players' responsibility to move their funds elsewhere, or refuse to play on the site until that terrible design decision is rectified. It should also be the responsibility of the sites to inform their players of the exact steps that take place during hands played on the site, so they have that information when choosing a site in the first place. Security by obscurity is not acceptable. As for catching the thieves, this information should be sitting in log files on their servers, which should be regularly audited for this type of behavior.
Second is that the user has hacked the poker server. They would know about that in a hurry, or else, once it is exposed, it is again the players' responsibility to decide where to play. In this case, the cheater can be prosecuted in most countries.
Lastly, it is possible the dealing algorithm has been cracked. This was a major problem in the past with companies that used naive methods to deal hands, but most of the major shops solved it by taking random inputs from players logged into their system as well as using entropy-generating hardware to seed their random number generator. That's not to say it cannot be cracked, however. If this is the case, the only option is for the company to engineer a new random number generator.
Well. IT people get fascinated by all kinds of wrong questions.
A better question is "how is cheating even possible?". There is no need whatsoever to send the opponents' hands over the wire until showdown. If that data isn't sent to the client, then how could they cheat?
They'd need to break into the server. Don't tell me that isn't preventable.
I think if they cheat intelligently, i.e. without winning too many rounds, it won't be detectable. I don't believe you could tell the difference between luck and cheating here.
But I would like to know at which online poker provider the cheating was possible, because I can't imagine a way to do this if the poker software is coded properly. If I were asked to program online poker software, the users wouldn't be able to see their opponents' cards, because there would be no way for them to get this information. And this is how I would do it:
Every connection between users and the server is encrypted
No communication between users; the users can only talk to the server.
The server tells every user only the cards that user should see, and no other cards, unless the round is finished and the users reveal their cards.
The only way users could cheat here is to get together with other players, or impersonate multiple players with different accounts and access IPs, and open another channel to communicate between the players. This way the group has a big advantage because they know more than their own cards, but there's still no way they can see other cards. And because it's now a group that is cheating, it is even harder to detect, because they can share their earnings among multiple players, and this group could even include a player who loses more than he or she gains and still win overall.
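A toy sketch of the dealing principle in the list above: the server shuffles its own deck and tells each client only that client's hole cards, so opponents' cards never leave the server before showdown. The "send" here is just a print standing in for an encrypted per-player channel.

import java.security.SecureRandom;
import java.util.*;

public class PrivateDeal {
    public static void main(String[] args) {
        List<String> deck = new ArrayList<>();
        for (String rank : "2 3 4 5 6 7 8 9 T J Q K A".split(" "))
            for (String suit : new String[]{"c", "d", "h", "s"}) deck.add(rank + suit);
        Collections.shuffle(deck, new SecureRandom());            // server-side shuffle only

        String[] players = {"alice", "bob", "carol"};
        Map<String, List<String>> holeCards = new HashMap<>();    // never leaves the server
        int next = 0;
        for (String p : players)
            holeCards.put(p, Arrays.asList(deck.get(next++), deck.get(next++)));

        for (String p : players)
            sendPrivately(p, holeCards.get(p));                   // each client learns only its own cards
    }

    static void sendPrivately(String player, List<String> cards) {
        System.out.println("to " + player + " only: " + cards);   // stand-in for an encrypted per-player channel
    }
}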
I doubt you can say with any certainty if someone is cheating or if they are just good at Poker, past a certain point.
You could, however, narrow down the candidates you think might be cheating by looking at the users who benefited overall during your time period. This will remove the vast majority of users, allowing you to focus your resources better. (This of course will include users who are simply skilled at poker.)
Once you've done that, you can compare the history of play from while the cheat was possible to the history afterwards or before, and see if the user's success decreases or increases.
That should give you a list of users who you need to investigate more carefully, possibly by analyzing specific games.
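A small sketch of that before/after comparison: for each shortlisted player, compare the win rate while the leak was possible with the win rate once it was fixed, and flag sharp drops for manual review of their hands. The figures and the threshold are invented; a real analysis would need variance estimates rather than a fixed cutoff.

public class BeforeAfterCheck {
    public static void main(String[] args) {
        // Invented shortlist of winners and their win rates (bb/100) in the two periods
        String[] player          = {"p1", "p2", "p3"};
        double[] winRateDuring   = {12.0,  3.5,  4.0};   // while the leak existed
        double[] winRateAfterFix = { 1.0,  3.0,  4.5};   // after the fix

        for (int i = 0; i < player.length; i++) {
            double drop = winRateDuring[i] - winRateAfterFix[i];
            if (drop > 5.0)   // arbitrary threshold; a real analysis needs variance estimates
                System.out.println(player[i] + ": win rate fell by " + drop
                        + " bb/100 after the fix, worth reviewing their hands");
        }
    }
}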
Enjoy, it's a nice problem.
For all of you expressing disbelief that this is even possible: the community on the poker forums linked in the OP was similarly awestruck, but the site in question has confirmed that such a security vulnerability was present. Quite simply, the site was using very basic and insecure crypto to transmit hole card data to its players. Theoretically, it would have been possible for anyone aware of this to intercept transmissions from the site to a specific victim (e.g. by being physically nearby and intercepting wireless data), and to cheat that player using the intercepted knowledge.
The question is about how to detect whether this vulnerability was actually exploited (before it was fixed), and if so by whom, given the resources outlined.
Oh, and also some of you seem to be assuming we're talking about a hypothetical scenario, and/or play-money poker; we're not. The site is real, the vulnerability was real, the investigation is really happening (see link in OP), and the games under investigation are real-money games with normal buyins of $200 and above.
I'm by no means a data-mining expert, and my grasp of statistical analysis of large data sets is pretty weak as well (and I'm not very good at poker, even though I love it) so take everything I say here with a grain of salt.
Weed out the junk data. You are going to only really care about players that fit into two categories: (1) players who win more hands than they lose, (2) players who win more money than they lose. Who cares about a cheater who loses a lot? Heh.
With this pared-down list of players to actually analyze, I would take a look at their style of play. Assuming you have a lot of historical data, I would build a player skill profile and attempt to normalize their betting strategy. As a poor poker player, I normally will back up weaker cards that no decent player would back simply because they feel good. For example, any time I am dealt a face card with another low card (2, 3, 4, 5), if they're suited, I'll almost ALWAYS call any bets made by other players before the turn, even though this strategy is not very successful. Pre-turn raises above the Big Blind often indicate a player has a pocket pair, yet my love of playing won't let me fold a suited hand pre-flop.
So for me, your analysis of my play would say that me matching aggressive calls pre-flop when I have anything suited would be normal. But a different player who only occasionally calls large pre-flop bets would be an indication that something might be out of whack.
I don't know what sort of system you'd need to build to make a profile of different users' styles of play, but I imagine you could use some machine learning algorithms to "learn" a person's style of play with pretty decent accuracy.
You mentioned that a smart user would throw hands to minimize his appearance as a cheater. I think this is a GREAT opportunity for more profiling. Would an experienced, winning player play through an awful hand? Probably not, ever. If I was dealt a 4S, 7H, and saw 9D, JC, AH on the flop, I would know that my chances of winning were really, really small. It also tells us that the cards shown on the flop aren't very strong for anyone, so anyone at the table betting probably has a Jack or Ace paired, two pair, or three of a kind. Since you know your 4S, 7H is worthless, you'd either bet hard to bluff the pot or fold outright. Not very many good players (who would have been found in your shortened list of winning players) would ever stick around on a hand like that.
Anyway, those are the things I've thought of. Now actually implementing them, I have no idea where to even begin so I'm afraid I can't be of much help there. This is a very interesting academic problem though, so please do us a favor and keep us informed of what you end up going with. If you want to take this conversation offline, feel free to email me at stackoverflow#ericharrison.info.
Could you not look for simple indicators initially, before trying to do anything too complex?
E.g., pre-flop: a player folds pocket kings with no raise before him and someone else had pocket aces.
This MIGHT be indicative of the player knowing his starting KINGS (pretty good) are not as good as someone else's pocket ACES... however, that's assuming he makes the decision pre-flop and not post-flop. It depends, really.
Ignore this, just thinking out loud.
To be perfectly honest, I doubt very much that the players who could see opponents' hands were random. There must be some sort of crossover in the code that generates the card view that was selecting some users but not others. I would recommend running tests on this code and trying to find a trend among the "viewers" and "non-viewers". If you find a strong trend, then the trend could be applied to the actual dataset to see which users, which hands, or whatever else was triggering the code fault.
The answer to your question is simple. There is no way to detect that type of cheater with just hand histories. You need information that is not public in order to correlate multiple characteristics to find a suspected cheater.
Oh yeah, and obviously the companies that provide these games do everything possible to set up shop in a low-tax, non-regulated country. Until they are regulated and forced to follow strict code compliance and testing, this will continue to happen.
The most likely cheating situation would seem to be people working together. Three guys at the same table knowing each other's cards should be able to make some betting adjustments that would allow the pool of bettors to come out ahead.
What stops are in place to prevent collusion?

What is your idea for a good AI project for a group of undergraduates?

There are two courses, "AI" and "AI in Games", each with 15 students for 15 weeks.
I want to keep them motivated and creative.
I know I want some kind of competition (obvious for the latter course).
Maybe something like Marathon Match or ICFP.
I will need good visualization, so it would be great if it already exists.
One idea was to write an AI for "Battle of Wesnoth", but I guess it's too diverse/boring.
Another was the game of Go. But that's too hard.
What are your ideas?
It will be work in groups of 3 students for 15 weeks.
MIT hosts a competition called BattleCode.
BattleCode is a real-time strategy game. Two teams of robots roam the screen managing resources and attacking each other with different kinds of weapons. However, in BattleCode each robot functions autonomously; under the hood it runs a Java virtual machine loaded up with its team's player program. Robots in the game communicate by radio and must work together to accomplish their goals.
Teams of one to four students are given the BattleCode software and a specification of the game rules. Each team develops a player program, which will be run by each of their robots during BattleCode matches. Contestants often use artificial intelligence, pathfinding, distributed algorithms, and/or network communications to write their player. At the final tournaments, the autonomous players are pitted against each other in a dramatic head-to-head tournament. The final rounds of the MIT tournament are played out in front of a live audience, with the top teams receiving cash prizes.
(source: mit.edu)
BattleCode in action.
You essentially are given the BattleCode software from MIT and your students can program the AI for their robots. They have a test suite so you can practice running your autonomous bots on your own in a practice arena. Towards the end of the semester they can enter in MIT's Open Tournament, where they compete with their software AI robots against schools all over the nation. Up to $40,000 is given away in cash and prizes as well as bragging rights for winning.
If you are looking to teach them about AI, Pathfinding, Swarm Intelligence, etc. I can't think of a more fun way.
May the best AI bot win!
Wii gesture recognition using hidden Markov models.
I wouldn't count out Go. It's computationally hard for Go AI to compete with top human players, but the simple rules of Go (compared to Chess) make it a relatively easy game to write AI for. Your students' programs only need to compete against each other, not against Dan level human players. See An Introduction to the Computer Go Field and Associated Internet Resources for a lot of Go programming resources.
I think it's a good idea to select a theme challenging enough that it can't be completely solved, yet one that lets the students see its value in the real world, rather than a toy problem. My suggestions would thus be:
Word segmentation problem (e.g. convert "iamaboy" to "I am a boy")
Word sense disambiguation (e.g. "The apple is nice to eat" - is the apple a fruit or a company?)
Optical character recognition
What I just listed is some of the more basic stuff in natural language processing. If your students are more technically inclined, you can probably take it to the next level and let them tackle the problem of machine translation.
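As a hedged sketch of the first suggestion (word segmentation), here is the standard dynamic program over a toy dictionary: for each prefix, remember which last word (if any) completes a valid segmentation, then walk backwards to recover the spaces. Students could extend it with word frequencies to prefer likelier segmentations.

import java.util.*;

public class WordSegmentation {
    public static void main(String[] args) {
        Set<String> dict = new HashSet<>(Arrays.asList("i", "am", "a", "boy"));   // toy dictionary
        String text = "iamaboy";

        int n = text.length();
        int[] lastWordStart = new int[n + 1];   // where the last word of a valid segmentation of text[0..i) begins
        Arrays.fill(lastWordStart, -1);
        lastWordStart[0] = 0;

        for (int i = 1; i <= n; i++)
            for (int j = 0; j < i; j++)
                if (lastWordStart[j] != -1 && dict.contains(text.substring(j, i))) {
                    lastWordStart[i] = j;
                    break;
                }

        if (lastWordStart[n] == -1) {
            System.out.println("no segmentation found");
            return;
        }
        Deque<String> words = new ArrayDeque<>();
        for (int i = n; i > 0; i = lastWordStart[i])
            words.addFirst(text.substring(lastWordStart[i], i));
        System.out.println(String.join(" ", words));   // prints "i am a boy"
    }
}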
Empire: it's addictive as anything, and there are open-source D versions (1 and 2) and a not-quite-free C++ version.

Resources