A colleague got me started reading about and taking a course on general game playing. The idea sounds pretty insane: programs that are designed to play games in general. The game could be chess, checkers, or even go fish, and the program could be playing them simultaneously. There are tons of other games the program could play. It can't play any and all of them, though, since GGP and the language that describes games impose some restrictions on what a game can be (games must be finite, for example).
General game-playing programs do not know the rules of the game ahead of time. Instead, they are built on concepts that games have in general; for instance, each game has players, legal moves to make, and conditions that indicate the game is over. Tic-tac-toe, for example, involves two players who take turns marking Xs or Os on the board, and the game is over when there are three matching marks in a line or when the board has no blank spaces left. Chess, go fish, and other games follow the same structure, although the specific rules differ.
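As a quick illustration, here's what that tic-tac-toe game-over condition might look like in code. This is just my own sketch in Python; the board representation and function name are made up, not from any GGP framework:

```python
# Board is a 3x3 grid of 'X', 'O', or None (my own representation).
def tic_tac_toe_over(board):
    """True when a line holds three matching marks or no blanks remain."""
    rows = [[(r, c) for c in range(3)] for r in range(3)]
    cols = [[(r, c) for r in range(3)] for c in range(3)]
    diags = [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]
    for line in rows + cols + diags:
        marks = {board[r][c] for r, c in line}
        if len(marks) == 1 and None not in marks:
            return True  # a whole line of matching Xs or Os
    return all(cell is not None for row in board for cell in row)  # full board
```

The point is that "game over" is just a predicate on the board, which is exactly the kind of general concept a GGP program works with.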
There has long been research on making computers smart enough to beat people at games (most famously chess), and such programs have been built for years. What makes general game-playing programs different is that they are not specialized for any particular game.
It's crazy to think that a single program can be used to play different games, provided that the games are described in a way that the program can understand. In general, a game is just a series of states that change as the game progresses, so the program inspects the current state and makes moves accordingly. In other words, a game can be represented by a state machine. How fortunate, given that I'm currently learning about them in CS 154 as well.
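To make the state-machine view concrete, here's a toy example: a tiny "subtraction" game written as states plus transition functions, with a generic driver that plays it by only inspecting states and legal moves. All the names and rules here are made up for illustration:

```python
import random

# A toy game expressed as a state machine. The state is (pile, player):
# a pile of stones and whose turn it is. A move removes 1 or 2 stones,
# and whoever takes the last stone wins.

def initial_state():
    return (7, 0)  # 7 stones, player 0 moves first

def legal_moves(state):
    pile, _ = state
    return [m for m in (1, 2) if m <= pile]

def next_state(state, m):
    pile, player = state
    return (pile - m, 1 - player)

def is_terminal(state):
    return state[0] == 0

def winner(state):
    return 1 - state[1]  # the player who just took the last stone

# The "general" part: this driver plays the game by inspecting states
# and legal moves, without knowing anything else about the rules.
def play(choose=random.choice):
    state = initial_state()
    while not is_terminal(state):
        state = next_state(state, choose(legal_moves(state)))
    return winner(state)
```

Swapping in a different set of state functions (tic-tac-toe, chess) would let the same `play` driver run a different game, which is the whole point.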
In related news, I watched Peter Norvig's talk on How Computers Learn today, which was great. He talks about machine learning and some of the work Google has done with it in clear language without much jargon. The entire talk was interesting, though the part where he discusses a computer learning to play the game Breakout (at 43:52 in the video) is especially relevant to this post. It isn't a general way to play games, though; the program simply learns from its past experience, and all it knows are the pixels on the screen and the score. The computer starts out playing terribly its first time but gets pretty good at the game after hundreds of play sessions.
Applying that same technique to GGP is probably not feasible. Since the program doesn't know what kind of game it's going to play, it can't really build on past experience and say that certain techniques are better than others. It could be playing tic-tac-toe in one round and chess in the next, and each game has its own winning strategies.
A good way to make moves without relying on prior knowledge is to "inspect the future": check the possible moves ahead and see which one turns out best, statistically speaking. From what I've read so far, this is what the papers on GGP focus on.
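One simple version of "inspecting the future" is random playouts: for each legal move, simulate many random games to completion and pick the move with the best average outcome. This is my own simplified sketch, not code from the GGP papers, and the toy game at the bottom is made up to exercise it:

```python
import random

def monte_carlo_move(state, legal_moves, next_state, is_terminal, score,
                     playouts=100):
    """Pick a move by 'inspecting the future': for each legal move, run
    random playouts to the end of the game and keep the move with the
    best average score. A rough sketch, not tuned GGP code."""
    def playout(s):
        while not is_terminal(s):
            s = next_state(s, random.choice(legal_moves(s)))
        return score(s)

    best_move, best_avg = None, float("-inf")
    for move in legal_moves(state):
        avg = sum(playout(next_state(state, move))
                  for _ in range(playouts)) / playouts
        if avg > best_avg:
            best_move, best_avg = move, avg
    return best_move

# Toy game to try it on: a pile of stones, take 1 or 2 per turn,
# whoever takes the last stone wins. State is (pile, player_to_move).
legal = lambda s: [m for m in (1, 2) if m <= s[0]]
move = lambda s, m: (s[0] - m, 1 - s[1])
done = lambda s: s[0] == 0
win_for_0 = lambda s: 1 if s[1] == 1 else 0  # player 0 took the last stone

best = monte_carlo_move((2, 0), legal, move, done, win_for_0)
# With 2 stones left, taking both wins immediately, so best == 2.
```

Notice that the move chooser only needs the state-machine functions, never the rules themselves, which is what makes this style of search a natural fit for GGP.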
Though I hadn't heard about general game playing at all until this past weekend, it's pretty interesting. If you want to follow along and learn more about GGP, check out the General Game Playing course on Coursera or ggp.org.