Games

Let's, for a moment, try to boil games down to their essence.

What's the simplest sort of game one might imagine that is still interesting? We'll need two players, to keep it interesting, and we'll make it discrete to keep things straightforward.

Rounds

For any game that proceeds in several rounds, one can think of the possible game states, round by round, as the nodes of a DAG, with edges for the moves that lead from one state to the next. Such a game can be solved by associating a value with each node (say, the probability of player 1 winning, or the point spread between the players) and then computing values from the bottom up.

For a turn-taking two-player game, this value computation is straightforward -- player 1 seeks to maximize the value while player 2 seeks to minimize it.
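
To make this concrete, here's a minimal sketch in Python of that bottom-up computation, written against a hypothetical game interface (is_terminal, terminal_value, moves, next_state -- these names are placeholders, not any particular library). The alternating max/min is the classic minimax recursion, and the memo table is what makes the DAG structure pay off: states reachable by multiple move orders are only evaluated once.

```python
def game_value(game, state, player_one_to_move, memo=None):
    """Value of `state` under optimal play by both players.

    Assumes a hypothetical `game` object exposing:
      - is_terminal(state) -> bool
      - terminal_value(state) -> float  (e.g. +1 / 0 / -1 from player 1's point of view)
      - moves(state) -> iterable of legal moves
      - next_state(state, move) -> the resulting state
    """
    if memo is None:
        memo = {}
    key = (state, player_one_to_move)
    if key in memo:
        return memo[key]  # shared DAG node: reuse the value we already computed

    if game.is_terminal(state):
        value = game.terminal_value(state)
    else:
        child_values = [
            game_value(game, game.next_state(state, move), not player_one_to_move, memo)
            for move in game.moves(state)
        ]
        # Player 1 picks the move that maximizes the value; player 2 minimizes it.
        value = max(child_values) if player_one_to_move else min(child_values)

    memo[key] = value
    return value
```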

(Example: tic-tac-toe)

(Example: 3x3 cornfield)

This value function immediately gives rise to an optimal strategy. If we can compute a value function for a game, that game is "solved". Indeed, for many games, we now know whether they are a first-player win, a second-player win, or a forced draw.
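
Extracting the optimal strategy from the value function is direct: on your turn, evaluate each child state and pick the best one. A sketch, building on the hypothetical `game_value` above:

```python
def best_move(game, state, player_one_to_move, memo=None):
    """Optimal move at `state`, derived directly from the value function."""
    if memo is None:
        memo = {}
    scored = [
        (game_value(game, game.next_state(state, move), not player_one_to_move, memo), move)
        for move in game.moves(state)
    ]
    # The maximizer wants the highest-valued child; the minimizer wants the lowest.
    pick = max if player_one_to_move else min
    return pick(scored, key=lambda pair: pair[0])[1]
```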

When you can't compute the whole value function, you can instead estimate values for portions of the tree, which can lead to skillful (if not perfect) play.
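
One standard way to estimate a portion of the tree is depth-limited search: recurse a few plies and then fall back on a heuristic guess instead of the true value. A sketch against the same hypothetical interface, where `heuristic` stands in for whatever hand-written or learned evaluation function you have:

```python
def estimated_value(game, state, player_one_to_move, depth, heuristic):
    """Minimax value, truncated at `depth` plies with `heuristic` as a stand-in."""
    if game.is_terminal(state):
        return game.terminal_value(state)
    if depth == 0:
        return heuristic(state)  # imperfect estimate instead of the true value
    child_values = [
        estimated_value(game, game.next_state(state, move),
                        not player_one_to_move, depth - 1, heuristic)
        for move in game.moves(state)
    ]
    return max(child_values) if player_one_to_move else min(child_values)
```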

In other words, once you can get the state space of a game into your computer, you can figure out the value of every state of the game, which leads to an optimal strategy. Ideally, then, you will also manage to compress this value function in such a way that you can remember it. This is the most fun you can have with a game, and I highly encourage it.

Simultaneous Play

Value function updates are easy when players take turns making choices. But what about when players act simultaneously?

Let's look at the simplest version of this situation: a one-round two-choice game:

|        | p+: a | p+: b |
|--------|-------|-------|
| p-: a  | 1     | 2     |
| p-: b  | 0     | -1    |

Here, the rows are p-'s moves and the columns are p+'s moves. p+ is trying to maximize the value in the resulting cell, while p- is trying to minimize it. (In other words, p-'s reward is the negative of the value in the cell; p+'s reward is the value itself.)

What are some strategies the players might use?

p- has an easy choice. If they play move a, the best they can hope for is a value of 1, while if they play move b, the worst they can end up with is 0. So move b is clearly optimal -- in every column, the value is lower (and hence p-'s reward higher) for move b. (This is a "dominant strategy".)
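
This dominance check can be done mechanically. A small sketch (the `payoff` dict and `row_dominates` helper are just illustrative), with rows indexed by p-'s move and columns by p+'s move, exactly as in the table above:

```python
# Rows are p-'s moves, columns are p+'s moves (same orientation as the table).
payoff = {
    ("a", "a"): 1, ("a", "b"): 2,
    ("b", "a"): 0, ("b", "b"): -1,
}

def row_dominates(payoff, row, other, columns=("a", "b")):
    """True if p- playing `row` yields a strictly lower value (better for the
    minimizer) than playing `other`, no matter which column p+ picks."""
    return all(payoff[(row, c)] < payoff[(other, c)] for c in columns)

print(row_dominates(payoff, "b", "a"))  # True: b beats a in every column
```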

p+ has a slightly more complicated choice. If they knew what p- would play, they would always play the opposite move. And since p-'s best strategy is to play b, that suggests p+ should play a.

But we can go deeper. What if -- for some reason -- p- instead chose to play a with probability 1/2? (This is a "mixed strategy".) It's interesting, because if p+ knew this was going to happen, they would realize that their moves have equal expected value (1/2 * 1 + 1/2 * 0 = 1/2 = 1/2 * 2 + 1/2 * -1). In this particular game that's just a side note, because putting any probability on move a only makes things worse for p-.
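
Here's that expected-value check spelled out as a quick sketch (the helper is illustrative; same matrix and orientation as above):

```python
# Same matrix as above: rows are p-'s moves, columns are p+'s moves.
payoff = {
    ("a", "a"): 1, ("a", "b"): 2,
    ("b", "a"): 0, ("b", "b"): -1,
}

def expected_value(payoff, prob_minus_plays_a, plus_move):
    """Expected value when p- plays a with the given probability and p+ plays `plus_move`."""
    p = prob_minus_plays_a
    return p * payoff[("a", plus_move)] + (1 - p) * payoff[("b", plus_move)]

print(expected_value(payoff, 0.5, "a"))  # 0.5
print(expected_value(payoff, 0.5, "b"))  # 0.5
```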

But if we adjust things so that neither player has a dominant strategy:

|        | p+: a | p+: b |
|--------|-------|-------|
| p-: a  | 1     | 3     |
| p-: b  | 2     | -1    |

If p+ plays a with probability 0.8, then p- is indifferent: playing a gives 0.8 * 1 + 0.2 * 3 = 1.4, and playing b gives 0.8 * 2 + 0.2 * -1 = 1.4.

If p- plays a with probability 0.6, then p+ is indifferent: playing a gives 0.6 * 1 + 0.4 * 2 = 1.4, and playing b gives 0.6 * 3 + 0.4 * -1 = 1.4.

And, notably, the expected value of the game (1.4) ends up being *higher* than what p+ could guarantee with their best fixed strategy (playing a guarantees at least 1), and *lower* than the best p- could manage with a fixed strategy (playing b holds the value to at most 2).

So this pair of mixed strategies ends up being an equilibrium -- neither player can gain by deviating on their own!
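
Those indifference probabilities come straight out of a little algebra on the matrix: for a 2x2 zero-sum game with no dominant strategies, each player mixes so that the opponent's two moves have equal expected value, which is one linear equation to solve. A sketch (the `solve_2x2_zero_sum` helper is illustrative; rows are p-'s moves, columns are p+'s, as before):

```python
def solve_2x2_zero_sum(A):
    """Mixed equilibrium of a 2x2 zero-sum game with no dominant strategies.

    A[r][c] is the value when p- plays row r and p+ plays column c.
    Returns (P(p- plays row 0), P(p+ plays column 0), value of the game).
    """
    denom = A[0][0] - A[0][1] - A[1][0] + A[1][1]
    p_minus = (A[1][1] - A[1][0]) / denom  # makes p+ indifferent between the columns
    p_plus = (A[1][1] - A[0][1]) / denom   # makes p- indifferent between the rows
    value = p_minus * A[0][0] + (1 - p_minus) * A[1][0]
    return p_minus, p_plus, value

print(solve_2x2_zero_sum([[1, 3], [2, -1]]))  # roughly (0.6, 0.8, 1.4)
```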

Using this idea of mixed equilibria, you can analyze games that feature simultaneous moves.
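
For larger matrices there's no tidy closed form, but the equilibrium mix can be computed with linear programming: pick a distribution over your own moves that maximizes the worst case over the opponent's pure responses. A sketch using numpy and scipy (assumed available -- any LP solver would do; the `maximizer_equilibrium` name is just illustrative), with the same row/column orientation as the tables above:

```python
import numpy as np
from scipy.optimize import linprog

def maximizer_equilibrium(A):
    """Equilibrium mixed strategy for p+ (the column player, who maximizes).

    A[r][c] is the value when p- plays row r and p+ plays column c.
    Solves: maximize v such that, for every row r, sum_c A[r][c] * y[c] >= v,
    where y is a probability distribution over the columns.
    """
    A = np.asarray(A, dtype=float)
    n_rows, n_cols = A.shape
    # Decision variables: y[0..n_cols-1] followed by v; linprog minimizes, so use -v.
    c = np.zeros(n_cols + 1)
    c[-1] = -1.0
    # Constraints v - sum_c A[r][c] * y[c] <= 0, one per row of the matrix.
    A_ub = np.hstack([-A, np.ones((n_rows, 1))])
    b_ub = np.zeros(n_rows)
    # The probabilities must sum to 1.
    A_eq = np.append(np.ones(n_cols), 0.0).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_cols + [(None, None)]
    result = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    mix, value = result.x[:-1], result.x[-1]
    return mix, value

print(maximizer_equilibrium([[1, 3], [2, -1]]))  # roughly ([0.8, 0.2], 1.4)
```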

A Nod To John Nash

Nash proved that every finite game has at least one equilibrium, so long as mixed strategies are allowed (the equilibrium may turn out to be in pure strategies, but it doesn't have to be). For two-player zero-sum games like the ones above, this is von Neumann's earlier minimax theorem, which is closely tied to linear programming duality; Nash's general result is proved with a fixed-point theorem rather than anything as concrete as matrix factorization.