Nash equilibria
Solution concepts
Once the game structure and payoffs are in place, the reasoning begins: what should the players do? Game theory has a number of solution concepts, which are reasonings and algorithms that either predict what the players would do, or advise what they should do, under specific assumptions.
The most commonly known and mainstream solution concept for games in normal form is the Nash equilibrium.
Resolution for games in normal form
Let us consider the following game. As we saw previously, a game in normal form is represented as a matrix. Let us say we have a row player that can pick strategy A, B or C, and a column player that can pick strategy D, E or F.
When both players pick a strategy, say for example A and D, this jointly selects a cell (AD), which determines the payoffs. The left number (5) is what the row player gets ($5), and the right number is what the column player gets ($1).

We assume that both players would like to gain as much as possible. Economists call this "utility maximization", or in plain words, rationality. We also assume that both players know which game they are playing, as well as a couple of fancy properties: common knowledge of what we have just said, impressive logical and reasoning skills, and so on.
Nash Equilibria
What is called today a Nash equilibrium was contributed by John Nash back in the 1950s.
A very important assumption is that players pick their strategies independently. Intuitively, you can think of the players as being in separate rooms.
The row player can reason as follows: if the column player picks D, then what should I pick? In other words, what is my best response to D? Well, if I pick A I get $5, if I pick B I get $7, and if I pick C I get $6. So my best response to D is B. Likewise, my best response to E is A, and my best response to F is A.

The column player does the same thing and finds out that her best response to A is F (gets her $4), her best response to B is E (gets her $9), and her best response to C is D (gets her $6).

We get a Nash equilibrium if we find a cell such that the players pick "best responses to each other's choices of strategy." There is only one: AF. It appears in orange on both tables above, and in green below. Indeed, you can check that the best response to A is F and the best response to F is A.
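The best-response reasoning above can be turned into a small program. Since the full payoff matrix is only shown in the figures, the matrix below uses the values quoted in the text (AD = (5,1), BD = (7,8), CD = (6,6), AF = (4,4), and the column player's $9 at BE); the remaining entries are made-up assumptions chosen to be consistent with the best responses described above.

```python
# Hypothetical payoff matrix: entries marked with the quoted values come from
# the text; the others are illustrative assumptions.
ROW_STRATS = ["A", "B", "C"]
COL_STRATS = ["D", "E", "F"]

# payoffs[(r, c)] = (row player's payoff, column player's payoff)
payoffs = {
    ("A", "D"): (5, 1), ("A", "E"): (3, 2), ("A", "F"): (4, 4),
    ("B", "D"): (7, 8), ("B", "E"): (2, 9), ("B", "F"): (3, 1),
    ("C", "D"): (6, 6), ("C", "E"): (1, 5), ("C", "F"): (2, 2),
}

def best_responses_row(col):
    """Row strategies maximizing the row player's payoff against `col`."""
    best = max(payoffs[(r, col)][0] for r in ROW_STRATS)
    return {r for r in ROW_STRATS if payoffs[(r, col)][0] == best}

def best_responses_col(row):
    """Column strategies maximizing the column player's payoff against `row`."""
    best = max(payoffs[(row, c)][1] for c in COL_STRATS)
    return {c for c in COL_STRATS if payoffs[(row, c)][1] == best}

# A cell is a pure Nash equilibrium iff each strategy is a best response
# to the other player's strategy.
nash = [(r, c) for r in ROW_STRATS for c in COL_STRATS
        if r in best_responses_row(c) and c in best_responses_col(r)]
print(nash)  # [('A', 'F')] -- the unique pure equilibrium, with payoffs (4, 4)
```

On this (partly assumed) matrix, AF is the only cell where both conditions hold at once, matching the reasoning in the text.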

So what happens here? A Nash equilibrium fulfills a stability criterion: no player has an incentive to deviate from it. More precisely, no player has an incentive to deviate unilaterally from it, i.e., each player reasons as if the opponent's strategy were fixed. This is why, in the lines above, we only reasoned across a single row, or across a single column. Being "stuck" on a row or column while reasoning is a typical Nashian feature.
But something should disturb us here: the outcome (4,4) is not optimal. The outcomes (7,8) and (6,6) would have been better for both players, but the Nash equilibrium framework fails to capture them.
Nash equilibria with mixed strategies
Nash equilibria are not necessarily unique: there are many games with multiple Nash equilibria. There are numerous discussions on what to do in the presence of non-unique equilibria: some solution concepts introduce mechanisms to coordinate on one of them by supplying all players with shared information prior to the game (e.g., correlated equilibria).
Also, there are games with no Nash equilibrium at all when considering only pure strategies. This, in fact, motivated the introduction and study of mixed strategies, which we covered in a previous section. As it turns out, if the players are allowed to select random mixtures of pure strategies, known as mixed strategies, then Nash's theorem guarantees the existence of at least one equilibrium.
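Matching pennies is the standard example of a game with no pure-strategy equilibrium; the sketch below (a classic game, not one from this text's figures) checks this and then derives the mixed equilibrium via the indifference condition for 2x2 games.

```python
from fractions import Fraction

# Matching pennies: the row player wins (+1) on a match, the column player
# wins on a mismatch.
STRATS = ["Heads", "Tails"]
payoffs = {
    ("Heads", "Heads"): (1, -1), ("Heads", "Tails"): (-1, 1),
    ("Tails", "Heads"): (-1, 1), ("Tails", "Tails"): (1, -1),
}

# No cell is a mutual best response, so there is no pure Nash equilibrium:
pure_nash = [
    (r, c) for r in STRATS for c in STRATS
    if payoffs[(r, c)][0] == max(payoffs[(x, c)][0] for x in STRATS)
    and payoffs[(r, c)][1] == max(payoffs[(r, y)][1] for y in STRATS)
]
print(pure_nash)  # []

# In a mixed equilibrium of a 2x2 game, each player randomizes so that the
# opponent is indifferent between their two pure strategies. If the column
# player plays Heads with probability q, the row player's expected payoffs are
#   Heads: q*1 + (1-q)*(-1)      Tails: q*(-1) + (1-q)*1
# and setting them equal gives q = 1/2 (and p = 1/2 by symmetry).
q = Fraction(1, 2)
eu_heads = q * 1 + (1 - q) * (-1)
eu_tails = q * (-1) + (1 - q) * 1
print(eu_heads == eu_tails)  # True: the row player is indifferent
```

Randomizing 50/50 is thus the (unique) mixed Nash equilibrium of this game, illustrating the existence guaranteed by Nash's theorem.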
Resolution for games in extensive form and spacetime games
We saw in the Strategies section that games in extensive form have a (reduced) strategic form, which is an equivalent game in normal form. The definition of a Nash equilibrium for a game in extensive form is thus straightforward: it is an outcome that corresponds to a Nash equilibrium in the strategic form of the game.
Likewise, spacetime games can be converted to games in extensive form with imperfect information and thus have a strategic form, too. A combination of strategies is a Nash equilibrium if it is a Nash equilibrium in the strategic form of the game.
The specific case of games in extensive form with perfect information
The Nash equilibrium for games in extensive form with perfect information is, in a sense, too broad; a more specific resolution algorithm for such games was designed by Reinhard Selten. It is called the Subgame Perfect Equilibrium. Every Subgame Perfect Equilibrium is a Nash equilibrium, but the converse does not always hold.
Centipede games
In his seminal 2000 paper, Jean-Pierre Dupuy presented his ideas and conjectures on a particular class of games in extensive form called centipede games. As its name indicates, a centipede game has a particular structure in which, at each node, the current player can either choose to stop or continue the game. If they stop the game, the corresponding payoffs are immediately paid, and otherwise, it is the next player's turn -- until the last node, in which the last player picks between two outcomes.
Here is an example of a centipede game (which, again, is also a game in extensive form):

The Subgame Perfect Equilibrium
The Nash equilibrium (more specifically, the Subgame Perfect Equilibrium) for this game would be (1,0). It is obtained with a backward induction reasoning: if Alice were to play at the last node, she would favor 4 over 3, and thus pick (2,4) over the alternative outcome, in which she would only get 3. Remember that the payoff she gets is the one on the right.

But then, Bob, right before her, prefers 3 over 2 and thus (3,1) over (2,4).

Knowing this, Alice prefers 2 over 1, and thus (0,2) over (3,1).

And finally, Bob prefers 1 over 0 and thus (1,0) over (0,2).

This is the Subgame Perfect Equilibrium, which is also a Nash Equilibrium.
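The backward induction above can be sketched as code. The payoff pairs are written (Bob, Alice) as in the text; the payoff pair for the final "continue" branch is an assumption, since the text only tells us that Alice gets 3 there (Bob's 5 is made up for illustration).

```python
# Backward induction on the centipede game described in the text.
# Bob moves at nodes 0 and 2, Alice at nodes 1 and 3; each pair is
# (Bob's payoff, Alice's payoff).
nodes = [
    ("Bob",   (1, 0)),
    ("Alice", (0, 2)),
    ("Bob",   (3, 1)),
    ("Alice", (2, 4)),
]
final = (5, 3)  # outcome if the last player continues (Bob's 5 is assumed)

def backward_induction(nodes, final):
    """Solve from the last node back to the root and return the SPE outcome."""
    outcome = final
    for player, stop_payoff in reversed(nodes):
        i = 0 if player == "Bob" else 1
        # The player stops iff stopping pays at least as much as continuing.
        if stop_payoff[i] >= outcome[i]:
            outcome = stop_payoff
    return outcome

print(backward_induction(nodes, final))  # (1, 0): Bob stops immediately
```

Each pass of the loop reproduces one step of the reasoning above: Alice keeps (2,4), then Bob keeps (3,1), then Alice keeps (0,2), and finally Bob stops at (1,0).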
There is, however, one piece of criticism of this reasoning, known as the "Backward Induction Paradox". Indeed, Bob's reason for picking (1,0) is that, had he continued instead, Alice would have picked (0,2). But the Alice who picks (0,2) would be in a contradictory position: she assumes that Bob is rational, while playing at a node that follows Bob's irrational play (since it was rational for him to pick (1,0)).
A more generic tree structure
For more general trees, the Subgame Perfect Equilibrium is obtained through backward induction, starting from the leaves and going all the way back to the root. Let us consider this game:

In the above game, the Subgame Perfect Equilibrium is (4,1).