In society today, it is suggested that what people identify as a nerd is actually someone who knows their own mind well enough to distrust it. It's someone who sets aside societal bias and makes decisions by applying algorithms based on game theory.
John von Neumann and Oskar Morgenstern introduced game theory to the world in 1944 with “Theory of Games and Economic Behavior.” They hoped to find mathematical answers to economic problems.
According to economic theory, producers could make a greater profit by reacting to conditions such as supply and demand. But these theories fail to account for the strategies of other producers, and how the anticipation of those strategies affects each producer’s moves. Game theory attempted to account for all of these strategic interactions. It didn’t take long for military strategists to see the value in this.
Imagine you’re a soldier posted on a defensive line. Tomorrow, there will be a great battle. There are two possible outcomes of the battle (victory or defeat), and two possible outcomes for you (surviving or dying). Clearly, your preference is to survive.
If your line is breached, you will die. However, even if the defensive line holds, you may die in battle. It seems that your best option is to run away. But if you do, the ones who stay behind and fight may die. You realize that every other person on the defensive line is thinking this very same thing. So if you decide to stay and cooperate but everyone else flees, you’ll certainly die.
This problem has plagued military strategists since the beginning of warfare. That’s why there is generally a new condition entered into the equation — if you flee or defect, you will be shot as a traitor. Therefore, the best chance you have of surviving is to keep your position on the line and fight for victory.
How does this relate to game theory?
Game theory isn’t the study of how to win a game of chess or how to create a role-playing game scenario. Often, game theory doesn’t even remotely relate to what you’d commonly consider to be a game.
At its most basic level, game theory is the study of how people, companies or nations (referred to as agents or players) determine strategies in different situations in the face of competing strategies acted out by other agents or players. Game theory assumes that agents make rational decisions at all times. There’s some fault in this assumption: What passes for irrational behavior by most of society (a buildup of nuclear weapons, for instance) is considered quite rational by game theory standards.
However, even when game theory analysis produces counterintuitive results, it still yields surprising insights into human nature. For instance, do members of society only cooperate with each other for the sake of material gain, or is there more to it? Would you help someone in need if it hurt you in the long run?
When we discuss game theory, we assume a few things:
- A game is considered any scenario in which two players are able to strategically compete against one another, and the strategy chosen by one player will affect the actions of the other player. Games of pure chance don’t count, because there’s no freedom of choice, and thus no strategy involved. And one-player games, such as solitaire, aren’t considered by game theorists to be games, because they don’t require strategic interaction between two players.
- Players in a game know every possible action that any player can make. We also know all possible outcomes. All players have preferences regarding these possible outcomes, and, as players, we know not only our own preferences but also those of the other players.
- Outcomes can be measured by the amount of utility, or value, a player derives from them. If you prefer reaching point A to reaching point B, then point A has higher utility. By knowing that you value A over B, and B over C, a player can anticipate your actions, and plan strategies that account for them.
- All players behave rationally. Even seemingly irrational actions are rational in some way. For instance, if you were to play two games of pool, you wouldn’t intentionally lose your money on the first game unless you believed that doing so would bolster your opponent’s confidence when he or she was deciding how much to bet on game 2 — a game you anticipate winning. This is an essential difference between one-shot and repeating games. In a one-shot game, you play once; in a repeating game, you play multiple times. (A little later, we’ll look at how rational thinking varies between one-shot and repeating games.)
- If no player can reach a better outcome by switching strategies, the game reaches an impasse called a Nash equilibrium. Essentially, this boils down to players keeping their current strategies (even if they don’t have the highest preference) because switching won’t accomplish anything.
In the next section, we’ll put this information to use and see what we can learn about strategy by plotting it on a game tree.
In game theory, the Nash equilibrium is a solution concept for a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only his or her own strategy.
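The equilibrium check above can be sketched in a few lines of code. This is a minimal illustration, not part of the article: the prisoner's dilemma payoff numbers below (years in jail, written as negative utility) are assumed for the demo, and the function simply tests whether either player could do better by switching unilaterally.

```python
from itertools import product

# Assumed payoffs (years in jail as negative utility) for a prisoner's
# dilemma. First action is Player 1's, second is Player 2's.
payoffs = {
    ("cooperate", "cooperate"): (-5, -5),   # neither confesses
    ("cooperate", "defect"):    (-20, -1),  # only Player 2 confesses
    ("defect",    "cooperate"): (-1, -20),  # only Player 1 confesses
    ("defect",    "defect"):    (-10, -10), # both confess
}
actions = ["cooperate", "defect"]

def is_nash(a1, a2):
    """True if no player gains by changing only his or her own strategy."""
    u1, u2 = payoffs[(a1, a2)]
    best1 = all(payoffs[(alt, a2)][0] <= u1 for alt in actions)
    best2 = all(payoffs[(a1, alt)][1] <= u2 for alt in actions)
    return best1 and best2

equilibria = [(a1, a2) for a1, a2 in product(actions, actions)
              if is_nash(a1, a2)]
print(equilibria)  # mutual defection is the only pure-strategy equilibrium
```

Note that mutual cooperation gives both players a better outcome, yet it is not an equilibrium: each player can shave years off his or her own sentence by defecting unilaterally, which is exactly the impasse the definition describes.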
In a one-shot game, such as our previous example of the prisoner’s dilemma, the stakes are high but carry no further repercussions. However, when playing a repeated game, a one-shot strategy may not be the best move: You and your opponent can get better returns in the long run by cooperating (not confessing) at times and defecting (confessing) at others. This helps you probe one another’s strategies and is known as a mixed strategy.
Let’s say that you know your prisoner’s dilemma is just one scenario in a series of repeated games. So you choose not to confess on your first move. Instead of taking advantage of this, Player 2 may reciprocate your trust, and also not confess, resulting in the best mutual payoff: five years each in jail. Strategy in repeated games takes into consideration the opponent’s reputation and future cooperation, and so these games can play out much differently than one-shot games.
In fact, even if you repeat the game but know exactly how many rounds there will be, both players will expect the other to maximize utility by defecting on the very last move, the final game in the series. Knowing this, both players realize they must defect on the second-to-last move. But since both players know that will be the optimal strategy, they’ll each play their most self-serving strategy the move before that, and so on, until they’re pre-empting the other on the very first game in the series. This is the only chance for either player to do so, lest both immediately fall to a disadvantage, never to recover the lead.
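This unraveling argument can be sketched as code. The payoff numbers below are assumed for illustration (a standard prisoner's dilemma table, not from the article): because "D" (defect) earns more than "C" (cooperate) against either opponent move, folding back from a known final round pins every round to mutual defection.

```python
# Assumed stage-game payoffs: (Player 1's score, Player 2's score).
STAGE = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
         ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def best_response(opponent_action, player):
    # Pick the action with the higher stage payoff against a fixed opponent move.
    return max("CD", key=lambda a: STAGE[(a, opponent_action)][player]
               if player == 0 else STAGE[(opponent_action, a)][player])

def backward_induction(rounds):
    # Start at the known final round and fold back: because "D" strictly
    # dominates "C" in the stage game regardless of what follows, every
    # earlier round reduces to the same one-shot analysis.
    plan = []
    for _ in range(rounds):
        a1 = best_response("C", 0)
        assert a1 == best_response("D", 0)  # "D" is best against either move
        plan.append((a1, best_response("C", 1)))
    return list(reversed(plan))

print(backward_induction(3))  # [('D', 'D'), ('D', 'D'), ('D', 'D')]
```

The result matches the text: with a known horizon, pre-emption cascades all the way back to the first game, and cooperation never gets a foothold.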
When playing a series of games with no known end, the players can adopt a tit-for-tat strategy, which punishes the opponent for defecting: the players answer defection with defections of their own for a predetermined number of moves before attempting to re-establish trust. This is called a trigger strategy. For instance, if Senator 1 cooperates on a bill sponsored by Senator 2, but Senator 2 doesn’t reciprocate the cooperation, Senator 1 might refuse to cooperate when Senator 2 proposes his or her next bill: tit-for-tat.
Another trigger strategy is the grim trigger strategy, in which Player 1 cooperates until Player 2 defects, causing Player 1 to defect on every move thereafter regardless of future cooperation on the part of Player 2. While tit-for-tat leaves room for forgiveness, grim trigger strategy is an endless cycle of defection.
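The difference between the two trigger strategies is easy to see in a small simulation. This is a sketch under assumed conditions: the payoff numbers are a standard illustrative table, and the "probe" opponent (which defects exactly once, on the third round) is invented for the demo.

```python
def tit_for_tat(my_hist, opp_hist):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not opp_hist else opp_hist[-1]

def grim_trigger(my_hist, opp_hist):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in opp_hist else "C"

def probe(my_hist, opp_hist):
    # Hypothetical opponent: defects on round 3, cooperates otherwise.
    return "D" if len(my_hist) == 2 else "C"

# Assumed payoffs: (Player 1's score, Player 2's score) per round.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(p1, p2, rounds=6):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        a1, a2 = p1(h1, h2), p2(h2, h1)
        u1, u2 = PAYOFF[(a1, a2)]
        h1.append(a1); h2.append(a2); s1 += u1; s2 += u2
    return h1, s1, s2

# Tit-for-tat matches the single defection, then forgives;
# grim trigger defects for the rest of the game.
print(play(tit_for_tat, probe))   # history ends in cooperation
print(play(grim_trigger, probe))  # history ends in endless defection
```

Against this one-time defector, tit-for-tat restores mutual cooperation after one retaliatory move, while grim trigger locks both players into the endless cycle of defection the text describes.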
Sometimes, players threaten a grim-trigger strategy and don’t follow through with it. This is known as cheap talk: a threat without commitment. So if your fiancé moves in with you but doesn’t break the lease on his apartment, that’s cheap talk. If he burns his former home to the ground (and gets a tattoo of your name), that’s commitment.
Continue reading to learn how game theorists saved the world — or nearly ruined it — on a daily basis for several decades.