Game theory’s development accelerated at a record pace during World War II. Though game theory was originally intended for economics, both the United States and the Soviet Union quickly saw its value for forming war strategies.
Early in the Cold War, the Eisenhower administration viewed nuclear weapons as simply another option in the arsenal, available for use [source: Spence]. Game theorist Thomas Schelling convinced officials that nuclear weapons were useful only as deterrents. Additionally, he proposed that the U.S. should have a variety of responses it could call upon, proportionate to the size of the offense against it.
A balance was struck in which neither nation could gain advantage through nuclear attack — the reprisals would be too devastating. This was known as Mutual Assured Destruction (MAD). This balance required open acknowledgment of each nation’s strengths and vulnerabilities. However, as the prisoner’s dilemma shows, both players must assume the other is concerned only with self-interest; therefore, each must limit risk by adopting a dominant strategy.
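The dominant-strategy logic of the prisoner’s dilemma can be made concrete with a few lines of code. The payoff numbers below are standard illustrative values, not figures from the article; the point is that one move beats the other no matter what the opponent does.

```python
# A minimal sketch of the prisoner's dilemma. The payoff numbers are
# assumptions chosen for illustration (higher = better for me).
# C = cooperate, D = defect; PAYOFFS[(my_move, their_move)] = my payoff.
PAYOFFS = {
    ("C", "C"): 3,   # mutual cooperation: both do fairly well
    ("C", "D"): 0,   # I cooperate, they defect: my worst outcome
    ("D", "C"): 5,   # I defect, they cooperate: my best outcome
    ("D", "D"): 1,   # mutual defection: both do poorly
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(["C", "D"], key=lambda my_move: PAYOFFS[(my_move, their_move)])

# Defecting is the better reply whether the other player cooperates or
# defects -- that is exactly what makes it a dominant strategy.
assert best_response("C") == "D"
assert best_response("D") == "D"
```

Because defection dominates for both players, two self-interested agents end up at mutual defection — even though mutual cooperation would leave both better off.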
If one nation changed the balance of power (by building a missile-defense shield, for instance), would it lead to a strategic blunder that resulted in nuclear war? Governments consulted game theorists to prevent such imbalances. When one nation built missile silos, the other nation targeted them. The Soviet Union and the U.S. then spread out and hid their launch sites around the globe, which required both nations to commit more missiles to a potential first strike in order to diminish the retaliatory abilities of the other.
They also kept nuclear-armed aircraft aloft in the skies at all times to provide a deterrent if the silos were destroyed. As another deterrent, they established nuclear-armed submarines. This pretty much covered all bases: ground, air and sea.
The atmosphere was tense, and there was a constant threat of miscommunication leading to disastrous results. Amid such massive distrust, even a defensive move could be interpreted as provocative. Building fallout shelters, for instance, makes it look like you’re expecting trouble. Why are you expecting trouble, unless you’re planning on starting it?
By no rational or mathematical measure would it make sense to launch nuclear weapons after your nation has already taken a significant hit. What would be the point? World destruction for the sake of revenge? But if revenge isn’t a deterrent, what keeps either nation from launching a first strike? To counteract that threat, American and Soviet leaders sometimes adopted a “madman strategy,” releasing rumors that they were mentally unstable or blind with rage to keep the other side off guard.
Weapons control and disarmament negotiations were essentially repeated games that allowed both parties to reward cooperation and punish defection. Through repeated meetings and increased communication, trust and cooperation led to (some) disarmament and less strategic posturing. This was also due in no small part to the resources required to maintain an ever-growing nuclear capability.
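The repeated-game dynamic described above — rewarding cooperation and punishing defection across many rounds — can be sketched with a simple simulation. The strategies and payoff numbers here are illustrative assumptions (the classic tit-for-tat strategy stands in for the negotiating posture the article describes), not a model of actual negotiations.

```python
# A sketch of repeated play sustaining cooperation, using illustrative
# prisoner's-dilemma payoffs. All numbers and strategies are assumptions
# for demonstration. PAYOFFS[(a, b)] = (payoff to A, payoff to B).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first; afterward, copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each strategy sees the *other* player's past moves
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation rewarded
print(play(tit_for_tat, always_defect))  # (9, 14): defection punished after round one
```

Two reciprocating players earn the high mutual-cooperation payoff every round, while a persistent defector gains only a one-round head start before being punished — which is why repeated interaction, unlike a one-shot game, can make cooperation the stable outcome.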
Fortunately, neither nation was willing to play the final stage of a game in which the best possible outcome involved a victory that could only be celebrated by a handful of survivors underground.
Despite its practical applications, game theory isn’t without its critics. It has been pointed out that game theory can help only so much if you’re trying to predict realistic behavior: every action, good or bad, can be rationalized in the name of self-interest.
A constant difficulty with game theory modeling is defining, limiting, isolating or accounting for every set of factors and variables that influence strategy and outcome. There’s always an X-factor that simply cannot be accounted for. For instance, no strategy can predict the actions of a negotiator who is in the throes of a religious revelation.
Game theory is based on rationality, and in traditional economic models, rationality means maximizing one’s own payoff. Therefore, in every situation, you’ll always act to gain as much as possible, regardless of how it affects others. Interestingly, studies have found that the subjects most likely to fully embrace the economic model of a self-serving, payoff-maximizing agent are kindergarten students, but that by the fourth grade, their behavior begins to favor cooperative strategies [source: Henrich].
Game theory argues that cooperation between players is always the rational strategy, at least when participating in a game-theory experiment (even if it means losing the game). Consider this scenario: You participate in what you are told is a one-shot game. To win this game, you must take advantage of the other player. After doing so and winning, you learn that this game is actually the first in a series of two.
Now the roles are reversed. The test-givers want to see how Player 2 will behave after Player 1 defects in the first game — this is the true purpose of the study. Your rational, self-maximizing action in the first game is now irrational outside the framework of a one-shot game.
Test-givers often trick test-takers as a strategy to obtain the optimal outcome: full knowledge of players’ strategic choices in different game scenarios. A test-giver’s strategy of concealing the true nature of the game itself will dominate any player’s strategy within the game. The test-giver receives maximum information (which offers the most utility within a larger framework of test-giving). This information comes, however, at the expense of the player, who reveals to a fellow citizen his or her willingness to defect within the larger framework of life.
The prisoner’s dilemma shows us we must assume agents always play dominant strategies. Therefore, the best strategy for a game theory experiment is to assume the test-giver is manipulating the game to make players reveal information. In a game, then, it’s always better to cooperate — even if it means losing the game. The worst outcome from this strategy is still an acceptable outcome. Essentially, losing an experimental game when you’ve been tricked isn’t such a loss — as long as you maintain your reputation within a much larger series of life scenarios.