Some Thoughts on The Force Awakens (Spoilers)

Massive spoilers below…











1) I suspect some viewers might roll their eyes at the fact that the galaxy is still in the middle of a civil war, but this is somewhat realistic. Civil wars tend to last a loooong time. The civil war in Afghanistan, for example, has been going on since 1978. One could argue that Korea has been in civil war for the last 65 years. (The Force Awakens makes it seem like both the Republic and First Order control and govern territory, like North and South Korea.) The good news, if there is any, is that political scientists have a pretty good idea why civil wars take forever to end.

2) What a strange world we live in where James Bond has more screen time in a Star Wars film than Luke Skywalker. (Daniel Craig is the stormtrooper that Rey pulls the Jedi mind trick on.)

3) The trailers spoiled Han Solo’s death. When Kylo Ren pulls out his lightsaber on the bridge, we have yet to see Ren’s battle with Finn in the snow. This means that Ren can’t be giving himself up here, and it would be weird if he simply re-holstered his lightsaber.

4) At the end of Return of the Jedi, Luke seems to believe Vader can go home and everything will be okay. Similarly, Mr. Solo…I mean, Han…seems to think that Kylo Ren can return home and everything will be okay. Their best case scenario is life imprisonment, and execution seems much more likely.

5) Perhaps the most unrealistic thing about A New Hope and The Force Awakens is that, within a few scant hours of receiving intelligence about the big evil weapon, the rebels have some genius plan to destroy the facility. The only way this could happen is if the vulnerability is obvious. But if the vulnerability is obvious, why don’t the bad guys spot it and fix the problem?

6) Also, when will the bad guys learn that sinking massive amounts of capital into one super weapon is not a good investment strategy?

7) How can Han manually leave light speed within a planet’s atmosphere? Given the speed involved and the small window, this is basically impossible.

Understanding the Iran Deal: A Model of Nuclear Reversal

Most of the discussion surrounding the Joint Comprehensive Plan of Action (JCPOA, or the “Iran Deal”) has focused on mechanisms that monitor Iranian compliance. How can we be sure Iran is using this facility for scientific research? When can weapons inspectors show up? Who gets to take the soil samples? These kinds of questions seem to be the focus.

Fewer people have noted Iran’s nuclear divestment built into the deal. Yet Iran is doing a lot here. To wit, here are some of the features of the JCPOA:

  • At the Arak facility, the reactor under construction will be filled with concrete, and the redesigned reactor will not be suitable for weapons-grade plutonium. Excess heavy water supplies will be shipped out of the country. Existing centrifuges will be removed and stored under round-the-clock IAEA supervision at Natanz.
  • The Fordow Fuel Enrichment Plant will be converted to a nuclear, physics, and technology center. Many of its centrifuges will be removed and sent to Natanz under IAEA supervision. Existing cascades will be modified to produce stable isotopes instead of uranium hexafluoride. The associated pipework for the enrichment will also be sent to Natanz.
  • All enriched uranium hexafluoride in excess of 300 kilograms will be downblended to 3.67% or sold on the international market.

Though such features are fairly common in arms agreements, they are nevertheless puzzling. None of this makes proliferation impossible, so the terms cannot be for that purpose. But they clearly make proliferating more expensive, which seems like a bad move for Iran if it truly wants to build a weapon. On the other hand, if Iran only wants to use the proliferation threat to coerce concessions out of the United States, this still seems like a bad move. After all, in bargaining, the deals you receive are commensurate with your outside options; make your outside options worse, and the amount of stuff you get goes down as well.

The JCPOA, perhaps the most poorly formatted treaty ever.

What gives? In a new working paper, I argue that undergoing such a reversal works to the benefit of potential proliferators. Indeed, potential proliferators can extract the entire surplus by divesting in this manner.

In short, the logic is as follows. Opponents (like the United States versus Iran) can deal with the proliferation problem in one of two ways. First, they can give “carrots” by striking a deal with the nuclearizing state. These types of deals provide enough benefits to potential proliferators that building weapons is no longer profitable. Consequently, and perhaps surprisingly, they are credible even in the absence of effective monitoring institutions.

Second, opponents can leverage the “stick” in the form of preventive war. The monitoring problem makes this difficult, though. Sometimes following through on the preventive war threat shuts down a real project. Sometimes preventive war is just a bluff. Sometimes opponents end up fighting a target that was not even investing in proliferation. Sometimes the potential proliferator can successfully and secretly obtain a nuclear weapon. No matter what, though, this is a mess of inefficiency, both from the cost of war and the cost of proliferation.

Naturally, the opponent chooses the option that is cheaper for it. So if the cost of preventive war is sufficiently low, it goes in that direction. In contrast, if the price of concessions is relatively lower, carrots are preferable.

Note that one determinant of the opponent’s choice is the cost of proliferating. When building weapons is cheap, the concessions necessary to convince the potential proliferator not to build are very high. But if proliferation is very expensive, then making the deal looks very attractive to the opponent.

This is where nuclear reversals like those built into the JCPOA come into play. Think about the exact proliferation cost that flips the opponent’s preference from sticks to carrots. Below that line, the inefficiency weighs down everyone’s payoff. Right above that line, efficiency reigns supreme. But the opponent is right at indifference at this point. Thus, the entire surplus shifts to the potential proliferator!

The following payoff graph drives home this point. A is the potential proliferator; B is the opponent; k* is the exact value that flips the opponent from the stick strategy to the carrot strategy:

Making proliferation more difficult can work in your favor.

If you are below k*, the opponent opts for the preventive war threat, weighing down everyone’s payoff. But jump above k*, and suddenly the opponent wants to make a deal. Note that everyone’s payoff is greater under these circumstances because there is no deadweight loss built into the system.
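To make the jump at k* concrete, here is a toy numerical sketch of the carrots-versus-sticks logic. The functional forms and every number below (the surplus, the deadweight loss, B’s stick payoff) are illustrative assumptions of mine, not the working paper’s actual model.

```python
# Toy sketch of the carrots-vs-sticks logic; all numbers and functional
# forms below are illustrative assumptions, not the paper's model.

TOTAL = 1.0     # efficient bargaining surplus (assumed)
WASTE = 0.4     # deadweight loss under the preventive-war threat (assumed)
B_STICK = 0.35  # opponent B's payoff in the stick region (assumed)

def payoffs(k, k_star=0.5):
    """Return (A, B) payoffs given potential proliferator A's cost k."""
    if k < k_star:
        # Stick region: the inefficiency weighs down both payoffs.
        return TOTAL - WASTE - B_STICK, B_STICK
    # Carrot region: B is exactly indifferent at k*, so its payoff starts
    # at B_STICK there and rises as the needed concessions shrink with k;
    # A captures the recovered surplus at k* and gives it back past k*.
    b = B_STICK + (k - k_star)
    return TOTAL - b, b

for k in (0.3, 0.5, 0.7):
    a, b = payoffs(k)
    print(f"k = {k}: A gets {a:.2f}, B gets {b:.2f}")
```

Running this shows A’s payoff jumping discontinuously at k* = 0.5 while B’s payoff is continuous there, which is exactly the indifference-driven surplus shift described above.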

Thus, imagine that you are a potential proliferator living in a world below k*. If you do nothing, your opponent is going to credibly threaten preventive war against you. However, if you increase the cost of proliferating—say, by agreeing to measures like those in the JCPOA—suddenly you make out like a bandit. As such, you obviously divest your program.

What does this say about Iran? Well, it indicates that a lot of the policy discussion is misplaced for a few reasons:

  1. These sorts of agreements work even in the absence of effective monitoring institutions. So while monitoring might be nice, it is definitely not necessary to avoid a nuclear Iran. (The paper clarifies exactly why this works, which could be the subject of its own blog post.)
  2. Iranian refusal to agree to further restrictions is not proof positive of some secret plan to proliferate. Looking back at the graph, note that while some reversal works to Iran’s benefit, anything past k* decreases its payoff. As such, by standing firm, Iran may be playing a delicate balancing game to get exactly to k* and no further.
  3. These deals primarily benefit potential proliferators. This might come as a surprise. After all, potential proliferators do not have nuclear weapons at the start of the interaction, have to pay costs to acquire those weapons, and can have their efforts erased if the opponent decides to initiate a preventive war. Yet the potential proliferators can extract all of the surplus from a deal if they are careful.
  4. In light of (3), it is not surprising that a majority of Americans believe that Iran got the better end of the deal. But that’s not inherently because Washington bungled the negotiations. Rather, despite all the military power the United States has, these types of interactions inherently deal us a losing hand.

The paper works through the logic of the above argument and discusses the empirical implications in greater depth. Please take a look at it; I’d love to hear your comments.

Bargaining Power and the Iran Deal

Today’s post is not an attempt to give a full analysis of the Iran deal.[1] Rather, I just want to make a quick point about how the structure of negotiations greatly favors the Obama administration.

Recall the equilibrium of an ultimatum game. When two parties are trying to divide a bargaining pie and one side makes a take-it-or-leave-it offer, that proposer receives the entire benefit from bargaining. In fact, even if negotiations can continue past a single offer, as long as a single person controls all of the offers, the receiver still receives none of the surplus.

This result makes a lot of people feel uncomfortable. After all, the outcomes are far from fair. Fortunately, in real life, people are rarely constrained in this way. If I don’t like the offer you propose to me, I can always propose a counteroffer. And if you don’t like that, nothing stops you from making a counter-counteroffer. That type of negotiation is called Rubinstein bargaining, and it ends with an even split of the pie.
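The contrast between the ultimatum and Rubinstein results can be sketched with the textbook formula: with a common discount factor δ, the first proposer’s equilibrium share of a unit pie is 1/(1 + δ). The function name below is my own; the formula is the standard one.

```python
# Standard Rubinstein alternating-offers split with common discount
# factor delta. delta = 0 collapses to the ultimatum game; delta -> 1
# approaches the even split described above.

def rubinstein_shares(delta):
    """Equilibrium (proposer, responder) shares of a unit pie."""
    proposer = 1.0 / (1.0 + delta)
    return proposer, 1.0 - proposer

print(rubinstein_shares(0.0))   # impatient responder: proposer takes everything
print(rubinstein_shares(0.99))  # patient players: nearly even split
```

At δ = 0 the game is effectively an ultimatum, so the proposer keeps the whole pie; as δ approaches 1, the split approaches 50/50.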

In my book on bargaining, though, I point out that there are some prominent exceptions where negotiations take the form of an ultimatum game. For example, when returning a security deposit, your former landlord can write you a check and leave it at that. You could try suggesting a counteroffer, but the landlord doesn’t have to pay attention—you already have the check, and you need to decide whether that’s better than going to court or not. This helps explain why renters often dread the move out.

Unfortunately for members of Congress, “negotiations” between the Obama administration and Congress are more like security deposits than haggling over the price of strawberries at a farmer’s market. If Congress rejects the deal (which would require overriding a presidential veto), they can’t go to Iran and negotiate a new deal for themselves. The Obama administration controls dealings with Iran, giving it all of the proposal power. Bargaining theory would therefore predict that the Obama administration will be very satisfied[2], while Congress will find the deal about as attractive as if there were no deal at all.

And that’s basically what we are seeing right now. Congress is up in arms over the deal (hehe). They are going to make a big show about what they claim is an awful agreement, but they don’t have any say about the terms beyond an up/down vote. That—combined with the fact that Obama only needs 34 senators to get this to work—means that the Obama administration is going to receive a very favorable deal for itself.

[1] Here is my take on why such deals work. The paper is a bit dated, but it gets the point across.

[2] I mean that the Obama administration will be very satisfied by the deal insofar as it relates to its disagreement with Congress. It might not be so satisfied by the deal insofar as it relates to its disagreement with Iran.

Serial and Credible Threats

[Serial Podcast Season 1 spoilers below]

I’m going to assume you have gone through the first season of Serial and know most of the background. However, some important recap:

According to Jay’s testimony, Adnan strangled Hae and then solicited Jay’s help to dispose of the body. The police wondered why Jay, an acquaintance of Adnan, would ever go along with that. Jay stated that he initially refused, but Adnan threatened to go to the cops about Jay’s pot dealing. Wanting to avoid that, Jay became an accessory to murder.

To me, this makes no sense at all. I could understand why Jay might prefer burying the body to having to deal with the police over some (relatively minor) marijuana, but the latter scenario would never happen. Adnan simply does not have a credible threat here. If Adnan goes to the police to turn in Jay, Jay can easily plead out of the crime by handing them Adnan. Jay has all the leverage here. Adnan has none.

Did Jay not realize this? I can’t imagine that is true. Jay is supposed to be street-smart. He might not understand the difference between Nash equilibrium and subgame perfect equilibrium, but he certainly should understand the difference between a credible threat and an incredible threat.

Would Jay not be willing to snitch on Adnan if Adnan turned him in? If so, then Adnan would not be deterred from pointing the police to Jay, and so maybe Jay would go along with it. But I can’t imagine this is true either. First, it would require Jay to not want to rat on the guy who just ratted on him, even though it would likely mean that Jay’s charges would be dropped. Second, we in fact know Jay was willing to snitch on Adnan—because he did!

This leads me to conclude that Jay’s lying. I’m not sure why or what it means, but I think it’s important.

TL;DR: Jay’s story is not subgame perfect.

Am I missing something here?

The Game Theory of the Cardinals/Astros Spying Affair

The NY Times reported today that the St. Louis Cardinals hacked the Houston Astros’ internal files, including information on the trade market. I suspect that everyone has a basic understanding of why the Cardinals would find this information useful. “Knowledge is power,” as they say. Heck, the United States spends $52.6 billion each year on spying. But how game theorists quantify this intuition is both interesting and under-appreciated. That is the topic of this post.

Why Trade?
Trades are very popular in baseball, and the market will essentially take over sports headlines as we approach the July 31 trading deadline. Teams like to trade for the same reason countries like to trade with each other. Entity A has a lot of object X but lacks Y, while Entity B has a lot of object Y but lacks X. So teams swap a shortstop for an outfielder, and bad teams exchange their best players for good teams’ prospects. Everyone wins.

However, the extent to which one side wins also matters. If the Angels trade a second baseman to the Dodgers for a pitcher, they are happier than if they have to trade that same second baseman for that same pitcher and pay an additional $1 million to the Dodgers. Figuring out exactly what to offer is straightforward when each side is aware of exactly how much the other values all the components. In fact, bargaining theory indicates that teams should reach such deals rapidly. Unfortunately, life is not so simple.

The Risk-Return Tradeoff
What does a team do when it isn’t sure of the other side’s bottom line? They face what game theorists call a risk-return tradeoff. Suppose that the Angels know that the Dodgers are not willing to trade the second baseman for the pitcher straight up. Instead, the Angels know that the Dodgers either need $1 million or $5 million to sweeten the deal. While the Angels would be willing to make the trade at either price, they are not sure exactly what the Dodgers require.

For simplicity, suppose the Angels can only make a single take-it-or-leave-it offer. They have two choices. First, they can offer the additional $5 million. This is safe and guarantees the trade. However, if the Dodgers were actually willing to accept only $1 million, the Angels unnecessarily waste $4 million.

Alternatively, the Angels could gamble that the Dodgers will take the smaller $1 million amount. If this works, the Angels receive a steal of a deal. If the Dodgers actually needed $5 million, however, the Angels burned an opportunity to complete a profitable trade.

To generalize, the risk-return tradeoff says the following: the more one offers, the more likely the other side is to accept the deal. Yet, simultaneously, the more one offers, the worse that deal becomes for a proposer. Thus, the more you risk, the greater return you receive when the gamble works, but the gamble also fails more often.


Knowledge Is Power
The risk-return tradeoff allows us to precisely quantify the cost of uncertainty. In the above example, offering the safe amount wastes $4 million times the probability that the Dodgers were only willing to accept $1 million. Meanwhile, making the aggressive offer wastes the value the Angels place on the trade times the probability that the Dodgers needed $5 million to accept; this is because the trade falls through under those circumstances. Consequently, the Angels are damned if they do and damned if they don’t. The risk-return tradeoff forces them to figure out how to minimize their losses.
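A minimal sketch of this calculation, using the numbers from the example; the trade’s value to the Angels (V = $6 million) is a figure I added for illustration and is not from the post.

```python
# Risk-return tradeoff sketch: the Dodgers need either $1M or $5M to
# accept, with probability p on the low type. V is an assumed value.

V = 6.0  # Angels' value from completing the trade, in $M (assumed)

def expected_value(offer, p):
    """Angels' expected net gain from offering 1 or 5 ($M)."""
    if offer >= 5:
        # Safe offer: both Dodger types accept for sure.
        return V - offer
    # Aggressive offer: only the low type (probability p) accepts;
    # otherwise the trade falls through and the Angels get nothing.
    return p * (V - offer)

for p in (0.1, 0.5, 0.9):
    safe, risky = expected_value(5, p), expected_value(1, p)
    best = "offer $1M" if risky > safe else "offer $5M"
    print(f"p = {p}: safe = {safe:.1f}, risky = {risky:.1f} -> {best}")
```

In this parameterization the comparison flips at p = 0.2: below it, the safe $5 million offer wins; above it, gambling on the $1 million offer does.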

At this point, it should be clear why the Cardinals would value the Astros’ secret information. The more information the Cardinals have about other teams’ minimal demands, the better they will fare in trade negotiations. The Astros’ database provided such information. Some of it was about what the Astros were looking for. Some of it was about what the Astros thought others were looking for. Either way, extra information for the Cardinals organization would decrease the likelihood of miscalculating in trade negotiations. And apparently such knowledge is so valuable that it was worth the risk of getting caught.

Why Are the NBA Finals on Sundays and NHL Finals on Saturdays?

A simple answer: iterated elimination of strictly dominated strategies.

The NBA and NHL have an unfortunate scheduling issue: their finals take place at roughly the same time, and having games scheduled at the same time would hurt both of their ratings. But this isn’t a simple coordination game. Everyone wants to avoid playing on Fridays, which is the worst night for ratings. This forces one series to play games on Sundays, Tuesdays, and Thursdays, with the other on Saturdays, Mondays, and Wednesdays. The first series is far more favorable for ratings and advertisements: it avoids the dreaded Friday and Saturday nights entirely and also hits the coveted Thursday night slot.[1]

So who gets the good slot and why?

Well, the NBA wins because of its popularity. Some sports fans will watch hockey or basketball no matter what, but a sizable share of the population would be willing to watch both. Sadly for the NHL, though, those general sports fans break heavily in favor of the NBA. This allows the NBA to take its preferred slot and forces the NHL to be the follower.

A more technical answer relies on iterated elimination of strictly dominated strategies. In my textbook, I have an analogous example involving a couple of nightclubs, ONE and TWO.[2] Both need to decide whether to schedule a salsa or a disco theme. (This is like deciding whether to schedule games on Saturdays or Sundays.) More patrons prefer salsa to disco. However, ONE has an advantage in that it is closer to town, giving individuals a general preference for it. Thus, TWO really wants to avoid matching its choice with ONE.

We might imagine a payoff matrix like this:


So TWO can still break even if it picks the same choice as ONE but needs to mismatch to make a profit.

How should TWO decide what to do? Well, it should observe that ONE ought to pick salsa regardless of TWO’s choice—no matter what TWO picks, ONE always makes more by choosing salsa in response. Deducing that ONE will pick salsa, TWO can safely fall back on disco.

In the NBA/NHL case, the NHL must recognize that the NBA knows it will draw uncommitted fans regardless of the NHL’s choice. This means that the NBA should pick Sunday regardless of what the NHL selects. In turn, the NHL can safely place hockey on Saturday. It’s not the perfect outcome, but it’s the best the NHL can do given the circumstances.
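The payoff matrix appears as an image in the original post; the numbers below are hypothetical ones of my own, chosen to be consistent with the description (ONE always does better with salsa; TWO breaks even when matching and profits when mismatching). The routine then runs iterated elimination of strictly dominated strategies on them.

```python
# Hypothetical payoffs for the nightclub game; (ONE's payoff, TWO's payoff).
STRATS = ["salsa", "disco"]
PAYOFFS = {
    ("salsa", "salsa"): (10, 0),   # TWO matches ONE: breaks even
    ("salsa", "disco"): (15, 5),   # TWO mismatches: profits
    ("disco", "salsa"): (8, 6),
    ("disco", "disco"): (6, 0),
}

def iesds(row_strats, col_strats):
    """Iteratively remove strictly dominated strategies for both players."""
    rows, cols = list(row_strats), list(col_strats)
    changed = True
    while changed:
        changed = False
        # Remove ONE's strictly dominated rows.
        for r in rows[:]:
            if any(all(PAYOFFS[(r2, c)][0] > PAYOFFS[(r, c)][0] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        # Remove TWO's strictly dominated columns.
        for c in cols[:]:
            if any(all(PAYOFFS[(r, c2)][1] > PAYOFFS[(r, c)][1] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

print(iesds(STRATS, STRATS))  # ONE keeps salsa; TWO then falls back on disco
```

The first pass deletes ONE’s disco (salsa strictly dominates it); with only salsa left for ONE, TWO’s salsa becomes strictly dominated, leaving the (salsa, disco) outcome described above.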

[1] Thursdays are the biggest day for ad sales because entertainment companies want to compete for leisure business (movies, theme parks, etc.) over the weekend.

[2] I used these names in the textbook not only because they represent Player ONE and Player TWO but also because Rochester (where I went to grad school) has a club called ONE. This led to an interesting conversation when the Graduate Student Association scheduled an open bar there. I was relatively new at the time and didn’t know much about the city. After hearing rumors about the event, I asked a fellow grad student where it would be. “ONE,” she said.

“Yes, I know it’s at 1, but where is it?”


The last two lines were repeated more times than I would like to admit.


Can More Information Ever Hurt You?

The answer would seem to be no. After all, if information is bad for you, you could always ignore it, continue living your life naively, and do better. Further, it is easy to write down games where a player’s payoff increases with the amount of information he has, and there are plenty of applications positively connecting information to welfare, like the Condorcet jury theorem.

In reality, the answer is yes. Unfortunately, you can’t always credibly commit to ignoring that information. This can lead to other players not trusting you later on in an interaction, which ultimately leads to a lower payoff for you.

Here’s an example. We begin by flipping a coin and covering it so that neither player observes which side is facing up. Player 1 then chooses whether to quit the game or continue. Quitting ends the game and gives 0 to both players. If he continues, player 2 chooses whether to call heads, tails, or pass. If she passes, both earn 1. If she calls heads or tails, player 2 earns 3 for making the correct call and -3 for making the incorrect call, while player 1 receives -1 regardless.

Because player 2 doesn’t observe the flip, her expected payoff for calling heads or tails is 0. As such, we can write the game tree as follows:


Backward induction easily gives the solution: player 2 chooses pass, so player 1 chooses continue. Both earn 1.

If information can only help, then allowing player 2 access to the result of the coin flip before she moves shouldn’t decrease her payoff. But look what happens when the coin flip is heads:


Now the solution is for player 2 to choose heads and player 1 to quit. Both earn 0!

The case where the coin landed on tails is analogous. Player 2 now chooses tails and player 1 still quits. Both earn 0, meaning player 1 is worse off knowing the result of the coin flip.

What’s going on here? The issue is credible commitment. When player 2 does not know the result of the coin flip, she can credibly commit to passing; although heads or tails could provide a greater payoff, the pass option generates the higher utility in expectation. This credible commitment assuages player 1’s concern that player 2 will screw him over, so he continues even though he could guarantee himself a break even outcome by quitting.

On the other hand, when player 2 knows the result of the coin flip, she cannot credibly commit to passing. Instead, she can’t help but pick the option (heads or tails) that gives her a payoff of 3. But this results in a commitment problem, wherein player 1 quits before player 2 picks an outcome that gives player 1 a payoff of -1. Both end up worse off because of it.
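The backward induction in both informational settings can be sketched as follows; `solve` is an illustrative helper of mine, not standard library code.

```python
# Backward induction for the coin-flip game, with and without player 2
# observing the flip before she moves.

def solve(player2_sees_flip):
    """Backward-induction payoffs (player 1, player 2)."""
    if player2_sees_flip:
        # Player 2 matches the flip: a call pays 3, beating pass (1).
        best_p2 = 3
        p1_given_continue = -1   # player 1 eats -1 from the call
    else:
        # Uninformed, a call pays 0.5 * 3 + 0.5 * (-3) = 0 < 1, so she
        # passes, which pays both players 1.
        best_p2 = 1
        p1_given_continue = 1
    # Player 1 continues only if that beats quitting (0 for both).
    if p1_given_continue > 0:
        return (p1_given_continue, best_p2)
    return (0, 0)

print(solve(player2_sees_flip=False))  # both earn 1
print(solve(player2_sees_flip=True))   # player 1 quits; both earn 0
```

The informed case makes the commitment problem explicit: player 2’s best response to “continue” flips from pass to call, so player 1 preempts her by quitting, and her extra information costs both players their payoff of 1.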

Weird counterexamples like this prevent us from making sweeping claims about whether more information is inherently a good thing. I noted at the beginning that it is easy to write down games where payoffs increase for a player as his information increases. Most game theorists would probably agree that more information is usually better. But it does not appear that we can prove general claims about the relationship.