Category Archives: Bargaining

Let’s Temper Expectations with Iran

US and Iran diplomatic teams negotiating the nuclear deal’s implementation in 2016

Recently, some commentators have suggested that the Biden administration use Trump’s withdrawal from the Joint Comprehensive Plan of Action—the Iran Deal—as leverage to obtain a better agreement. This logic has a tempting appeal. If the United States appears to be less interested in an agreement, and Iran still wants one, then Iran might have to offer more concessions to coax the United States.

However, setting this as the expectation misses the bigger picture. The United States is in a poor bargaining position for three reasons. Negotiations like these are generally difficult for the opponent of the potential nuclear weapons state. Leaving the deal has caused Iran to become more capable of developing nuclear weapons, not less. And because the United States has shown a willingness to leave agreements, the United States must sweeten the pot to coax moderates in Iran to take a domestic political risk by entering a new deal.

As such, the Biden administration ought to temper expectations heading into the new round of negotiations.

Nuclear Negotiations Favor the Would-Be Proliferator
The first problem is structural. Nonproliferators—the United States in this case—generally have the inferior position.

The deadlock with Iran will end in one of three ways: Iran developing a nuclear weapon, preventive war, or a deal. The United States has a long-standing policy to curb nuclear proliferation worldwide. Iran—a long-time adversary—does not get a pass here. So the first option is off the table.

Preventive war is not a good outcome either. Optimists may look at how Israel handled Iraqi and Syrian programs and aspire to replicate those precision strikes. But there are major differences between those programs and Iran’s. Iran is in later stages, with multiple key facilities. Anticipating potential strikes, Iran even built its Fordow facility underground. This would mean that an effective preventive war would require boots on the ground and a more expansive mission. Lessons from Afghanistan and Iraq indicate that the United States should think twice here.

The Fordow Fuel Enrichment Plant, most of which is underground

That leaves a deal as the final remaining option. As I write about in Bargaining over the Bomb, if the United States wants long-term Iranian compliance, the deal must be generous. That is because the agreement must be better for Iran than building a weapon. A starting point for negotiations is therefore to treat a potential proliferator as though it already has nuclear weapons. Put bluntly, the United States needs to afford Iran about the same amount of begrudging respect it gives to other nuclear powers, like Pakistan.

Yes, this is a high price to pay. But insufficient concessions will induce Iran to build nuclear weapons. At that point, the United States will have to give that begrudging respect anyway and suffer the systemic instability that comes with another nuclear power born into the world.

Iran’s Increased Nuclear Competency
One benefit of reaching an agreement is that Iran does not spend money and resources on building and maintaining nuclear weapons. As a result, the United States can somewhat tilt the agreement in its favor.

But this reveals the second problem with trying to obtain a better deal than what the original Iran Deal provided. The United States withdrew from the agreement in 2018. Iran initially maintained compliance with provisions to freeze its nuclear infrastructure despite the Trump administration’s departure.

However, Iran recently reinitiated work on uranium centrifuges, the critical technology necessary to produce fissile material for nuclear power plants—and nuclear weapons. That means Iran can more easily build nuclear weapons today than it could four years ago.

If you want to stop someone from doing something, you will have to pay them more as doing that thing becomes cheaper and easier. That is Bargaining 101. But that means the United States must be more generous this time around, not less.

Furthermore, if the United States wastes time holding out for a better deal, Iran’s competency will only increase in the interim. That further shrinks the amount that negotiators can extract out of Iran.

The Inconsistency Premium
The final problem is that signaling an unwillingness to stick to an agreement is actively harmful, not helpful.

Biden may want to reach an agreement now and stick to it in the long term, but Iran must worry about what will happen in 2024 or 2028. Will a Trump-thinking candidate come into office and tear up the agreement once again?

The fact that the United States did this once suggests that it could easily happen again. If so, why would a moderate in Iran want to expend domestic political capital that will not provide lasting benefits?

Fortunately, the United States can overcome this problem in the short-term. But that means sweetening the pot for moderates in Iran to go along with it and expose themselves to those political risks.

There Is Still Time
Fortunately, there are concrete steps the Biden administration can take to overcome these problems. Iran’s nuclear fate has not been sealed. What the United States does in the upcoming months may very well decide it.

The first step is to revert to Obama-era policies. This means terminating the Trump administration’s latest economic sanctions against Iran. The same goes for the targeted sanctions against Mohammad Javad Zarif, Iran’s Foreign Minister and chief negotiator of the Joint Comprehensive Plan of Action.

Mohammad Javad Zarif, shaking hands with U.S. lead negotiator John Kerry, in 2015

The second step is a change in rhetoric. Improved relations with the United States are a central policy goal for Iran, and obtaining them would reduce the payoff a nuclear weapon would provide. But right now, the United States is working against that. The two rivals are currently engaged in a stare down, waiting for the other side to blink to restart negotiations. For the reasons described above, Washington’s position gets worse as time progresses. Extending an invitation to return to the table simultaneously softens the rhetoric and gets the process moving.

That is not to say that any of this will be politically easy. The original Iran Deal faced a fair degree of skepticism in Congress in 2015. More recently, some Republicans pushed Trump to send the Iran Deal to the Senate—precisely so that it could meet a public defeat.

In general, though, Americans should temper expectations. The United States has a bad hand and can only play the cards it holds. It would be a mistake to let the misplaced hope of a perfect deal get in the way of a good deal.

Why Are Nuclear Agreements Credible?

Compliance is a central issue in arms control negotiations. Take Iran as an example. The United States has long pitched a better world standing for Iran in exchange for Tehran ending its pursuit of nuclear weapons. President Obama once described such an agreement in the following way:

Iran must comply with UN Security Council resolutions and make clear it is willing to meet its responsibilities as a member of the community of nations. We have offered Iran a clear path toward greater international integration if it lives up to its obligations…. But the Iranian government must now demonstrate through deeds its peaceful intentions…

At first pass, such a trade may seem impossible. The United States has to give concessions to Iran to make nonproliferation look attractive. But nothing stops Iran from accepting those concessions, building nuclear weapons anyway, and then leveraging its atomic threat for all of the corresponding benefits. Worse, if the United States expects this, then it has no incentive to offer any sort of deal in the first place.

I, for one, certainly felt that way when I watched Obama pitch the deal in 2009. In fact, I wrote a book that explores those incentives, which just came out:

Bargaining over the Bomb’s central finding is that, despite appearances to the contrary, those deals work. Countries like Iran do not have an inherent incentive to take those concessions and run with them. This is true even if Iran could build nuclear weapons without the United States noticing until the bombs are finished.

A little bit of formalization helps explain why. All we need are a few parameters to map out the incentives. Suppose that nuclear weapons are useful for coercive leverage, and let p be the percentage of the benefits the would-be proliferator can extract once it has acquired those weapons. Let k > 0 represent the cost of building them. Finally, let 𝛿 > 0 represent how much the would-be proliferator cares about the future.

From this, we can calculate what portion of the benefits the would-be proliferator requires now and for the rest of time to not want to develop nuclear weapons. Let x be that necessary share, so that if a deal is made and sustained, the would-be proliferator receives x for today and 𝛿x for the future, for a total payoff of (1 + 𝛿)x.

The apparent barrier to agreements is that the would-be proliferator can take x for today, pay the cost c, acquire nuclear weapons, and then capture p portion of the benefits in the future. And if it can get away with keeping that x value in the interim, why wouldn’t it?

Well, summing up that payoff for proliferation and comparing it to the payoff for accepting the deal, the potential proliferator prefers to not build if:

(1 + 𝛿)x > x + 𝛿p − k
which simplifies to
x > p − k/𝛿

So as long as the offer clears this threshold, the would-be proliferator is willing to accept an agreement after all!

Where did intuition fail us? The key to understanding why an agreement works is that proliferation only provides a finite amount of benefits. If the opponent offers the potential proliferator those benefits up front, then proliferating at that point is unprofitable—developing weapons leaves the potential proliferator exactly where it was before, but it must pay the costs of proliferation in the interim.

We can see those incentives in the minimum acceptable offer. The value x must be close to p—in other words, the quantity of concessions given immediately must be close to the benefits the potential proliferator would receive if it built the weapons.

To be more precise, the opposing state can fudge this a little—by k/𝛿, to be exact. This is because, by accepting the deal, the would-be proliferator does not have to pay the cost to build. Note that when the would-be proliferator cares very little about the future, 𝛿 goes toward 0, and thus the potential proliferator needs even less to accept an agreement: the benefits of a bomb arrive only in the future, while the cost of building must be paid up front.
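For readers who like to plug in numbers, here is a minimal sketch of that comparison in Python. The parameter values are purely illustrative; they are not calibrated to Iran or to anything in the book.

```python
# Minimal sketch of the deal-versus-proliferate comparison above.
# p, k, and delta are illustrative numbers, not estimates from the book.

def min_acceptable_share(p, k, delta):
    """Smallest per-period share x that dissuades building: x must exceed p - k/delta."""
    return p - k / delta

def prefers_deal(x, p, k, delta):
    """Deal payoff (1 + delta) * x versus proliferation payoff x + delta * p - k."""
    return (1 + delta) * x > x + delta * p - k

p, k, delta = 0.6, 0.1, 0.9             # bomb would capture 60% of the benefits; building costs 0.1
print(round(min_acceptable_share(p, k, delta), 3))  # ~0.489, just shy of p
print(prefers_deal(0.50, p, k, delta))  # True: an offer above the threshold gets accepted
print(prefers_deal(0.40, p, k, delta))  # False: too stingy, so building looks better
```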

In Bargaining over the Bomb, I develop this model in much greater depth, providing context in how the states calculate the benefits of proliferation, how preventive war fits into this, and whether the opponent would actually want to offer such a deal. But the central finding remains: would-be proliferators are happy to accept agreements. The first half of the book explores some facets of those agreements while the second half explains why bargaining may still fail.

Building this model and writing the book has made me a lot more optimistic about whether negotiations with countries like Iran can succeed. If you share the same skepticism I once did, I would encourage you to read the book and think more about the central incentives at play. I don’t think agreements are universally workable, but they should be a critical part of our policymakers’ tool kits.

Does Increasing the Costs of Conflict Decrease the Probability of War?

According to many popular theories of war, the answer is yes. In fact, this is the textbook relationship for standard stories about why states would do well to pursue increased trade ties, alliances, and nuclear weapons. (I am guilty here, too.)

It is easy to understand why this is the conventional wisdom. Consider the bargaining model of war. In the standard set-up, one side expects to receive p portion of the good in dispute, while the other receives 1-p. But because war is costly, both sides are willing to take less than their expected share to avoid conflict. This gives rise to the famous bargaining range: the set of settlements between p − c_A and p + c_B (where c_A and c_B are the two sides’ costs of fighting) that both prefer to war.

Notice that when you increase the costs of war for both sides, the bargaining range grows bigger: the left endpoint p − c_A slides down while the right endpoint p + c_B slides up.

Thus, in theory, the reason that increasing the costs of conflict decreases the probability of war is that it makes the set of mutually preferable alternatives larger. In turn, it should be easier to identify one such settlement. Even if no one is being strategic, if you randomly throw a dart on the line, additional costs make you more likely to hit the range.
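As a quick illustration, here is a toy calculation in Python; the numbers are made up, and cost_a and cost_b stand in for the two sides’ costs of fighting.

```python
# Toy version of the standard bargaining range: side A expects p from war minus its cost,
# side B expects 1 - p minus its cost, so any split between the endpoints beats fighting.

def bargaining_range(p, cost_a, cost_b):
    """Settlements x with p - cost_a <= x <= p + cost_b are mutually preferable to war."""
    return (p - cost_a, p + cost_b)

print(bargaining_range(0.5, 0.1, 0.1))  # (0.4, 0.6): a range of width 0.2
print(bargaining_range(0.5, 0.2, 0.2))  # (0.3, 0.7): doubling the costs doubles the width
```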

Nevertheless, history often yields international crises that run counter to this logic; World War I broke out despite the era’s dense trade ties, for example. Intuition based on some formalization is not the same as solving for equilibrium strategies and taking comparative statics. Further, while it is true that increasing the costs of conflict decreases the probability of war for most mechanisms, this is not a universal law.

Such is the topic of a new working paper by Iris Malone and myself. In it, we show that when one state is uncertain about its opponent’s resolve, increasing the costs of war can also increase the probability of war.

The intuition comes from the risk-return tradeoff. If I do not know what your bottom line is, I can take one of two approaches to negotiations.

First, I can make a small offer that only an unresolved type will accept. This works great for me when you are an unresolved type because I capture a large share of the stakes. But it also backfires against a resolved type—they fight, leading to inefficient costs of war.

Second, I can make a large offer that all types will accept. The benefit here is that I assuredly avoid paying the costs of war. The downside is that I am essentially leaving money on the table for the unresolved type.

Many factors determine which is the superior option—the relative likelihoods of each type, my risk propensity, and my costs of war, for example. But one under-appreciated determinant is the relative difference between the resolved type’s reservation value (the minimum it is willing to accept) and the unresolved type’s.

Consider the left side of the above figure. Here, the difference between the reservation values of the resolved and unresolved types is fairly small. Thus, if I make the risky offer that only the unresolved type is willing to accept (the underlined x), I’m only stealing slightly more than if I made the safe offer that both types are willing to accept (the bar x). Gambling is not particularly attractive in this case, since I am risking my own costs of war to take only a tiny additional slice of the pie.

Now consider the right side of the figure. Here, the difference in types is much greater. Thus, gambling looks comparatively more attractive this time around.

But note that increasing the military/opportunity costs of war has this precise effect of increasing the gap in the types’ reservation values. This is because unresolved types—by definition—feel incremental increases to the military/opportunity costs of war more strongly than resolved types do. As a result, increasing the costs of conflict can increase the probability of war.

What’s going on here? The core of the problem is that inflating costs simultaneously exacerbates the information problem that the proposer faces. This is because the proposer faces no uncertainty whatsoever when the types have identical reservation values. But increasing costs simultaneously increases the bandwidth of the proposer’s uncertainty. Thus, while increasing costs ought to have a pacifying effect, the countervailing increased uncertainty can sometimes predominate.

The good news for proponents of economic interdependence theory and mutually assured destruction is that this is only a short-term effect. In the long term, the probability of war eventually goes down. This is because sufficiently high costs of war makes each type willing to accept an offer of 0, at which point the proposer will offer an amount that both types assuredly accept.
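To make the argument concrete, here is a stylized sketch in Python. It is not the model from the paper; the parameterization (a common “old” cost, a “new” cost that unresolved types feel three times as strongly, and a 60 percent chance of facing an unresolved type) is invented purely to trace out the pattern just described.

```python
# Stylized risk-return tradeoff under uncertainty about resolve (illustrative numbers only).

def proposer_payoffs(m, p=0.5, old_cost=0.05, lam_resolved=1.0, lam_unresolved=3.0, q=0.6):
    """m is the 'new' cost of war (lost trade, nuclear damage, etc.); q = Pr(opponent is unresolved).
    Unresolved types feel the new cost lam_unresolved times as strongly as resolved types do."""
    r_resolved = max(p - old_cost - lam_resolved * m, 0.0)      # resolved type's reservation value
    r_unresolved = max(p - old_cost - lam_unresolved * m, 0.0)  # unresolved type's reservation value
    war = 1 - p - old_cost - m                                   # proposer's own payoff from fighting
    safe = 1 - r_resolved                                        # generous offer that both types accept
    risky = q * (1 - r_unresolved) + (1 - q) * war               # screening offer; resolved types fight
    return safe, risky

for m in (0.05, 0.14, 0.40):
    safe, risky = proposer_payoffs(m)
    print(m, "screen (war with prob 1 - q)" if risky > safe else "buy everyone off (no war)")
# 0.05 -> no war; 0.14 -> the gap between types is now wide enough that screening pays, so war
# happens with positive probability; 0.40 -> costs are so high that the safe offer wins again.
```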

The above figure illustrates this non-monotonic effect, with the x-axis representing the relative influence of the new costs of war as compared to the old. Note that this has important implications for both economic interdependence and nuclear weapons research. Just because two groups are trading with each other at record levels (say, on the eve of World War I) does not mean that the probability of war will go down. In fact, the parameters for which war occurs with positive probability may increase if the new costs are sufficiently low compared to the already existing costs.

Meanwhile, the figure also shows that nuclear weapons might not have a pacifying effect in the short-run. While the potential damage of 1000 nuclear weapons may push the effect into the guaranteed peace region on the right, the short-run effect of a handful of nuclear weapons might increase the circumstances under which war occurs. This is particularly concerning when thinking about a country like North Korea, which only has a handful of nuclear weapons currently.

As a further caveat, the increased costs only cause more war when the ratio between the receiver’s new costs and the proposer’s costs is sufficiently great compared to that same ratio of the old costs. This is because if the proposer faces massively increased costs compared to its baseline risk-return tradeoff, it is less likely to pursue the risky option even if there is a larger difference between the two types’ reservation values.

Fortunately, this caveat gives a nice comparative static to work with. In the paper, we investigate relations between India and China from 1949 up through the start of the 1962 Sino-Indian War. Interestingly, we show that military tensions boiled over just as growing trade ties were increasing the two sides’ costs of fighting; cooler heads prevailed once again in the 1980s and beyond as potential trade grew to unprecedented levels. Uncertainty over resolve played a big role here, with Indian leadership (falsely) believing that China would back down rather than risk disrupting the trade relationship. We further identify that the critical ratio discussed above held: the lost trade impacted the two countries roughly evenly, while the status quo costs of war were much smaller for China due to its massive (10:1 in personnel alone!) military advantage.

Again, you can view the paper here. Please send me an email if you have some comments!

Abstract. International relations bargaining theory predicts that increasing the costs of war makes conflict less likely, but some crises emerge after the potential costs of conflict have increased. Why? We show that a non-monotonic relationship exists between the costs of conflict and the probability of war when there is uncertainty about resolve. Under these conditions, increasing the costs of an uninformed party’s opponent has a second-order effect of exacerbating informational asymmetries. We derive precise conditions under which fighting can occur more frequently and empirically showcase the model’s implications through a case study of Sino-Indian relations from 1949 to 2007. As the model predicts, we show that the 1962 Sino-Indian war occurred after a major trade agreement went into effect because uncertainty over Chinese resolve led India to issue aggressive screening offers over a border dispute and gamble on the risk of conflict.

Why Appoint Someone More Extreme than You?

From Appointing Extremists, by Michael Bailey and Matthew Spitzer:

Given their long tenure and broad powers, Supreme Court Justices are among the most powerful actors in American politics. The nomination process is hard to predict and nominee characteristics are often chalked up to idiosyncratic features of each appointment. In this paper, we present a nomination and confirmation game that highlights…important features of the nomination process that have received little emphasis in the formal literature . . . . [U]ncertainty about justice preferences can lead a President to prefer a nominee with preferences more extreme than his preferences.

Wait, what? WHAT!? That cannot possibly be right. Someone with your ideal point can always mimic what you would want them to do. An extremist, on the other hand, might try to impose a policy further away from your optimal outcome.

But Bailey and Spitzer will have you convinced within a few pages. I will try to get the logic down to two pictures, inspired by the figures from their paper. Imagine the Supreme Court consists of just three justices. One has retired, leaving two justices with ideal points J_1 and J_2. You are the president, and you have ideal point P with standard single-peaked preferences. You can pick a nominee with any expected ideological positioning. Call that position N. Due to uncertainty, though, the actual realization of that justice’s ideal point is distributed uniformly on the interval [N – u, N + u]. Also, let’s pretend that the Senate doesn’t exist, because a potential veto is completely irrelevant to the point.

Here are two options. First, you could nominate someone whose expected position sits right on top of your own ideal point, so that N = P.

Or you could nominate someone further to the right in expectation, at some position N' > P.

The first one is always better, right? After all, the nominee will be a lot closer to you on average.

Not so fast. Think about the logic of the median voter. If you nominate the more extreme justice (N’), you guarantee that J_2 will be the median voter on all future cases. If you nominate the justice you expect to match your ideological position, you will often get J_2 as the median voter. But sometimes your nominee will actually fall to the left of J_2. And when that’s the case, your nominee becomes the median voter at a position less attractive than J_2. Thus, to hedge against this circumstance, you should nominate a justice who is more extreme (on average) than you are. Very nice!
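A quick Monte Carlo check makes the point. The positions below (J_1, J_2, P, and the uncertainty u) are numbers I picked for illustration, not values from Bailey and Spitzer.

```python
import random

# Monte Carlo illustration: nominating at your own ideal point versus further right.
random.seed(0)
J1, J2, P, u = -1.0, 0.4, 0.5, 0.6   # sitting justices, president, nominee uncertainty

def expected_loss(N, trials=200_000):
    """Average distance between the president and the median justice when the
    nominee's realized ideal point is uniform on [N - u, N + u]."""
    total = 0.0
    for _ in range(trials):
        nominee = random.uniform(N - u, N + u)
        median = sorted([J1, J2, nominee])[1]
        total += abs(median - P)
    return total / trials

print(round(expected_loss(P), 3))      # ~0.204: the nominee sometimes lands left of J2 and drags the median left
print(round(expected_loss(P + u), 3))  # 0.1: the extreme nominee guarantees J2 stays the median
```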

Obviously, this was a simple example. Nevertheless, the incentive to nominate someone more extreme still influences the president under a wide variety of circumstances, whether he has a Senate to contend with or he has to worry about future nominations. Bailey and Spitzer cover a lot of these concerns toward the end of their manuscript.

I like this paper a lot. Part of why it appeals to me is that they relax the assumption that ideal points are common knowledge. This is certainly a useful assumption to make for a lot of models. For whatever reason, though, both the American politics and IR literatures have almost made this certainty axiomatic. Some of my recent work—on judicial nominees with Maya Sen and crisis bargaining (parts one and two) with Peter Bils—has relaxed this and found interesting results. Adding Bailey and Spitzer to the mix, it appears that there might be a lot of room to grow here.

Understanding the Iran Deal: A Model of Nuclear Reversal

Most of the discussion surrounding the Joint Comprehensive Plan of Action (JCPOA, or the “Iran Deal”) has focused on mechanisms that monitor Iranian compliance. How can we be sure Iran is using this facility for scientific research? When can weapons inspectors show up? Who gets to take the soil samples? These kinds of questions seem to be the focus.

Fewer people have noted Iran’s nuclear divestment built into the deal. Yet Iran is doing a lot here. To wit, here are some of the features of the JCPOA:

  • At the Arak facility, the reactor under construction will be filled with concrete, and the redesigned reactor will not be suitable for weapons-grade plutonium. Excess heavy water supplies will be shipped out of the country. Existing centrifuges will be removed and stored under round-the-clock IAEA supervision at Natanz.
  • The Fordow Fuel Enrichment Plant will be converted to a nuclear, physics, and technology center. Many of its centrifuges will be removed and sent to Natanz under IAEA supervision. Existing cascades will be modified to produce stable isotopes instead of uranium hexafluoride. The associated pipework for the enrichment will also be sent to Natanz.
  • All enriched uranium hexafluoride in excess of 300 kilograms will be downblended to 3.67% or sold on the international market.

Though such features are fairly common in arms agreements, they are nevertheless puzzling. None of this makes proliferation impossible, so the terms cannot be for that purpose. But they clearly make proliferating more expensive, which seems like a bad move for Iran if it truly wants to build a weapon. On the other hand, if Iran only wants to use the proliferation threat to coerce concessions out of the United States, this still seems like a bad move. After all, in bargaining, the deals you receive are commensurate with your outside options; make your outside options worse, and the amount of stuff you get goes down as well.

The JCPOA, perhaps the most poorly formatted treaty ever.

What gives? In a new working paper, I argue that undergoing such a reversal works to the benefit of potential proliferators. Indeed, potential proliferators can extract the entire surplus by divesting in this manner.

In short, the logic is as follows. Opponents (like the United States versus Iran) can deal with the proliferation problem in one of two ways. First, they can give “carrots” by striking a deal with the nuclearizing state. These types of deals provide enough benefits to potential proliferators that building weapons is no longer profitable. Consequently, and perhaps surprisingly, they are credible even in the absence of effective monitoring institutions.

Second, opponents can leverage the “stick” in the form of preventive war. The monitoring problem makes this difficult, though. Sometimes following through on the preventive war threat shuts down a real project. Sometimes the threat is just a bluff. Sometimes opponents end up fighting a target that was not even investing in proliferation. Sometimes the potential proliferator can successfully and secretly obtain a nuclear weapon. No matter what, though, this is a mess of inefficiency, both from the cost of war and the cost of proliferation.

Naturally, the opponent chooses the option that is cheaper for it. So if the cost of preventive war is sufficiently low, it goes in that direction. In contrast, if the price of concessions is relatively lower, carrots are preferable.

Note that one determinant of the opponent’s choice is the cost of proliferating. When building weapons is cheap, the concessions necessary to convince the potential proliferator not to build are very high. But if proliferation is very expensive, then making the deal looks very attractive to the opponent.

This is where nuclear reversals like those built into the JCPOA come into play. Think about the exact proliferation cost that flips the opponent’s preference from sticks to carrots. Below that line, the inefficiency weighs down everyone’s payoff. Right above that line, efficiency reigns supreme. But the opponent is right at indifference at this point. Thus, the entire surplus shifts to the potential proliferator!

The following payoff graph drives home this point. A is the potential proliferator; B is the opponent; k* is the exact value that flips the opponent from the stick strategy to the carrot strategy:

Making proliferation more difficult can work in your favor.

If you are below k*, the opponent opts for the preventive war threat, weighing down everyone’s payoff. But jump above k*, and suddenly the opponent wants to make a deal. Note that everyone’s payoff is greater under these circumstances because there is no deadweight loss built into the system.

Thus, imagine that you are a potential proliferator living in a world below k*. If you do nothing, your opponent is going to credibly threaten preventive war against you. However, if you increase the cost of proliferating—say, by agreeing to measures like those in the JCPOA—suddenly you make out like a bandit. As such, you obviously divest your program.

What does this say about Iran? Well, it indicates that a lot of the policy discussion is misplaced for a few reasons:

  1. These sorts of agreements work even in the absence of effective monitoring institutions. So while monitoring might be nice, it is definitely not necessary to avoid a nuclear Iran. (The paper clarifies exactly why this works, which could be the subject of its own blog post.)
  2. Iranian refusal to agree to further restrictions is not proof positive of some secret plan to proliferate. Looking back at the graph, note that while some reversal works to Iran’s benefit, anything past k* decreases its payoff. As such, by standing firm, Iran may be playing a delicate balancing game to get exactly to k* and no further.
  3. These deals primarily benefit potential proliferators. This might come as a surprise. After all, potential proliferators do not have nuclear weapons at the start of the interaction, have to pay costs to acquire those weapons, and can have their efforts erased if the opponent decides to initiate a preventive war. Yet the potential proliferators can extract all of the surplus from a deal if they are careful.
  4. In light of (3), it is not surprising that a majority of Americans believe that Iran got the better end of the deal. But that’s not inherently because Washington bungled the negotiations. Rather, despite all the military power the United States has, these types of interactions inherently deal us a losing hand.

The paper works through the logic of the above argument and discusses the empirical implications in greater depth. Please take a look at it; I’d love to hear your comments.

Bargaining Power and the Iran Deal

Today’s post is not an attempt to give a full analysis of the Iran deal.[1] Rather, I just want to make a quick point about how the structure of negotiations greatly favors the Obama administration.

Recall the equilibrium of an ultimatum game. When two parties are trying to divide a bargaining pie and one side makes a take-it-or-leave-it offer, that proposer receives the entire benefit from bargaining. In fact, even if negotiations can continue past a single offer, as long as a single person controls all of the offers, the receiver still receives none of the surplus.

This result makes a lot of people feel uncomfortable. After all, the outcomes are far from fair. Fortunately, in real life, people are rarely constrained in this way. If I don’t like the offer you propose, I can always make a counteroffer. And if you don’t like that, nothing stops you from making a counter-counteroffer. That type of negotiation is called Rubinstein bargaining, and it ends with a roughly even split of the pie.
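As a back-of-the-envelope comparison, here are the two textbook splits in Python; assuming a common discount factor d for both bargainers is my simplification.

```python
# Textbook division of a pie of size 1 under the two protocols described above.

def ultimatum_split():
    """One take-it-or-leave-it offer: the proposer captures essentially everything."""
    return 1.0, 0.0

def rubinstein_split(d):
    """Alternating offers without end (common discount factor d):
    the first proposer gets 1/(1 + d) and the responder gets d/(1 + d)."""
    return 1 / (1 + d), d / (1 + d)

print(ultimatum_split())       # (1.0, 0.0)
print(rubinstein_split(0.99))  # ~(0.503, 0.497): patient bargainers split the pie almost evenly
```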

In my book on bargaining, though, I point out that there are some prominent exceptions where negotiations take the form of an ultimatum game. For example, when returning a security deposit, your former landlord can write you a check and leave it at that. You could try suggesting a counteroffer, but the landlord doesn’t have to pay attention—you already have the check, and you need to decide whether that’s better than going to court or not. This helps explain why renters often dread the move out.

Unfortunately for members of Congress, “negotiations” between the Obama administration and Congress are more like security deposits than haggling over the price of strawberries at a farmer’s market. If Congress rejects the deal (which would require overriding a presidential veto), they can’t go to Iran and negotiate a new deal for themselves. The Obama administration controls dealings with Iran, giving it all of the proposal power. Bargaining theory would therefore predict that the Obama administration will be very satisfied[2], while Congress will find the deal about as attractive as if there were no deal at all.

And that’s basically what we are seeing right now. Congress is up in arms over the deal (hehe). They are going to make a big show about what they claim is an awful agreement, but they don’t have any say about the terms beyond an up/down vote. That—combined with the fact that Obama only needs 34 senators to get this to work—means that the Obama administration is going to receive a very favorable deal for itself.

[1] Here is my take on why such deals work. The paper is a bit dated, but it gets the point across.

[2] I mean that the Obama administration will be very satisfied by the deal insofar as it relates to its disagreement with Congress. It might not be so satisfied by the deal insofar as it relates to its disagreement with Iran.

The Game Theory of the Cardinals/Astros Spying Affair

The NY Times reported today that the St. Louis Cardinals hacked the Houston Astros’ internal files, including information on the trade market. I suspect that everyone has a basic understanding of why the Cardinals would find this information useful. “Knowledge is power,” as they say. Heck, the United States spends $52.6 billion each year on spying. But the way game theorists have figured out how to quantify this intuition is both interesting and under-appreciated. That is the topic of this post.

Why Trade?
Trades are very popular in baseball, and the market will essentially take over sports headlines as we approach the July 31 trading deadline. Teams like to trade for the same reason countries like to trade with each other. Entity A has a lot of object X but lacks Y, while Entity B has a lot of object Y but lacks X. So teams swap a shortstop for an outfielder, and bad teams exchange their best players for good teams’ prospects. Everyone wins.

However, the extent to which one side wins also matters. If the Angels trade a second baseman to the Dodgers for a pitcher, they are happier than if they have to trade that same second baseman for that same pitcher and pay an additional $1 million to the Dodgers. Figuring out exactly what to offer is straightforward when each side is aware of exactly how much the other values all the components. In fact, bargaining theory indicates that teams should reach such deals rapidly. Unfortunately, life is not so simple.

The Risk-Return Tradeoff
What does a team do when it isn’t sure of the other side’s bottom line? They face what game theorists call a risk-return tradeoff. Suppose that the Angels know that the Dodgers are not willing to trade the second baseman for the pitcher straight up. Instead, the Angels know that the Dodgers either need $1 million or $5 million to sweeten the deal. While the Angels would be willing to make the trade at either price, they are not sure exactly what the Dodgers require.

For simplicity, suppose the Angels can only make a single take-it-or-leave-it offer. They have two choices. First, they can offer the additional $5 million. This is safe and guarantees the trade. However, if the Dodgers were actually willing to accept only $1 million, the Angels unnecessarily waste $4 million.

Alternatively, the Angels could gamble that the Dodgers will take the smaller $1 million amount. If this works, the Angels receive a steal of a deal. If the Dodgers actually needed $5 million, however, the Angels burned an opportunity to complete a profitable trade.

To generalize, the risk-return tradeoff says the following: the more one offers, the more likely the other side is to accept the deal. Yet, simultaneously, the more one offers, the worse that deal becomes for a proposer. Thus, the more you risk, the greater return you receive when the gamble works, but the gamble also fails more often.

 

Knowledge Is Power
The risk-return tradeoff allows us to precisely quantify the cost of uncertainty. In the above example, offering the safe amount wastes $4 million times the probability that the Dodgers were willing to accept only $1 million. Meanwhile, making an aggressive offer wastes the value the Angels place on the trade times the probability the Dodgers needed $5 million to accept the deal; this is because the trade fails to occur under these circumstances. Consequently, the Angels are damned if they do and damned if they don’t. The risk-return tradeoff forces them to figure out how to minimize their losses.
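Here is that arithmetic in Python. The $10 million value of the trade to the Angels is a number I made up to complete the example; the $1 million and $5 million figures come from the setup above.

```python
# Expected payoffs (in $ millions) from the safe offer versus the aggressive offer.

def expected_payoffs(value_of_trade, prob_cheap, low=1, high=5):
    """prob_cheap = probability the Dodgers only need the $1M sweetener."""
    safe = value_of_trade - high                 # trade always happens, but overpays by $4M when unnecessary
    risky = prob_cheap * (value_of_trade - low)  # trade only happens against the 'cheap' Dodgers
    return safe, risky

for q in (0.3, 0.6, 0.9):
    safe, risky = expected_payoffs(value_of_trade=10, prob_cheap=q)
    print(q, "gamble on $1M" if risky > safe else "offer the full $5M")
# With a $10M trade, gambling only pays once q exceeds 5/9; knowing the Dodgers' true
# bottom line is therefore worth real money.
```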

At this point, it should be clear why the Cardinals would value the Astros’ secret information. The more information the Cardinals have about other teams’ minimal demands, the better they will fare in trade negotiations. The Astros’ database provided such information. Some of it was about what the Astros were looking for. Some of it was about what the Astros thought others were looking for. Either way, extra information for the Cardinals organization would decrease the likelihood of miscalculating in trade negotiations. And apparently such knowledge is so valuable that it was worth the risk of getting caught.

Game Theory and Bargaining on The Good Wife

Last week’s episode of The Good Wife (“Trust Issues”) was interesting for two reasons: it used a “ripped from the headlines” legal case that I discuss in my book on bargaining, and the legal argument it features is essentially a trivial application of pre-play cheap talk in a repeated prisoner’s dilemma.

The $9 Billion Google/Apple Anti-Trust Lawsuit
First, the background of the real life version of the case. In the early 2000s, Google and Apple (along with Adobe and Intel) allegedly had a “no poaching” gentleman’s agreement. That is, each company in the group pledged to not attempt to hire employees at any of the other companies. The employees eventually figured out what was going on, filed a $9 billion lawsuit, and settled in April 2014 for an undisclosed amount.

Why is the practice illegal? It goes without saying that quashing competition among firms hurts the employees’ bargaining power, and the law is there to protect those employees. But what is not so clear is just how attractive a no poaching agreement is to the firms. In fact, when companies play by the rules, just about all of the potential for profit goes into the employees’ hands.

To see why, imagine that Google and Apple both wanted to hire Karen. Karen has impressive computer programming skills. And because Google and Apple value computing skills at a roughly equal rate, suppose that the most Google would be willing to pay her is $200,000 while Apple’s maximum is $195,000. Put differently, $200,000 and $195,000 represent the break-even points for the respective companies: Karen will bring in $200,000 in value to Google and $195,000 to Apple, so hiring her for any more than that would result in a net loss.

How will that profit ultimately be divided between Karen and her employer? You might think that Google should be the one hiring her. And you are right—she is worth $5000 more to Google than Apple. You might also think that Google will profit handsomely from her employment. However, as I discuss at length in the book, the logic of bargaining shows this to be untrue. If Google offers Karen any less than $195,000, she can always secure a job from Apple; this is because Apple values her at that amount, and so Apple would be willing to slightly outbid Google to hire her. Thus, the outbidding process ultimately ensures that Karen receives at least $195,000. She is the real winner. Although Google might still profit from her employment, its net gain will not exceed $5000 ($200,000 – $195,000).
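Here is a tiny simulation of that outbidding process; the back-and-forth bidding in $1,000 steps is my own simplification of the logic rather than anything from the book.

```python
# Two firms repeatedly top each other's offers for Karen until one drops out.

def competitive_wage(valuations, step=1_000):
    """Each losing firm raises the standing offer as long as it can still profit."""
    wage, employer_value = 0, None
    while True:
        challengers = [v for v in valuations if v != employer_value and v > wage + step]
        if not challengers:
            return employer_value, wage
        employer_value = max(challengers)
        wage += step

winner, wage = competitive_wage([200_000, 195_000])
print(winner, wage)  # 200000 195000: Google hires Karen at $195,000, keeping only $5,000 of the value
```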

Negotiating Collusion
So the firms have great incentive to collude, drive down wages, and secure more of the profits for themselves. What does that sort of collusion look like?

Well, we might think of it as a repeated prisoner’s dilemma. In this type of interaction, in any given year, each of us would maximize profits by trying to poach the rival firm’s employees regardless of what the other firm chooses to do. (If you don’t poach, then I make out like a bandit. If you do poach, I’m still better off poaching and not losing all the employees.) However, because each of us is poaching and driving up employee wages, both of us are ultimately worse off than if we could enforce an agreement that required us to cooperate with each other and not poach.

Of course, anti-trust laws prevent us from explicitly contracting such an agreement in a legally enforceable manner. However, an informal and internally enforceable agreement is possible. Suppose we both start off by cooperating with each other by not poaching. Then, in each subsequent year, if both of us have consistently cooperated before, we continue cooperating. Otherwise, we revert to poaching.

Would anyone like to break the agreement? No. Although I could gain a temporary advantage against you by poaching your employees today, the higher wages over the long-term with mutual poaching are going to vastly outstrip that short-term benefit.
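A minimal sketch of that comparison, using grim-trigger punishment and invented per-year profit numbers:

```python
# Grim trigger: cooperate (don't poach) as long as everyone always has; otherwise poach forever.
# T = one-year profit from poaching a cooperator, R = mutual restraint, P = mutual poaching.

def collusion_sustainable(T, R, P, delta):
    """Cooperation pays when R/(1 - delta) >= T + delta * P/(1 - delta),
    which rearranges to delta >= (T - R)/(T - P)."""
    return delta >= (T - R) / (T - P)

print(collusion_sustainable(T=12, R=10, P=6, delta=0.9))  # True: patient firms keep the pact
print(collusion_sustainable(T=12, R=10, P=6, delta=0.2))  # False: the one-year grab is worth more
```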

This is exactly the type of agreement Google and Apple struck. In fact, when a Google recruiter attempted to hire some Apple employees, Steve Jobs shot the following email to Google bigwigs: “If you hire a single one of these people, that means war.”

Alicia Florrick’s Defense
The episode of The Good Wife featured fictionalized versions of Google and Apple involved in the same affair. Like reality, employees caught on and sued.

The plaintiff’s lawyers thought they had the case in the bag. Indeed, they had turned one of the owners of a trust company against the defense. He went on record that the defense had negotiated the terms of the no poaching policy explicitly and was very happy to agree to the deal.

Alicia Florrick (the defense attorney and titular Good Wife) had a great defense: any discussion of such an agreement is not an unambiguous signal of plans to break the law. These repeated prisoner’s dilemmas have an interesting property in that regardless of whether you plan to cooperate with the other company or screw them over at the first possible moment, you always want to convince the other side that you will cooperate. If you plan to cooperate, then you want to tell the other side to cooperate as well so you can sustain that cooperation in the long term. If you want to follow the law and poach freely instead, you still want to convince the other side that you are going to cooperate so that they cooperate as well, allowing you to screw them over in the process.

So Florrick points out that this type of pre-play communication is meaningless. Regardless of the ultimate intent, the defendant would say the exact same thing. The testimony therefore proves nothing. The plaintiff promptly settled.

All told, I really appreciate two things about the episode: its sophisticated understanding of a potentially very complicated strategic situation and how punny the “Trust Issues” title is.

Park Place Is Still Worthless: The Game Theory of McDonald’s Monopoly

McDonald’s Monopoly begins again today. With that in mind, I thought I would update my explanation of the game theory behind the value of each piece, especially since my new book on bargaining connects the same mechanism to the De Beers diamond monopoly, star free agent athletes, and a shady business deal between Google and Apple. Here’s the post, mostly in its original form:

__________________________________

McDonald’s Monopoly is back. As always, if you collect Park Place and Boardwalk, you win a million dollars. I just got a Park Place. That’s worth about $500,000, right?

Actually, it is worth nothing. Not close to nothing, but absolutely, positively nothing.

It helps to know how McDonald’s structures the game. Despite the apparent value of Park Place, McDonald’s floods the market with Park Place pieces, probably to trick naive players into thinking they are close to riches. I do not have an exact number, but I would imagine there are easily tens of thousands of Park Places floating around. However, there are only one or two Boardwalks available. (Again, I do not know the exact number, but it is equal to the number of million dollar prizes McDonald’s wants to give out.)

Even with that disparity, you might think Park Place maintains some value. Yet, it is easy to show that this intuition is wrong. Imagine you have a Boardwalk piece and you corral two Park Place holders into a room. (This works if you gathered thousands of them as well, but you only need two of them for this to work.) You tell them that you are looking to buy a Park Place piece. Each of them must write their sell price on a piece of paper. You will complete the transaction at the lowest price. For example, if one person wrote $500,000 and the other wrote $400,000, you would buy it from the second at $400,000.

Assume that sell prices are continuous and weakly positive, and that ties are broken by coin flip. How much should you expect to pay?

The answer is $0.

The proof is extremely simple. It is clear that both bidding $0 is a Nash equilibrium. (Check out my textbook or watch my YouTube videos if you do not know what a Nash equilibrium is.) If either Park Place owner deviates to a positive amount, that deviator would lose, since the other guy is bidding 0. So neither player can profitably deviate. Thus, both bidding 0 is a Nash equilibrium.

What if one bid $x greater than or equal to 0 and the other bid $y > x? Then the person bidding x could profitably deviate to any amount between x and y. He still sells the piece, but he receives more for it. Thus, this is a profitable deviation, and bids x and y are not an equilibrium.

The final case is when both players bid the same amount z > 0. With a fair coin flip, each expects z/2; more generally, whatever the tiebreaking mechanism, one player must lose at least half the time and so expects no more than z/2. That player can profitably deviate to 3z/4 and win outright. This sell price is larger than the expectation from tying.

This exhausts all possibilities. So both bidding $0 is the unique Nash equilibrium. Despite requiring another piece, your Boardwalk is worth a full million dollars.
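If you prefer to see the deviations spelled out, here is a quick check of the cases above (amounts in dollars, fair-coin tiebreak assumed):

```python
# Lowest ask sells; ties are broken by a fair coin flip.

def expected_revenue(my_ask, rival_ask):
    """A Park Place holder's expected payment given the two asks."""
    if my_ask < rival_ask:
        return my_ask
    if my_ask > rival_ask:
        return 0.0
    return my_ask / 2  # tie: sell half the time

print(expected_revenue(0, 0))              # 0.0, and asking more only forfeits the sale
print(expected_revenue(100_000, 0))        # 0.0, so there is no profitable deviation from (0, 0)
print(expected_revenue(400_000, 400_000))  # 200000.0 from tying at z = $400,000...
print(expected_revenue(300_000, 400_000))  # 300000.0: undercutting to 3z/4 beats the tie
```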

What is going wrong for the Park Place holders? Supply simply outstrips demand. Any person with a Park Place but no Boardwalk walks away with nothing, which ultimately drives the price of Park Place down to nothing as well.

Moral of the story: Don’t get excited if you get a Park Place piece.

Note 1: If money is discrete down to the cent, then the winning bid could be $0 or $0.01. (With the right tie breaker, it could also be $0.02.) Either way, this is not good for owners of Park Place.

Note 2: In practice, we might see Park Place sell for some marginally higher value. That is because it is (slightly) costly for a Boardwalk owner to seek out and solicit bids from more Park Place holders. However, Park Place itself is not creating any value here—it’s purely the transaction cost.

Note 3: An enterprising Park Place owner could purchase all other Park Place pieces and destroy them. This would force the Boardwalk controller to split the million dollars. While that is reasonable to do when there are only two individuals like the example, good luck buying all Park Places in reality. (Transaction costs strike again!)

__________________________________

Now time for an update. What might not have been clear in the original post is that McDonald’s Monopoly is a simple illustration of a matching problem. Whenever you have a situation with n individuals who need one of m partners, all of the economic benefits go to the partners if m < n. The logic is the same as above. If an individual does not obtain a partner, he receives no profit. This makes him desperate to partner with someone, even if it means drastically dropping his share of the money to be made. But then the underbidding process begins until the m partners are taking all of the revenues for themselves.

In the book, I have a more practical example involving star free agent athletes. For example, there is only one LeBron James. Every team would like to sign him to improve its chances of winning. Yet the bidding ultimately results in a final contract price so high that the team doesn’t actually benefit much (or at all) from signing James.

Well, that’s how it would work if professional sports organizations were not scheming to stop this. The NBA in particular has a maximum salary. So even if LeBron James is worth $50 million per season, he won’t be paid that much. (The exact amount a player can earn is complicated.) This ensures that the team that signs him will benefit from the transaction but takes money away from James.

Non-sports businesses scheme in similar ways. More than 100 years ago, the De Beers diamond company realized that new mine discoveries would mean that diamond supply would soon outstrip demand. This would kill diamond prices. So De Beers began purchasing tons of mines to intentionally limit production and increase prices. Similarly, Apple and Google once had an informal “no poaching” agreement to not hire each other’s employees. Without the outside bidder, a superstar computer engineer would not be able to increase his wage to the fair market value. Of course, this is highly illegal. Employees filed a $9 billion anti-trust lawsuit when they learned of this. The parties eventually settled the suit outside of court for an undisclosed amount.

To sum up, matching is good for those in demand and bad for those in high supply. With that in mind, good luck finding that Boardwalk!

What Does Game Theory Say about Negotiating a Pay Raise?

A common question I get is what game theory tells us about negotiating a pay raise. Because I just published a book on bargaining, this is something I have been thinking about a lot recently. Fortunately, I can narrow the fundamentals to three simple points:

1) Virtually all of the work is done before you sit down at the table.
When you ask the average person how they negotiated their previous raise, you will commonly hear anecdotes about how that individual said some (allegedly) cunning things, (allegedly) outwitted his or her boss, and received a hefty pay hike. Drawing inferences from this is problematic for a number of reasons:

  1. Anecdotal “evidence” isn’t evidence.
  2. The reason for the raise might have been orthogonal to what was said.
  3. Worse, the raise might have been despite what was said.
  4. It assumes that the boss is more concerned about dazzling words than money, his own job performance, and institutional constraints.

The fourth point is especially concerning. Think about the people who control your salaries. They did not get their job because they are easily persuaded by rehearsed speeches. No, they are there because they are good at making smart hiring decisions and keeping salaries low. Moreover, because this is their job, they engage in this sort of bargaining frequently. It would thus be very strange for someone like that to make such a rookie mistake.

So if you think you can just be clever at the bargaining table, you are going to have a bad time. Indeed, the bargaining table is not a game of chess. It should simply be a declaration of checkmate. The real work is building your bargaining leverage ahead of time.

2) Do not be afraid to reject offers and make counteroffers.
Imagine a world where only one negotiator had the ability to make an offer, while the other could only accept or reject that proposal. Accepting implements the deal; rejecting means that neither party enjoys the benefits of mutual cooperation. What portion of the economic benefits will the proposer take? And how much of the benefits will go to the receiver?

You might guess that the proposer has the advantage here. And you’d be right. What surprises most people, however, is the extent of the advantage: the proposer reaps virtually all of the benefits of the relationship, while the receiver is barely any better off than had the parties not struck a deal.

How do we know this? Game theory allows us to study this exact scenario rigorously. Indeed, the setup has a specific name: the ultimatum game. It shows that a party with the exclusive right to make proposals has all of the bargaining power.

 

That might seem like a big problem if you are the one receiving the offers. Fortunately, the problem is easy to solve in practice. Few real life bargaining situations expressly prohibit parties from making counteroffers. (As I discuss in the book, return of security deposits is one such exception, and we all know that turns out poorly for the renter—i.e., the receiver of the offer.) Even the ability to make a single counteroffer drastically increases an individual’s bargaining power. And if the parties could potentially bargain back and forth without end—called Rubinstein bargaining, perhaps the most realistic of proposal structures—bargaining equitably divides the benefits.

As the section header says, the lesson here is that you should not be afraid to reject low offers and propose a more favorable division. Yet people often fail to do this. This is especially common at the time of hire. After culling through all of the applications, a hiring manager might propose a wage. The new employee, deathly afraid of losing the position, meekly accepts.

Of course, the new employee is not fully appreciating the company’s incentives. By making the proposal, the company has signaled that the individual is the best available candidate. This inevitably gives him a little bit of wiggle room with his wage. He should exercise this leverage and push for a little more—especially because starting wage is often the point of departure for all future raise negotiations.

3) Increase your value to other companies.
Your company does not pay you a lot of money to be nice to you. It pays you because it has no other choice. Although many things can force a company’s hand in this manner, competing offers are particularly important.

Imagine that your company values your work at $50 per hour. If you can only work for them, then due to the back-and-forth logic from above, we might imagine that your wage will land in the neighborhood of $25 per hour. However, suppose that a second company exists that is willing to pay you up to $40 per hour. Now how much will you make?

The answer is no less than $40 per hour. Why? Well, suppose not. If your current company is only paying you, say, $30 per hour, you could go to the other company and ask for a little bit more. They would be obliged to pay you that since they value you up to $40 per hour. But, of course, your original company values you up to $50 per hour. So they have incentive to ultimately outbid the other company and keep you under their roof.

(This same mechanism means that Park Place is worthless in McDonald’s monopoly.)
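A minimal sketch of that logic, using the hypothetical hourly valuations from above:

```python
# With competing bidders, your wage gets pushed up to at least the runner-up's valuation,
# and the firm that values you most keeps you by (at least) matching that bid.

def wage_floor(current_employer_value, best_outside_value):
    """The lower of the two valuations sets the floor under your hourly wage."""
    return min(current_employer_value, best_outside_value)

print(wage_floor(50, 40))  # 40: the $40/hour rival forces your $50/hour employer to pay at least $40
print(wage_floor(50, 48))  # 48: a stronger outside option drags the floor up further
```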

Game theorists call such alternatives “outside options”; the better your outside options are, the more attractive the offers your bargaining partner has to make to keep you around. Consequently, being attractive to other companies can get you a raise with your current company even if you have no serious intention to leave. Rather, you can diplomatically point out to your boss that a person with your particular skill set typically makes $X per year and that your wage should be commensurate with that amount. Your boss will see this as a thinly veiled threat that you might leave the company. Still, if the company values your work, she will have no choice but to bump you to that level. And if she doesn’t…well, you are valuable to other companies, so you can go make that amount of money elsewhere.

Conclusion
Bargaining can be a scary process. Unfortunately, this fear blinds us to some of the critical facets of the process. Negotiations are strategic; only thinking about your worries and concerns means you are ignoring your employer’s worries and concerns. Yet you can use those opposing worries and concerns to coerce a better deal for yourself. Employers do not hold all of the power. Once you realize this, you can take advantage of the opposing weakness at the bargaining table.

I talk about all of these issues at greater length in my book, Game Theory 101: Bargaining. I also cover a bunch of real-world applications of these and a whole bunch of other theories. If this stuff seems interesting to you, you should check it out!