Category Archives: Bargaining

Does Increasing the Costs of Conflict Decrease the Probability of War?

According to many popular theories of war, the answer is yes. In fact, this is the textbook relationship for standard stories about why states would do well to pursue increased trade ties, alliances, and nuclear weapons. (I am guilty here, too.)

It is easy to understand why this is the conventional wisdom. Consider the bargaining model of war. In the standard set-up, one side expects to receive p portion of the good in dispute, while the other receives 1-p. But because war is costly, both sides are willing to take less than their expected share to avoid conflict. This gives rise to the famous bargaining range:
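
In symbols (this is the standard formalization; the specific notation is mine): let the disputed good be worth 1, and let c_A and c_B denote the two sides' costs of fighting.

```latex
% A's expected war payoff: p - c_A.   B's expected war payoff: (1 - p) - c_B.
% A peaceful split giving A a share x beats war for both sides whenever
%   x \geq p - c_A   and   1 - x \geq 1 - p - c_B,
% which yields the bargaining range:
x \in [\, p - c_A, \; p + c_B \,]
```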

Notice that when you increase the costs of war for both sides, the bargaining range grows bigger: its width is c_A + c_B, so it expands with either side's costs.

Thus, in theory, the reason that increasing the costs of conflict decreases the probability of war is that it makes the set of mutually preferable alternatives larger. In turn, it should be easier to identify one such settlement. Even if no one is being strategic, if you randomly throw a dart at the line, additional costs make you more likely to hit the range.
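
Here is the dart-throwing point as a toy simulation (the parameter values are arbitrary):

```python
import random

def hit_rate(p, c_a, c_b, trials=100_000):
    """Throw random settlements on [0, 1] and count how often they
    land inside the bargaining range [p - c_a, p + c_b]."""
    hits = sum(p - c_a <= random.random() <= p + c_b for _ in range(trials))
    return hits / trials

# Doubling both sides' costs of war roughly doubles the chance that a
# random "dart" lands on a mutually acceptable settlement.
print(hit_rate(0.5, 0.05, 0.05))  # ~0.10
print(hit_rate(0.5, 0.10, 0.10))  # ~0.20
```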

Nevertheless, history often yields international crises that run counter to this logic, like the trade ties before World War I. Intuition based on some formalization is not the same as solving for equilibrium strategies and taking comparative statics. Further, while it is true that increasing the costs of conflict decreases the probability of war for most mechanisms, this is not a universal law.

Such is the topic of a new working paper by Iris Malone and me. In it, we show that when one state is uncertain about its opponent’s resolve, increasing the costs of war can also increase the probability of war.

The intuition comes from the risk-return tradeoff. If I do not know what your bottom line is, I can take one of two approaches to negotiations.

First, I can make a small offer that only an unresolved type will accept. This works great for me when you are an unresolved type because I capture a large share of the stakes. But it also backfires against a resolved type: they fight, leading to inefficient costs of war.

Second, I can make a large offer that all types will accept. The benefit here is that I assuredly avoid paying the costs of war. The downside is that I am essentially leaving money on the table for the unresolved type.

Many factors determine which is the superior option—the relative likelihoods of each type, my risk propensity, and my costs of war, for example. But one under-appreciated determinant is the relative difference between the resolved type’s reservation value (the minimum it is willing to accept) and the unresolved type’s.

Consider the left side of the above figure. Here, the difference between the reservation values of the resolved and unresolved types is fairly small. Thus, if I make the risky offer that only the unresolved type is willing to accept (the underlined x), I am stealing only slightly more than if I made the safe offer that both types are willing to accept (the bar x). Gambling is not particularly attractive in this case, since I am risking my own costs of war to take only a tiny additional amount of the pie.

Now consider the right side of the figure. Here, the difference in types is much greater. Thus, gambling looks comparatively more attractive this time around.

But note that increasing the military/opportunity costs of war has precisely this effect of increasing the gap in the types’ reservation values. This is because unresolved types, by definition, feel incremental increases to the military/opportunity costs of war more heavily than resolved types do. As a result, increasing the costs of conflict can increase the probability of war.
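
To see the widening gap in symbols (a toy parameterization of resolve, mine rather than the paper's):

```latex
% Let type t's reservation value be r_t = p - \lambda_t c, where c is the
% cost of war and \lambda_U > \lambda_R > 0: the unresolved type U weighs
% costs more heavily than the resolved type R. The gap between them is
r_R - r_U = (\lambda_U - \lambda_R)\, c,
% which is strictly increasing in c. Raising the costs of war widens the
% spread of reservation values, which is exactly what makes gambling tempting.
```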

What’s going on here? The core of the problem is that inflating costs simultaneously exacerbates the information problem that the proposer faces. When the types have identical reservation values, the proposer faces no uncertainty whatsoever; increasing costs widens the spread of reservation values the proposer must guess over. Thus, while increasing costs ought to have a pacifying effect, the countervailing increase in uncertainty can sometimes predominate.

The good news for proponents of economic interdependence theory and mutually assured destruction is that this is only a short-term effect. In the long term, the probability of war eventually goes down. This is because sufficiently high costs of war make each type willing to accept an offer of 0, at which point the proposer will offer an amount that both types assuredly accept.

The above figure illustrates this non-monotonic effect, with the x-axis representing the influence of the new costs of war relative to the old. Note that this has important implications for both economic interdependence and nuclear weapons research. Just because two countries are trading with each other at record levels (say, on the eve of World War I) does not mean that the probability of war will go down. In fact, the range of parameters for which war occurs with positive probability may grow if the new costs are sufficiently small compared to the already existing costs.
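
To make the non-monotonicity concrete, here is a minimal numerical sketch. The functional forms and parameter values are my own illustration of the mechanism, not the model from the paper:

```python
def war_probability(k, p=0.5, q=0.5,
                    c_r=0.05, c_u=0.10,   # old costs for each receiver type
                    d_r=0.10, d_u=0.40,   # how hard the new costs hit each type
                    c_p=0.05, d_p=0.10):  # the proposer's old and new costs
    """Probability of war when k scales the 'new' costs of conflict and the
    unresolved type (U) feels cost increases more heavily than the resolved
    type (R). Toy parameterization for illustration only."""
    r_resolved = max(0.0, p - (c_r + k * d_r))    # minimum acceptable offers
    r_unresolved = max(0.0, p - (c_u + k * d_u))
    safe = 1 - r_resolved                          # both types accept
    risky = q * (1 - r_unresolved) + (1 - q) * (1 - p - (c_p + k * d_p))
    return (1 - q) if risky > safe else 0.0        # war iff gambling pays

for k in [0.0, 0.25, 0.75, 1.0, 1.5, 3.0]:
    print(f"k = {k:4.2f}   Pr(war) = {war_probability(k):.2f}")
# Pr(war): 0.00, 0.00, 0.50, 0.50, 0.00, 0.00 -- peace, then war, then peace.
```

Moderate cost increases widen the gap between the types enough to make screening attractive; eventually both reservation values hit zero and the safe offer wins again.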

Meanwhile, the figure also shows that nuclear weapons might not have a pacifying effect in the short run. While the potential damage of 1,000 nuclear weapons may push the effect into the guaranteed-peace region on the right, a handful of nuclear weapons might expand the circumstances under which war occurs. This is particularly concerning when thinking about a country like North Korea, which currently has only a handful of nuclear weapons.

As a further caveat, the increased costs only cause more war when the ratio of the receiver’s new costs to the proposer’s new costs is sufficiently large relative to the same ratio for the old costs. This is because if the proposer faces massively increased costs of its own, it is less likely to pursue the risky option, even when there is a larger difference between the two types’ reservation values.

Fortunately, this caveat gives a nice comparative static to work with. In the paper, we investigate relations between India and China from 1949 up through the start of the 1962 Sino-Indian War. Interestingly, we show that military tensions boiled over just as trade technologies were increasing the two sides’ costs of fighting; cooler heads prevailed from the 1980s onward, as potential trade grew to unprecedented levels. Uncertainty over resolve played a big role here, with Indian leadership (falsely) believing that China would back down rather than risk disrupting the trade relationship. We further identify that the critical ratio discussed above held: the lost trade impacted the two countries evenly, while the status quo costs of war were much smaller for China due to its massive (10:1 in personnel alone!) military advantage.

Again, you can view the paper here. Please send me an email if you have some comments!

Abstract. International relations bargaining theory predicts that increasing the costs of war makes conflict less likely, but some crises emerge after the potential costs of conflict have increased. Why? We show that a non-monotonic relationship exists between the costs of conflict and the probability of war when there is uncertainty about resolve. Under these conditions, increasing the costs of an uninformed party’s opponent has a second-order effect of exacerbating informational asymmetries. We derive precise conditions under which fighting can occur more frequently and empirically showcase the model’s implications through a case study of Sino-Indian relations from 1949 to 2007. As the model predicts, we show that the 1962 Sino-Indian war occurred after a major trade agreement went into effect because uncertainty over Chinese resolve led India to issue aggressive screening offers over a border dispute and gamble on the risk of conflict.

Why Appoint Someone More Extreme than You?

From Appointing Extremists, by Michael Bailey and Matthew Spitzer:

Given their long tenure and broad powers, Supreme Court Justices are among the most powerful actors in American politics. The nomination process is hard to predict and nominee characteristics are often chalked up to idiosyncratic features of each appointment. In this paper, we present a nomination and confirmation game that highlights…important features of the nomination process that have received little emphasis in the formal literature . . . . [U]ncertainty about justice preferences can lead a President to prefer a nominee with preferences more extreme than his preferences.

Wait, what? WHAT!? That cannot possibly be right. Someone with your ideal point can always mimic what you would want them to do. An extremist, on the other hand, might try to impose a policy further away from your optimal outcome.

But Bailey and Spitzer will have you convinced within a few pages. I will try to get the logic down to two pictures, inspired by the figures from their paper. Imagine the Supreme Court consists of just three justices. One has retired, leaving two justices with ideal points J_1 and J_2. You are the president, and you have ideal point P with standard single-peaked preferences. You can pick a nominee with any expected ideological positioning. Call that position N. Due to uncertainty, though, the actual realization of that justice’s ideal point is distributed uniformly on the interval [N – u, N + u]. Also, let’s pretend that the Senate doesn’t exist, because a potential veto is completely irrelevant to the point.

Here are two options. First, you could nominate someone whose expected position sits exactly on top of your own ideal point (N = P).

Or you could nominate someone further to the right in expectation (call this position N’).

The first one is always better, right? After all, the nominee will be a lot closer to you on average.

Not so fast. Think about the logic of the median voter. If you nominate the more extreme justice (N’), you guarantee that J_2 will be the median voter on all future cases. If you nominate the justice you expect to match your ideological position, you will often get J_2 as the median voter. But sometimes your nominee will actually fall to the left of J_2. And when that’s the case, your nominee becomes the median voter at a position less attractive than J_2. Thus, to hedge against this circumstance, you should nominate a justice who is more extreme (on average) than you are. Very nice!
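
Here is the two-picture logic as a quick simulation (the ideal points and the uncertainty radius are numbers I made up for illustration, not Bailey and Spitzer's):

```python
import random

def expected_loss(n, president=0.5, j1=-1.0, j2=0.4, u=0.3, trials=200_000):
    """President's expected distance from the median justice when the
    nominee's realized ideal point is uniform on [n - u, n + u]."""
    total = 0.0
    for _ in range(trials):
        nominee = random.uniform(n - u, n + u)
        median = sorted([j1, j2, nominee])[1]  # median of the three justices
        total += abs(median - president)
    return total / trials

print(expected_loss(0.5))  # nominate at your own ideal point: ~0.133
print(expected_loss(0.7))  # nominate someone more extreme:    ~0.100
```

The more extreme nominee guarantees that J_2 stays the median; the nominee centered on the president's own position sometimes realizes to the left of J_2 and drags the median away.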

Obviously, this was a simple example. Nevertheless, the incentive to nominate someone more extreme still influences the president under a wide variety of circumstances, whether he has a Senate to contend with or he has to worry about future nominations. Bailey and Spitzer cover a lot of these concerns toward the end of their manuscript.

I like this paper a lot. Part of why it appeals to me is that they relax the assumption that ideal points are common knowledge. This is certainly a useful assumption to make for a lot of models. For whatever reason, though, both the American politics and IR literatures have almost made this certainty axiomatic. Some of my recent work—on judicial nominees with Maya Sen and crisis bargaining (parts one and two) with Peter Bils—has relaxed this and found interesting results. Adding Bailey and Spitzer to the mix, it appears that there might be a lot of room to grow here.

Understanding the Iran Deal: A Model of Nuclear Reversal

Most of the discussion surrounding the Joint Comprehensive Plan of Action (JCPOA, or the “Iran Deal”) has focused on mechanisms that monitor Iranian compliance. How can we be sure Iran is using this facility for scientific research? When can weapons inspectors show up? Who gets to take the soil samples? These kinds of questions seem to be the focus.

Fewer people have noted Iran’s nuclear divestment built into the deal. Yet Iran is doing a lot here. To wit, here are some of the features of the JCPOA:

  • At the Arak facility, the reactor under construction will be filled with concrete, and the redesigned reactor will not be suitable for weapons-grade plutonium. Excess heavy water supplies will be shipped out of the country. Existing centrifuges will be removed and stored under round-the-clock IAEA supervision at Natanz.
  • The Fordow Fuel Enrichment Plant will be converted to a nuclear, physics, and technology center. Many of its centrifuges will be removed and sent to Natanz under IAEA supervision. Existing cascades will be modified to produce stable isotopes instead of uranium hexafluoride. The associated pipework for the enrichment will also be sent to Natanz.
  • All enriched uranium hexafluoride in excess of 300 kilograms will be downblended to 3.67% or sold on the international market.

Though such features are fairly common in arms agreements, they are nevertheless puzzling. None of this makes proliferation impossible, so the terms cannot be for that purpose. But they clearly make proliferating more expensive, which seems like a bad move for Iran if it truly wants to build a weapon. On the other hand, if Iran only wants to use the proliferation threat to coerce concessions out of the United States, this still seems like a bad move. After all, in bargaining, the deals you receive are commensurate with your outside options; make your outside options worse, and the amount of stuff you get goes down as well.

The JCPOA, perhaps the most poorly formatted treaty ever.

What gives? In a new working paper, I argue that undergoing such a reversal works to the benefit of potential proliferators. Indeed, potential proliferators can extract the entire surplus by divesting in this manner.

In short, the logic is as follows. Opponents (like the United States versus Iran) can deal with the proliferation problem in one of two ways. First, they can give “carrots” by striking a deal with the nuclearizing state. These types of deals provide enough benefits to potential proliferators that building weapons is no longer profitable. Consequently, and perhaps surprisingly, they are credible even in the absence of effective monitoring institutions.

Second, opponents can leverage the “stick” in the form of preventive war. The monitoring problem makes this difficult, though. Sometimes following through on the preventive war threat shuts down a real project. Sometimes preventive war is just a bluff. Sometimes opponents end up fighting a target that was not even investing in proliferation. Sometimes the potential proliferator successfully and secretly obtains a nuclear weapon. No matter what, though, this is a mess of inefficiency, both from the cost of war and the cost of proliferation.

Naturally, the opponent chooses the option that is cheaper for it. So if the cost of preventive war is sufficiently low, it goes in that direction. In contrast, if the price of concessions is relatively lower, carrots are preferable.

Note that one determinant of the opponent’s choice is the cost of proliferating. When building weapons is cheap, the concessions necessary to convince the potential proliferator not to build are very high. But if proliferation is very expensive, then making the deal looks very attractive to the opponent.

This is where nuclear reversals like those built into the JCPOA come into play. Think about the exact proliferation cost that flips the opponent’s preference from sticks to carrots. Below that line, the inefficiency weighs down everyone’s payoff. Right above that line, efficiency reigns supreme. But the opponent is right at indifference at this point. Thus, the entire surplus shifts to the potential proliferator!
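
One way to write the indifference point down (the notation is mine, invented for illustration, not the paper's):

```latex
% Let v be the proliferator's value of a completed bomb, k the cost of
% building it, and W_B the opponent's expected payoff under the
% preventive-war regime. Concessions deter building only if they are worth
% at least x(k) = v - k. With a pie of size 1, the opponent prefers
% carrots to sticks when
1 - (v - k) \;\geq\; W_B
\quad\Longleftrightarrow\quad
k \;\geq\; k^* \equiv v - (1 - W_B),
% so just above k^*, the proliferator pockets concessions worth roughly
% v - k^* while everyone skips the deadweight losses of the war regime.
```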

The following payoff graph drives home this point. A is the potential proliferator; B is the opponent; k* is the exact value that flips the opponent from the stick strategy to the carrot strategy:

Making proliferation more difficult can work in your favor.

If you are below k*, the opponent opts for the preventive war threat, weighing down everyone’s payoff. But jump above k*, and suddenly the opponent wants to make a deal. Note that everyone’s payoff is greater under these circumstances because there is no deadweight loss built into the system.

Thus, imagine that you are a potential proliferator living in a world below k*. If you do nothing, your opponent is going to credibly threaten preventive war against you. However, if you increase the cost of proliferating—say, by agreeing to measures like those in the JCPOA—suddenly you make out like a bandit. As such, you obviously divest your program.

What does this say about Iran? Well, it indicates that a lot of the policy discussion is misplaced, for a few reasons:

  1. These sorts of agreements work even in the absence of effective monitoring institutions. So while monitoring might be nice, it is definitely not necessary to avoid a nuclear Iran. (The paper clarifies exactly why this works, which could be the subject of its own blog post.)
  2. Iranian refusal to agree to further restrictions is not proof positive of some secret plan to proliferate. Looking back at the graph, note that while some reversal works to Iran’s benefit, anything past k* decreases its payoff. As such, by standing firm, Iran may be playing a delicate balancing game to get exactly to k* and no further.
  3. These deals primarily benefit potential proliferators. This might come as a surprise. After all, potential proliferators do not have nuclear weapons at the start of the interaction, have to pay costs to acquire those weapons, and can have their efforts erased if the opponent decides to initiate a preventive war. Yet the potential proliferators can extract all of the surplus from a deal if they are careful.
  4. In light of (3), it is not surprising that a majority of Americans believe that Iran got the better end of the deal. But that’s not inherently because Washington bungled the negotiations. Rather, despite all the military power the United States has, these types of interactions inherently deal us a losing hand.

The paper works through the logic of the above argument and discusses the empirical implications in greater depth. Please take a look at it; I’d love to hear your comments.

Bargaining Power and the Iran Deal

Today’s post is not an attempt to give a full analysis of the Iran deal.[1] Rather, I just want to make a quick point about how the structure of negotiations greatly favors the Obama administration.

Recall the equilibrium of an ultimatum game. When two parties are trying to divide a bargaining pie and one side makes a take-it-or-leave-it offer, that proposer receives the entire benefit from bargaining. In fact, even if negotiations can continue past a single offer, as long as a single person controls all of the offers, the receiver still receives none of the surplus.

This result makes a lot of people feel uncomfortable. After all, the outcomes are far from fair. Fortunately, in real life, people are rarely constrained in this way. If I don’t like the offer you propose, I can always make a counteroffer. And if you don’t like that, nothing stops you from making a counter-counteroffer. That type of negotiation is called Rubinstein bargaining, and it ends with a roughly even split of the pie when both sides are patient.
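
For reference, here are the textbook payoffs for both protocols (pie normalized to 1, common discount factor δ):

```latex
% Ultimatum game: the proposer offers the receiver 0 (or the smallest
% positive unit), the receiver accepts, and the proposer keeps everything.
% Rubinstein alternating offers: agreement is immediate, with shares
x_{\text{proposer}} = \frac{1}{1 + \delta}, \qquad
x_{\text{receiver}} = \frac{\delta}{1 + \delta},
% which approach an even 1/2, 1/2 split as the players grow patient
% (\delta \to 1).
```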

In my book on bargaining, though, I point out that there are some prominent exceptions where negotiations take the form of an ultimatum game. For example, when returning a security deposit, your former landlord can write you a check and leave it at that. You could try suggesting a counteroffer, but the landlord doesn’t have to pay attention: you already have the check, and you need to decide whether cashing it beats going to court. This helps explain why renters often dread moving out.

Unfortunately for members of Congress, “negotiations” between the Obama administration and Congress are more like security deposits than haggling over the price of strawberries at a farmer’s market. If Congress rejects the deal (which would require overriding a presidential veto), they can’t go to Iran and negotiate a new deal for themselves. The Obama administration controls dealings with Iran, giving it all of the proposal power. Bargaining theory would therefore predict that the Obama administration will be very satisfied[2], while Congress will find the deal about as attractive as if there were no deal at all.

And that’s basically what we are seeing right now. Congress is up in arms over the deal (hehe). They are going to make a big show about what they claim is an awful agreement, but they don’t have any say over the terms beyond an up/down vote. That, combined with the fact that Obama only needs 34 senators to get this to work, means that the Obama administration is going to receive a very favorable deal for itself.

[1] Here is my take on why such deals work. The paper is a bit dated, but it gets the point across.

[2] I mean that the Obama administration will be very satisfied by the deal insofar as it relates to its disagreement with Congress. It might not be so satisfied by the deal insofar as it relates to its disagreement with Iran.

The Game Theory of the Cardinals/Astros Spying Affair

The NY Times reported today that the St. Louis Cardinals hacked the Houston Astros’ internal files, including information on the trade market. I suspect that everyone has a basic understanding of why the Cardinals would find this information useful. “Knowledge is power,” as they say. Heck, the United States spends $52.6 billion each year on spying. But how game theorists quantify this intuition is both interesting and under-appreciated. That is the topic of this post.

Why Trade?
Trades are very popular in baseball, and the market will essentially take over sports headlines as we approach the July 31 trading deadline. Teams like to trade for the same reason countries like to trade with each other. Entity A has a lot of object X but lacks Y, while Entity B has a lot of object Y but lacks X. So teams swap a shortstop for an outfielder, and bad teams exchange their best players for good teams’ prospects. Everyone wins.

However, the extent to which one side wins also matters. If the Angels trade a second baseman to the Dodgers for a pitcher, they are happier than if they have to trade that same second baseman for that same pitcher and pay an additional $1 million to the Dodgers. Figuring out what to offer is straightforward when each side knows exactly how much the other values all the components. In fact, bargaining theory indicates that teams should reach such deals rapidly. Unfortunately, life is not so simple.

The Risk-Return Tradeoff
What does a team do when it isn’t sure of the other side’s bottom line? It faces what game theorists call a risk-return tradeoff. Suppose the Angels know that the Dodgers are not willing to trade the second baseman for the pitcher straight up. Instead, the Angels know that the Dodgers need either $1 million or $5 million to sweeten the deal. While the Angels would be willing to make the trade at either price, they are not sure exactly what the Dodgers require.

For simplicity, suppose the Angels can only make a single take-it-or-leave-it offer. They have two choices. First, they can offer the additional $5 million. This is safe and guarantees the trade. However, if the Dodgers were actually willing to accept only $1 million, the Angels unnecessarily waste $4 million.

Alternatively, the Angels could gamble that the Dodgers will take the smaller $1 million amount. If this works, the Angels receive a steal of a deal. If the Dodgers actually needed $5 million, however, the Angels burned an opportunity to complete a profitable trade.

To generalize, the risk-return tradeoff says the following: the more one offers, the more likely the other side is to accept the deal. Yet, simultaneously, the more one offers, the worse that deal becomes for a proposer. Thus, the more you risk, the greater return you receive when the gamble works, but the gamble also fails more often.

 

Knowledge Is Power
The risk-return tradeoff allows us to precisely quantify the cost of uncertainty. In the above example, offering the safe amount wastes $4 million times the probability that the Dodgers were willing to accept only $1 million. Meanwhile, making an aggressive offer wastes the value the Angels place on the trade times the probability that the Dodgers needed $5 million to accept; the trade fails to occur under those circumstances. Consequently, the Angels are damned-if-they-do and damned-if-they-don’t. The risk-return tradeoff forces them to figure out how to minimize their losses.
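
In code, the comparison looks like this (q is the probability that the Dodgers are the cheap type; the $6 million value of the trade to the Angels is an assumption for illustration):

```python
def best_offer(q, trade_value=6.0):
    """Compare expected waste (in $ millions) from each offer, given the
    probability q that the Dodgers would accept the smaller $1M sweetener."""
    waste_safe = q * 4.0                 # overpaid by $4M if $1M would do
    waste_risky = (1 - q) * trade_value  # the whole trade falls through
    return "risky ($1M)" if waste_risky < waste_safe else "safe ($5M)"

for q in (0.2, 0.4, 0.6, 0.8):
    print(q, best_offer(q))
# With the trade worth $6M to the Angels, gambling only pays once the
# Dodgers are sufficiently likely (q > 0.6) to be the cheap type.
```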

At this point, it should be clear why the Cardinals would value the Astros’ secret information. The more information the Cardinals have about other teams’ minimal demands, the better they will fare in trade negotiations. The Astros’ database provided such information. Some of it was about what the Astros were looking for. Some of it was about what the Astros thought others were looking for. Either way, extra information for the Cardinals organization would decrease the likelihood of miscalculating in trade negotiations. And apparently such knowledge is so valuable that it was worth the risk of getting caught.

Game Theory and Bargaining on The Good Wife

Last week’s episode of The Good Wife (“Trust Issues”) was interesting for two reasons: it used a “ripped from the headlines” legal case that I discuss in my book on bargaining, and the legal argument it deploys is essentially a trivial application of pre-play cheap talk in a repeated prisoner’s dilemma.

The $9 Billion Google/Apple Anti-Trust Lawsuit
First, the background of the real life version of the case. In the early 2000s, Google and Apple (along with Adobe and Intel) allegedly had a “no poaching” gentleman’s agreement. That is, each company in the group pledged to not attempt to hire employees at any of the other companies. The employees eventually figured out what was going on, filed a $9 billion lawsuit, and settled in April 2014 for an undisclosed amount.

Why is the practice illegal? It goes without saying that quashing competition among firms hurts the employees’ bargaining power, and the law is there to protect those employees. But what is not so clear is just how attractive a no poaching agreement is to the firms. In fact, when companies play by the rules, just about all of the potential for profit goes into the employees’ hands.

To see why, imagine that Google and Apple both wanted to hire Karen. Karen has impressive computer programming skills. And because Google and Apple value computing skills at a roughly equal rate, suppose that the most Google would be willing to pay her is $200,000 while Apple’s maximum is $195,000. Put differently, $200,000 and $195,000 represent the break even points for the respective companies. Put differently again, Karen will bring in $200,000 in profits to Google and $195,000 to Apple, so hiring her for any more than that will result in a net loss.

How will that profit ultimately be divided between Karen and her employer? You might think that Google should be the one hiring her. And you are right—she is worth $5000 more to Google than Apple. You might also think that Google will profit handsomely from her employment. However, as I discuss at length in the book, the logic of bargaining shows this to be untrue. If Google offers Karen any less than $195,000, she can always secure a job from Apple; this is because Apple values her at that amount, and so Apple would be willing to slightly outbid Google to hire her. Thus, the outbidding process ultimately ensures that Karen receives at least $195,000. She is the real winner. Although Google might still profit from her employment, its net gain will not exceed $5000 ($200,000 – $195,000).
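
A crude simulation of that outbidding process (the bid increment is arbitrary; the valuations are the ones from the example):

```python
def bidding_war(v_google=200_000, v_apple=195_000, step=1_000):
    """Each firm keeps topping the other's offer to Karen as long as the
    next bid stays at or below its own valuation of her."""
    wage, high_bidder = 0, None
    while True:
        for firm, value in (("Google", v_google), ("Apple", v_apple)):
            if firm != high_bidder and wage + step <= value:
                wage, high_bidder = wage + step, firm
                break
        else:  # neither firm can profitably raise, so the auction ends
            return high_bidder, wage

print(bidding_war())  # ('Google', 195000): Karen captures nearly all the surplus
```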

Negotiating Collusion
So the firms have great incentive to collude, drive down wages, and secure more of the profits for themselves. What does that sort of collusion look like?

Well, we might think of it as a repeated prisoner’s dilemma. In this type of interaction, in any given year, each of us would maximize profits by trying to poach the rival firm’s employees regardless of what the other firm chooses to do. (If you don’t poach, then I make out like a bandit. If you do poach, I’m still better off poaching and not losing all of my employees.) However, because each of us is poaching and driving up employee wages, both of us are ultimately worse off than if we could enforce an agreement that required us to cooperate with each other and not poach.

Of course, anti-trust laws prevent us from explicitly contracting such an agreement in a legally enforceable manner. However, an informal and internally enforceable agreement is possible. Suppose we both start off by cooperating with each other by not poaching. Then, in each subsequent year, if both of us have consistently cooperated before, we continue cooperating. Otherwise, we revert to poaching.

Would anyone like to break the agreement? No. Although I could gain a temporary advantage by poaching your employees today, the higher wages I would pay over the long term under mutual poaching vastly outstrip that short-term benefit.
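
For completeness, here is the standard sustainability condition (generic prisoner's dilemma payoff letters, not actual dollar figures):

```latex
% Per-period payoffs: R (neither firm poaches), T (I poach while you do
% not), P (mutual poaching), with T > R > P. Under the trigger strategy
% above, cooperating forever beats deviating once and being punished
% forever whenever
\frac{R}{1 - \delta} \;\geq\; T + \frac{\delta P}{1 - \delta}
\quad\Longleftrightarrow\quad
\delta \;\geq\; \frac{T - R}{T - P},
% so sufficiently patient firms can sustain the no-poaching pact.
```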

This is exactly the type of agreement Google and Apple struck. In fact, when a Google recruiter attempted to hire some Apple employees, Steve Jobs shot the following email to Google bigwigs: “If you hire a single one of these people, that means war.”

Alicia Florrick’s Defense
The episode of The Good Wife featured fictionalized versions of Google and Apple involved in the same affair. As in reality, the employees caught on and sued.

The plaintiff’s lawyers thought they had the case in the bag. Indeed, they had turned one of the owners of a trust company against the defense. He went on record that the defense had negotiated the terms of the no poaching policy explicitly and was very happy to agree to the deal.

Alicia Florrick (the defense attorney and titular Good Wife) had a great defense: any discussion of such an agreement is not an unambiguous signal of plans to break the law. These repeated prisoner’s dilemmas have an interesting property in that regardless of whether you plan to cooperate with the other company or screw them over at the first possible moment, you always want to convince the other side that you will cooperate. If you plan to cooperate, then you want to tell the other side to cooperate as well so you can sustain that cooperation in the long term. If you want to follow the law and poach freely instead, you still want to convince the other side that you are going to cooperate so that they cooperate as well, allowing you to screw them over in the process.

So Florrick points out that this type of pre-play communication is meaningless. Regardless of the ultimate intent, the defendant would say the exact same thing. The testimony therefore proves nothing. The plaintiff promptly settled.

All told, I really appreciate two things about the episode: its sophisticated understanding of a potentially very complicated strategic situation, and how punny the “Trust Issues” title is.

Park Place Is Still Worthless: The Game Theory of McDonald’s Monopoly

McDonald’s Monopoly begins again today. With that in mind, I thought I would update my explanation of the game theory behind the value of each piece, especially since my new book on bargaining connects the same mechanism to the De Beers diamond monopoly, star free agent athletes, and a shady business deal between Google and Apple. Here’s the post, mostly in its original form:

__________________________________

McDonald’s Monopoly is back. As always, if you collect Park Place and Boardwalk, you win a million dollars. I just got a Park Place. That’s worth about $500,000, right?

Actually, it is worth nothing. Not close to nothing, but absolutely, positively nothing.

It helps to know how McDonald’s structures the game. Despite the apparent value of Park Place, McDonald’s floods the market with Park Place pieces, probably to trick naive players into thinking they are close to riches. I do not have an exact number, but I would imagine there are easily tens of thousands of Park Places floating around. However, they make only one or two Boardwalks available. (Again, I do not know the exact number, but it is equal to the number of million dollar prizes McDonald’s wants to give out.)

Even with that disparity, you might think Park Place maintains some value. Yet it is easy to show that this intuition is wrong. Imagine you have a Boardwalk piece and you corral two Park Place holders into a room. (The argument works with thousands of them as well, but two suffice.) You tell them that you are looking to buy a Park Place piece. Each of them must write their sell price on a piece of paper. You will complete the transaction at the lowest price. For example, if one person wrote $500,000 and the other wrote $400,000, you would buy it from the second at $400,000.

Assume that sell prices are continuous and weakly positive, and that ties are broken by coin flip. How much should you expect to pay?

The answer is $0.

The proof is extremely simple. It is clear that both bidding $0 is a Nash equilibrium. (Check out my textbook or watch my YouTube videos if you do not know what a Nash equilibrium is.) If either Park Place owner deviates to a positive amount, that deviator would lose, since the other guy is bidding 0. So neither player can profitably deviate. Thus, both bidding 0 is a Nash equilibrium.

What if one bid $x ≥ 0 and the other bid $y > x? Then the person bidding x could profitably deviate to any amount strictly between x and y. He still sells the piece, but at a higher price. Thus, this is a profitable deviation, and bids x and y are not an equilibrium.

The final case is when both players bid the same amount z > 0. With a coin-flip tiebreaker, each earns z/2 in expectation; more generally, one player must win no more than half the time, so that player’s expected payoff is at most z/2. He can profitably deviate to 3z/4 and win outright, since 3z/4 exceeds z/2.

This exhausts all possibilities. So both bidding $0 is the unique Nash equilibrium. Despite requiring another piece, your Boardwalk is worth a full million dollars.
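
If you want to double-check the argument, a brute-force search over a discrete grid of bids finds the same answer (this numerical check is mine, not part of the original post):

```python
from itertools import product

def payoffs(b1, b2):
    """Reverse auction: the lowest sell price wins; ties are broken by
    coin flip, so each bidder earns half the bid in expectation."""
    if b1 < b2:
        return b1, 0.0
    if b2 < b1:
        return 0.0, b2
    return b1 / 2, b2 / 2

bids = [round(i / 10, 1) for i in range(11)]  # sell prices 0.0, 0.1, ..., 1.0
equilibria = [
    (b1, b2)
    for b1, b2 in product(bids, repeat=2)
    if all(payoffs(d, b2)[0] <= payoffs(b1, b2)[0] for d in bids)
    and all(payoffs(b1, d)[1] <= payoffs(b1, b2)[1] for d in bids)
]
print(equilibria)  # [(0.0, 0.0), (0.1, 0.1), (0.2, 0.2)]
```

The weakly positive ties survive only because bids are discrete here, which is exactly the point of Note 1 below; with continuous bids, both offering $0 is the unique equilibrium.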

What is going wrong for the Park Place holders? Supply simply outstrips demand. Any person with a Park Place but no Boardwalk walks away with nothing, which ultimately drives the price of Park Place down to nothing as well.

Moral of the story: Don’t get excited if you get a Park Place piece.

Note 1: If money is discrete down to the cent, then the winning bid could be $0 or $0.01. (With the right tie breaker, it could also be $0.02.) Either way, this is not good for owners of Park Place.

Note 2: In practice, we might see Park Place sell for some marginally higher value. That is because it is (slightly) costly for a Boardwalk owner to seek out and solicit bids from more Park Place holders. However, Park Place itself is not creating any value here—it’s purely the transaction cost.

Note 3: An enterprising Park Place owner could purchase all other Park Place pieces and destroy them. This would force the Boardwalk controller to split the million dollars. While that is feasible when there are only two individuals, as in the example, good luck buying up every Park Place in reality. (Transaction costs strike again!)

__________________________________

Now time for an update. What might not have been clear in the original post is that McDonald’s Monopoly is a simple illustration of a matching problem. Whenever you have a situation with n individuals who need one of m partners, all of the economic benefits go to the partners if m < n. The logic is the same as above. If an individual does not obtain a partner, he receives no profit. This makes him desperate to partner with someone, even if it means drastically dropping his share of the money to be made. But then the underbidding process begins until the m partners are taking all of the revenues for themselves.

In the book, I have a more practical example involving star free agent athletes. For example, there is only one LeBron James. Every team would like to sign him to improve its chances of winning. Yet this ultimately drives the final contract price so high that the team doesn’t actually benefit much (or at all) from signing James.

Well, that’s how it would work if professional sports organizations were not scheming to stop this. The NBA in particular has a maximum salary. So even if LeBron James is worth $50 million per season, he won’t be paid that much. (The exact amount a player can earn is complicated.) This ensures that the team that signs him will benefit from the transaction but takes money away from James.

Non-sports businesses scheme in similar ways. More than 100 years ago, the De Beers diamond company realized that new mine discoveries meant diamond supply would soon outstrip demand. This would kill diamond prices. So De Beers began purchasing tons of mines to intentionally limit production and keep prices high. Similarly, Apple and Google once had an informal agreement not to poach each other’s employees. Without the outside bidder, a superstar computer engineer could not increase his wage to its fair market value. Of course, this is highly illegal. Employees filed a $9 billion anti-trust lawsuit when they learned of it. The parties eventually settled outside of court for an undisclosed amount.

To sum up, matching is good for those in demand and bad for those in high supply. With that in mind, good luck finding that Boardwalk!