Tag Archives: bargaining

Why Appoint Someone More Extreme than You?

From Appointing Extremists, by Michael Bailey and Matthew Spitzer:

Given their long tenure and broad powers, Supreme Court Justices are among the most powerful actors in American politics. The nomination process is hard to predict and nominee characteristics are often chalked up to idiosyncratic features of each appointment. In this paper, we present a nomination and confirmation game that highlights…important features of the nomination process that have received little emphasis in the formal literature . . . . [U]ncertainty about justice preferences can lead a President to prefer a nominee with preferences more extreme than his preferences.

Wait, what? WHAT!? That cannot possibly be right. Someone with your ideal point can always mimic what you would want them to do. An extremist, on the other hand, might try to impose a policy further away from your optimal outcome.

But Bailey and Spitzer will have you convinced within a few pages. I will try to get the logic down to two pictures, inspired by the figures from their paper. Imagine the Supreme Court consists of just three justices. One has retired, leaving two justices with ideal points J_1 and J_2. You are the president, and you have ideal point P with standard single-peaked preferences. You can pick a nominee with any expected ideological positioning. Call that position N. Due to uncertainty, though, the actual realization of that justice’s ideal point is distributed uniformly on the interval [N – u, N + u]. Also, let’s pretend that the Senate doesn’t exist, because a potential veto is completely irrelevant to the point.

Here are two options. First, you could nominate someone whose expected position sits on your ideal point:

[Figure: nominee at N, matching the president’s expected position]

Or you could nominate someone further to the right in expectation:

[Figure: nominee at N′, to the right of the president]

The first one is always better, right? After all, the nominee will be a lot closer to you on average.

Not so fast. Think about the logic of the median voter. If you nominate the more extreme justice (N’), you guarantee that J_2 will be the median voter on all future cases. If you nominate the justice you expect to match your ideological position, you will often get J_2 as the median voter. But sometimes your nominee will actually fall to the left of J_2. And when that’s the case, your nominee becomes the median voter at a position less attractive than J_2. Thus, to hedge against this circumstance, you should nominate a justice who is more extreme (on average) than you are. Very nice!
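The logic is easy to check with a quick Monte Carlo sketch. The numbers below are made up for illustration (they are not from Bailey and Spitzer): J_1 and J_2 are the sitting justices, P is the president, and the nominee's realized ideal point is uniform around the chosen position N:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ideal points for illustration (not the paper's numbers):
J1, J2, P = -1.0, 0.3, 0.5   # two sitting justices to the president's left
u = 0.4                      # half-width of uncertainty about the nominee

def expected_loss(N, draws=500_000):
    """President's expected distance from the median justice when the
    nominee's realized ideal point is uniform on [N - u, N + u]."""
    x = rng.uniform(N - u, N + u, draws)
    # With J1 < J2, the median of {J1, J2, x} is just x clipped to [J1, J2].
    median = np.clip(x, J1, J2)
    return np.abs(median - P).mean()

safe = expected_loss(P)        # nominee expected to match the president
extreme = expected_loss(0.8)   # extreme enough that J2 is always the median
print(safe, extreme)
```

Nominating at 0.8 locks in J_2 (at 0.3) as the median for a loss of exactly 0.2, while nominating at P = 0.5 sometimes produces a median to the left of J_2 and a larger expected loss (about 0.225 here).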

Obviously, this was a simple example. Nevertheless, the incentive to nominate someone more extreme still influences the president under a wide variety of circumstances, whether he has a Senate to contend with or he has to worry about future nominations. Bailey and Spitzer cover a lot of these concerns toward the end of their manuscript.

I like this paper a lot. Part of why it appeals to me is that they relax the assumption that ideal points are common knowledge. This is certainly a useful assumption to make for a lot of models. For whatever reason, though, both the American politics and IR literatures have treated it as all but axiomatic. Some of my recent work—on judicial nominees with Maya Sen and crisis bargaining (parts one and two) with Peter Bils—relaxes this assumption and finds interesting results. Adding Bailey and Spitzer to the mix, it appears that there might be a lot of room to grow here.

Understanding the Iran Deal: A Model of Nuclear Reversal

Most of the discussion surrounding the Joint Comprehensive Plan of Action (JCPOA, or the “Iran Deal”) has focused on mechanisms that monitor Iranian compliance. How can we be sure Iran is using this facility for scientific research? When can weapons inspectors show up? Who gets to take the soil samples? These kinds of questions seem to be the focus.

Fewer people have noted Iran’s nuclear divestment built into the deal. Yet Iran is doing a lot here. To wit, here are some of the features of the JCPOA:

  • At the Arak facility, the reactor under construction will be filled with concrete, and the redesigned reactor will not be suitable for weapons-grade plutonium. Excess heavy water supplies will be shipped out of the country. Existing centrifuges will be removed and stored under round-the-clock IAEA supervision at Natanz.
  • The Fordow Fuel Enrichment Plant will be converted to a nuclear, physics, and technology center. Many of its centrifuges will be removed and sent to Natanz under IAEA supervision. Existing cascades will be modified to produce stable isotopes instead of uranium hexafluoride. The associated pipework for the enrichment will also be sent to Natanz.
  • All enriched uranium hexafluoride in excess of 300 kilograms will be downblended to 3.67% or sold on the international market.

Though such features are fairly common in arms agreements, they are nevertheless puzzling. None of this makes proliferation impossible, so the terms cannot be for that purpose. But they clearly make proliferating more expensive, which seems like a bad move for Iran if it truly wants to build a weapon. On the other hand, if Iran only wants to use the proliferation threat to coerce concessions out of the United States, this still seems like a bad move. After all, in bargaining, the deals you receive are commensurate with your outside options; make your outside options worse, and the amount of stuff you get goes down as well.

The JCPOA, perhaps the most poorly formatted treaty ever.

What gives? In a new working paper, I argue that undergoing such a reversal works to the benefit of potential proliferators. Indeed, potential proliferators can extract the entire surplus by divesting in this manner.

In short, the logic is as follows. Opponents (like the United States versus Iran) can deal with the proliferation problem in one of two ways. First, they can give “carrots” by striking a deal with the nuclearizing state. These types of deals provide enough benefits to potential proliferators that building weapons is no longer profitable. Consequently, and perhaps surprisingly, they are credible even in the absence of effective monitoring institutions.

Second, opponents can leverage the “stick” in the form of preventive war. The monitoring problem makes this difficult, though. Sometimes following through on the preventive war threat shuts down a real project. Sometimes preventive war is just a bluff. Sometimes opponents end up fighting a target that was not even investing in proliferation. Sometimes the potential proliferator can successfully and secretly obtain a nuclear weapon. No matter what, though, this is a mess of inefficiency, both from the cost of war and the cost of proliferation.

Naturally, the opponent chooses the option that is cheaper for it. So if the cost of preventive war is sufficiently low, it goes in that direction. In contrast, if the price of concessions is relatively lower, carrots are preferable.

Note that one determinant of the opponent’s choice is the cost of proliferating. When building weapons is cheap, the concessions necessary to convince the potential proliferator not to build are very high. But if proliferation is very expensive, then making the deal looks very attractive to the opponent.

This is where nuclear reversals like those built into the JCPOA come into play. Think about the exact proliferation cost that flips the opponent’s preference from sticks to carrots. Below that line, the inefficiency weighs down everyone’s payoff. Right above that line, efficiency reigns supreme. But the opponent is right at indifference at this point. Thus, the entire surplus shifts to the potential proliferator!

The following payoff graph drives home this point. A is the potential proliferator; B is the opponent; k* is the exact value that flips the opponent from the stick strategy to the carrot strategy:

Making proliferation more difficult can work in your favor.

If you are below k*, the opponent opts for the preventive war threat, weighing down everyone’s payoff. But jump above k*, and suddenly the opponent wants to make a deal. Note that everyone’s payoff is greater under these circumstances because there is no deadweight loss built into the system.
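A deliberately stylized parameterization reproduces that jump at k*. To be clear, these are my own toy numbers, not the model in the working paper: the surplus is normalized to 1, w is the proliferator's value of a finished weapon before paying the building cost k, and the preventive war threat leaves both sides fixed, inefficient payoffs:

```python
# Stylized payoffs for a unit surplus (toy numbers, not the paper's model):
w = 0.9             # proliferator's value of a finished weapon, before costs
sA, sB = 0.2, 0.45  # inefficient payoffs under the preventive-war threat

def payoffs(k):
    """(proliferator, opponent) payoffs as a function of proliferation cost k."""
    carrot = max(w - k, 0.0)   # smallest concession that deters building
    if 1 - carrot >= sB:       # buying off the proliferator beats the war threat
        return carrot, 1 - carrot
    return sA, sB              # otherwise the opponent relies on the stick

k_star = w - (1 - sB)          # cost at which the opponent is indifferent
print("k* =", k_star)
for k in (0.1, 0.36, 0.6):     # below k*, just above k*, well above k*
    print(k, payoffs(k))
```

Just below k* the proliferator is stuck at the inefficient war payoff; just above it, the opponent switches to carrots and the proliferator's payoff jumps, then declines again as k rises further.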

Thus, imagine that you are a potential proliferator living in a world below k*. If you do nothing, your opponent is going to credibly threaten preventive war against you. However, if you increase the cost of proliferating—say, by agreeing to measures like those in the JCPOA—suddenly you make out like a bandit. As such, you obviously divest your program.

What does this say about Iran? Well, it indicates that a lot of the policy discussion is misplaced for a few reasons:

  1. These sorts of agreements work even in the absence of effective monitoring institutions. So while monitoring might be nice, it is definitely not necessary to avoid a nuclear Iran. (The paper clarifies exactly why this works, which could be the subject of its own blog post.)
  2. Iranian refusal to agree to further restrictions is not proof positive of some secret plan to proliferate. Looking back at the graph, note that while some reversal works to Iran’s benefit, anything past k* decreases its payoff. As such, by standing firm, Iran may be playing a delicate balancing game to get exactly to k* and no further.
  3. These deals primarily benefit potential proliferators. This might come as a surprise. After all, potential proliferators do not have nuclear weapons at the start of the interaction, have to pay costs to acquire those weapons, and can have their efforts erased if the opponent decides to initiate a preventive war. Yet the potential proliferators can extract all of the surplus from a deal if they are careful.
  4. In light of (3), it is not surprising that a majority of Americans believe that Iran got the better end of the deal. But that’s not inherently because Washington bungled the negotiations. Rather, despite all the military power the United States has, these types of interactions inherently deal us a losing hand.

The paper works through the logic of the above argument and discusses the empirical implications in greater depth. Please take a look at it; I’d love to hear your comments.

Bargaining Power and the Iran Deal

Today’s post is not an attempt to give a full analysis of the Iran deal.[1] Rather, I just want to make a quick point about how the structure of negotiations greatly favors the Obama administration.

Recall the equilibrium of an ultimatum game. When two parties are trying to divide a bargaining pie and one side makes a take-it-or-leave-it offer, that proposer receives the entire benefit from bargaining. In fact, even if negotiations can continue past a single offer, as long as a single person controls all of the offers, the receiver still receives none of the surplus.

This result makes a lot of people feel uncomfortable. After all, the outcomes are far from fair. Fortunately, in real life, people are rarely constrained in this way. If I don’t like the offer you propose to me, I can always propose a counteroffer. And if you don’t like that, nothing stops you from making a counter-counteroffer. That type of negotiation is called Rubinstein bargaining, and it ends with an even split of the pie.
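For the record, the Rubinstein result has a closed form. With a common per-round discount factor delta, the first proposer's equilibrium share of a unit pie is 1/(1+delta): delta = 0 recovers the ultimatum game, and as players grow patient (delta near 1), the split approaches the even division described above. A minimal sketch:

```python
def rubinstein_shares(delta):
    """Split of a unit pie under alternating offers with a common
    discount factor delta: the proposer gets 1/(1+delta)."""
    proposer = 1 / (1 + delta)
    return proposer, 1 - proposer

for delta in (0.0, 0.5, 0.9, 0.99):
    print(delta, rubinstein_shares(delta))
```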

In my book on bargaining, though, I point out that there are some prominent exceptions where negotiations take the form of an ultimatum game. For example, when returning a security deposit, your former landlord can write you a check and leave it at that. You could try suggesting a counteroffer, but the landlord doesn’t have to pay attention—you already have the check, and you need to decide whether that’s better than going to court or not. This helps explain why renters often dread the move out.

Unfortunately for members of Congress, “negotiations” between the Obama administration and Congress are more like security deposits than haggling over the price of strawberries at a farmer’s market. If Congress rejects the deal (which would require overriding a presidential veto), they can’t go to Iran and negotiate a new deal for themselves. The Obama administration controls dealings with Iran, giving it all of the proposal power. Bargaining theory would therefore predict that the Obama administration will be very satisfied[2], while Congress will find the deal about as attractive as if there were no deal at all.

And that’s basically what we are seeing right now. Congress is up in arms over the deal (hehe). They are going to make a big show about what they claim is an awful agreement, but they don’t have any say about the terms beyond an up/down vote. That—combined with the fact that Obama only needs 34 senators to get this to work—means that the Obama administration is going to receive a very favorable deal for itself.

[1] Here is my take on why such deals work. The paper is a bit dated, but it gets the point across.

[2] I mean that the Obama administration will be very satisfied by the deal insofar as it relates to its disagreement with Congress. It might not be so satisfied by the deal insofar as it relates to its disagreement with Iran.

The Game Theory of the Cardinals/Astros Spying Affair

The NY Times reported today that the St. Louis Cardinals hacked the Houston Astros’ internal files, including information on the trade market. I suspect that everyone has a basic understanding of why the Cardinals would find this information useful. “Knowledge is power,” as they say. Heck, the United States spends $52.6 billion each year on spying. But how game theorists quantify this intuition is both interesting and under-appreciated. That is the topic of this post.

Why Trade?
Trades are very popular in baseball, and the market will essentially take over sports headlines as we approach the July 31 trading deadline. Teams like to trade for the same reason countries like to trade with each other. Entity A has a lot of object X but lacks Y, while Entity B has a lot of object Y but lacks X. So teams swap a shortstop for an outfielder, and bad teams exchange their best players for good teams’ prospects. Everyone wins.

However, the extent to which one side wins also matters. If the Angels trade a second baseman to the Dodgers for a pitcher, they are happier than if they have to trade that same second baseman for that same pitcher and pay an additional $1 million to the Dodgers. Figuring out exactly what to offer is straightforward when each side is aware of exactly how much the other values all the components. In fact, bargaining theory indicates that teams should reach such deals rapidly. Unfortunately, life is not so simple.

The Risk-Return Tradeoff
What does a team do when it isn’t sure of the other side’s bottom line? It faces what game theorists call a risk-return tradeoff. Suppose the Angels know that the Dodgers are not willing to trade the second baseman for the pitcher straight up. Instead, the Angels know that the Dodgers need either $1 million or $5 million to sweeten the deal. While the Angels would be willing to make the trade at either price, they are not sure exactly what the Dodgers require.

For simplicity, suppose the Angels can only make a single take-it-or-leave-it offer. They have two choices. First, they can offer the additional $5 million. This is safe and guarantees the trade. However, if the Dodgers were actually willing to accept only $1 million, the Angels unnecessarily waste $4 million.

Alternatively, the Angels could gamble that the Dodgers will take the smaller $1 million amount. If this works, the Angels receive a steal of a deal. If the Dodgers actually needed $5 million, however, the Angels burned an opportunity to complete a profitable trade.

To generalize, the risk-return tradeoff says the following: the more one offers, the more likely the other side is to accept the deal. Yet, simultaneously, the more one offers, the worse that deal becomes for a proposer. Thus, the more you risk, the greater return you receive when the gamble works, but the gamble also fails more often.

 

Knowledge Is Power
The risk-return tradeoff allows us to precisely quantify the cost of uncertainty. In the above example, offering the safe amount wastes $4 million times the probability that the Dodgers were only willing to accept $1 million. Meanwhile, the aggressive offer wastes the Angels’ value for the trade times the probability that the Dodgers needed $5 million to accept the deal; this is because the trade fails to occur under these circumstances. Consequently, the Angels are damned-if-they-do and damned-if-they-don’t. The risk-return tradeoff forces them to figure out how to minimize their losses.
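Here is that damned-if-you-do arithmetic as a sketch, with a made-up net gain g for the Angels (their value for the trade after paying the full $5 million; every number besides the $1M/$5M demands is my own assumption):

```python
def expected_gain(offer, p, g=3.0):
    """Angels' expected gain (in $ millions) from offering extra cash.
    p is the probability the Dodgers only need $1M; g is the Angels'
    hypothetical net gain when they pay the full $5M sweetener."""
    if offer == 5:
        return g              # safe offer: the trade always goes through
    if offer == 1:
        return p * (g + 4)    # risky offer: works only if the price is low
    raise ValueError("only the $1M and $5M offers matter here")

for p in (0.2, 0.5, 0.8):
    best = max((1, 5), key=lambda o: expected_gain(o, p))
    print(f"p={p}: offer ${best}M, expected gain {expected_gain(best, p):.1f}")
```

With g = 3, the gamble pays off whenever p exceeds g/(g + 4) = 3/7; that single inequality is the risk-return tradeoff.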

At this point, it should be clear why the Cardinals would value the Astros’ secret information. The more information the Cardinals have about other teams’ minimal demands, the better they will fare in trade negotiations. The Astros’ database provided such information. Some of it was about what the Astros were looking for. Some of it was about what the Astros thought others were looking for. Either way, extra information for the Cardinals organization would decrease the likelihood of miscalculating in trade negotiations. And apparently such knowledge is so valuable that it was worth the risk of getting caught.

Park Place Is Still Worthless: The Game Theory of McDonald’s Monopoly

McDonald’s Monopoly begins again today. With that in mind, I thought I would update my explanation of the game theory behind the value of each piece, especially since my new book on bargaining connects the same mechanism to the De Beers diamond monopoly, star free agent athletes, and a shady business deal between Google and Apple. Here’s the post, mostly in its original form:

__________________________________

McDonald’s Monopoly is back. As always, if you collect Park Place and Boardwalk, you win a million dollars. I just got a Park Place. That’s worth about $500,000, right?

Actually, it is worth nothing. Not close to nothing, but absolutely, positively nothing.

It helps to know how McDonald’s structures the game. Despite the apparent value of Park Place, McDonald’s floods the market with Park Place pieces, probably to trick naive players into thinking they are close to riches. I do not have an exact number, but I would imagine there are easily tens of thousands of Park Places floating around. However, only one or two Boardwalks are available. (Again, I do not know the exact number, but it is equal to the number of million dollar prizes McDonald’s wants to give out.)

Even with that disparity, you might think Park Place maintains some value. Yet, it is easy to show that this intuition is wrong. Imagine you have a Boardwalk piece and you corral two Park Place holders into a room. (This works if you gathered thousands of them as well, but you only need two of them for this to work.) You tell them that you are looking to buy a Park Place piece. Each of them must write their sell price on a piece of paper. You will complete the transaction at the lowest price. For example, if one person wrote $500,000 and the other wrote $400,000, you would buy it from the second at $400,000.

Assume that sell prices are continuous and weakly positive, and that ties are broken by coin flip. How much should you expect to pay?

The answer is $0.

The proof is extremely simple. It is clear that both bidding $0 is a Nash equilibrium. (Check out my textbook or watch my YouTube videos if you do not know what a Nash equilibrium is.) If either Park Place owner deviates to a positive amount, that deviator would lose, since the other guy is bidding 0. So neither player can profitably deviate. Thus, both bidding 0 is a Nash equilibrium.

What if one bid $x greater than or equal to 0 and the other bid $y > x? Then the person bidding y could profitably deviate to any amount between y and x. He still wins the piece, but he pays less for it. Thus, this is a profitable deviation and bids x and y are not an equilibrium.

The final case is when both players bid the same amount z > 0. In expectation, both earn z/2. Regardless of the tiebreaking mechanism, one player must lose at least half the time. That player can profitably deviate to 3z/4 and win outright. This sell price is larger than the expectation.

This exhausts all possibilities. So both bidding $0 is the unique Nash equilibrium. Despite requiring another piece, your Boardwalk is worth a full million dollars.
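The unraveling can also be seen as crude best-response dynamics in discrete cents (a sketch of the argument above, not an exact model of it): each seller undercuts the rival by a cent, since selling cheap beats being the unsold Park Place holder.

```python
def best_response(rival):
    """Undercut the rival's ask by one cent when possible: selling at
    any positive price beats being the losing, unsold Park Place holder."""
    return max(rival - 1, 0)

# Opening asks in cents (scaled down so the loop finishes quickly).
p1, p2 = 500, 400
while (p1, p2) != (best_response(p2), best_response(p1)):
    p1, p2 = best_response(p2), best_response(p1)
print(p1, p2)   # the asks race each other down to zero
```

This undercut rule glosses over the penny-level ties mentioned in Note 1, but it shows why no positive price survives the competition.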

What is going wrong for the Park Place holders? Supply simply outstrips demand. Any person with a Park Place but no Boardwalk walks away with nothing, which ultimately drives the price of Park Place down to nothing as well.

Moral of the story: Don’t get excited if you get a Park Place piece.

Note 1: If money is discrete down to the cent, then the winning bid could be $0 or $0.01. (With the right tie breaker, it could also be $0.02.) Either way, this is not good for owners of Park Place.

Note 2: In practice, we might see Park Place sell for some marginally higher value. That is because it is (slightly) costly for a Boardwalk owner to seek out and solicit bids from more Park Place holders. However, Park Place itself is not creating any value here—it’s purely the transaction cost.

Note 3: An enterprising Park Place owner could purchase all other Park Place pieces and destroy them. This would force the Boardwalk controller to split the million dollars. While that is reasonable to do when there are only two individuals like the example, good luck buying all Park Places in reality. (Transaction costs strike again!)

__________________________________

Now time for an update. What might not have been clear in the original post is that McDonald’s Monopoly is a simple illustration of a matching problem. Whenever you have a situation with n individuals who need one of m partners, all of the economic benefits go to the partners if m < n. The logic is the same as above. If an individual does not obtain a partner, he receives no profit. This makes him desperate to partner with someone, even if it means drastically dropping his share of the money to be made. But then the underbidding process begins until the m partners are taking all of the revenues for themselves.

In the book, I have a more practical example involving star free agent athletes. For example, there is only one LeBron James. Every team would like to sign him to improve its chances of winning. Yet this bidding ultimately pushes the final contract price so high that the team doesn’t actually benefit much (or at all) from signing James.

Well, that’s how it would work if professional sports organizations were not scheming to stop this. The NBA in particular has a maximum salary. So even if LeBron James is worth $50 million per season, he won’t be paid that much. (The exact amount a player can earn is complicated.) This ensures that the team that signs him will benefit from the transaction but takes money away from James.

Non-sports businesses scheme in similar ways. More than 100 years ago, the De Beers diamond company realized that new mine discoveries would mean that diamond supply would soon outstrip demand. This would kill diamond prices. So De Beers began purchasing tons of mines to intentionally limit production and increase prices. Similarly, Apple and Google once had an informal “no compete” agreement not to poach each other’s employees. Without the outside bidder, a superstar computer engineer would not be able to increase his wage to the fair market value. Of course, this is highly illegal. Employees filed a $9 billion anti-trust lawsuit when they learned of this. The parties eventually settled the suit out of court for an undisclosed amount.

To sum up, matching is good for those in demand and bad for those in high supply. With that in mind, good luck finding that Boardwalk!

What Does Game Theory Say about Negotiating a Pay Raise?

A common question I get is what game theory tells us about negotiating a pay raise. Because I just published a book on bargaining, this is something I have been thinking about a lot recently. Fortunately, I can narrow the fundamentals to three simple points:

1) Virtually all of the work is done before you sit down at the table.
When you ask the average person how they negotiated their previous raise, you will commonly hear anecdotes about how that individual said some (allegedly) cunning things, (allegedly) outwitted his or her boss, and received a hefty pay hike. Drawing inferences from this is problematic for a number of reasons:

  1. Anecdotal “evidence” isn’t evidence.
  2. The reason for the raise might have been orthogonal to what was said.
  3. Worse, the raise might have been despite what was said.
  4. It assumes that the boss is more concerned about dazzling words than money, his own job performance, and institutional constraints.

The fourth point is especially concerning. Think about the people who control your salaries. They did not get their job because they are easily persuaded by rehearsed speeches. No, they are there because they are good at making smart hiring decisions and keeping salaries low. Moreover, because this is their job, they engage in this sort of bargaining frequently. It would thus be very strange for someone like that to make such a rookie mistake.

So if you think you can just be clever at the bargaining table, you are going to have a bad time. Indeed, the bargaining table is not a game of chess. It should simply be a declaration of checkmate. The real work is building your bargaining leverage ahead of time.

2) Do not be afraid to reject offers and make counteroffers.
Imagine a world where only one negotiator had the ability to make an offer, while the other could only accept or reject that proposal. Accepting implements the deal; rejecting means that neither party enjoys the benefits of mutual cooperation. What portion of the economic benefits will the proposer take? And how much of the benefits will go to the receiver?

You might guess that the proposer has the advantage here. And you’d be right. What surprises most people, however, is the extent of the advantage: the proposer reaps virtually all of the benefits of the relationship, while the receiver is barely any better off than had the parties not struck a deal.

How do we know this? Game theory allows us to study this exact scenario rigorously. Indeed, the setup has a specific name: the ultimatum game. It shows that a party with the exclusive right to make proposals has all of the bargaining power.

 

That might seem like a big problem if you are the one receiving the offers. Fortunately, the problem is easy to solve in practice. Few real life bargaining situations expressly prohibit parties from making counteroffers. (As I discuss in the book, return of security deposits is one such exception, and we all know that turns out poorly for the renter—i.e., the receiver of the offer.) Even the ability to make a single counteroffer drastically increases an individual’s bargaining power. And if the parties could potentially bargain back and forth without end—called Rubinstein bargaining, perhaps the most realistic of proposal structures—bargaining equitably divides the benefits.

As the section header says, the lesson here is that you should not be afraid to reject low offers and propose a more favorable division. Yet people often fail to do this. This is especially common at the time of hire. After culling through all of the applications, a hiring manager might propose a wage. The new employee, deathly afraid of losing the position, meekly accepts.

Of course, the new employee is not fully appreciating the company’s incentives. By making the proposal, the company has signaled that the individual is the best available candidate. This inevitably gives him a little bit of wiggle room with his wage. He should exercise this leverage and push for a little more—especially because starting wage is often the point of departure for all future raise negotiations.

3) Increase your value to other companies.
Your company does not pay you a lot of money to be nice to you. It pays you because it has no other choice. Although many things can force a company’s hand in this manner, competing offers are particularly important.

Imagine that your company values your work at $50 per hour. If you can only work for them, due to the back-and-forth logic from above, we might imagine that your wage will land in the neighborhood of $25 per hour. However, suppose that a second company exists that is willing to pay you up to $40 per hour. Now how much will you make?

The answer is no less than $40 per hour. Why? Well, suppose not. If your current company is only paying you, say, $30 per hour, you could go to the other company and ask for a little bit more. They would be obliged to pay you that since they value you up to $40 per hour. But, of course, your original company values you up to $50 per hour. So they have incentive to ultimately outbid the other company and keep you under their roof.
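That back-and-forth can be written as a toy bidding war, using the hypothetical dollar figures from this example:

```python
def bidding_war(v_current=50, v_outside=40, start=30, step=1):
    """Each firm tops the other's offer while the wage is still below its
    own valuation; the higher-valuation firm wins at (roughly) the
    losing firm's valuation. Values are the example's hypothetical $/hour."""
    wage, employer = start, "current"
    while True:
        rival_value = v_outside if employer == "current" else v_current
        if wage + step > rival_value:   # the rival won't top this offer
            return wage, employer
        wage += step
        employer = "outside" if employer == "current" else "current"

print(bidding_war())
```

The wage stops at the outside firm's $40 valuation, and the higher-valuation employer keeps the worker; no actual job change is required.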

(This same mechanism means that Park Place is worthless in McDonald’s monopoly.)

Game theorists call such alternatives “outside options”; the better your outside options are, the more attractive the offers your bargaining partner has to make to keep you around. Consequently, being attractive to other companies can get you a raise with your current company even if you have no serious intention to leave. Rather, you can diplomatically point out to your boss that a person with your particular skill set typically makes $X per year and that your wage should be commensurate with that amount. Your boss will see this as a thinly veiled threat that you might leave the company. Still, if the company values your work, she will have no choice but to bump you to that level. And if she doesn’t…well, you are valuable to other companies, so you can go make that amount of money elsewhere.

Conclusion
Bargaining can be a scary process. Unfortunately, this fear blinds us to some of the critical facets of the process. Negotiations are strategic; only thinking about your worries and concerns means you are ignoring your employer’s worries and concerns. Yet you can use those opposing worries and concerns to coerce a better deal for yourself. Employers do not hold all of the power. Once you realize this, you can take advantage of the opposing weakness at the bargaining table.

I talk about all of these issues at greater length in my book, Game Theory 101: Bargaining. I also cover a bunch of real-world applications of these and many other theories. If this stuff seems interesting to you, you should check it out!

Park Place Is Worthless: The Game Theory of McDonald’s Monopoly

McDonald’s Monopoly is back. As always, if you collect Park Place and Boardwalk, you win a million dollars. I just got a Park Place. That’s worth about $500,000, right?

Actually, as I show in my book on bargaining, it is worth nothing. Not close to nothing, but absolutely, positively nothing.

It helps to know how McDonald’s structures the game. Despite the apparent value of Park Place, McDonald’s floods the market with Park Place pieces, probably to trick naive players into thinking they are close to riches. I do not have an exact number, but I would imagine there are easily tens of thousands of Park Places floating around. However, only one or two Boardwalks are available. (Again, I do not know the exact number, but it is equal to the number of million dollar prizes McDonald’s wants to give out.)

Even with that disparity, you might think Park Place maintains some value. Yet, it is easy to show that this intuition is wrong. Imagine you have a Boardwalk piece and you corral two Park Place holders into a room. (This works if you gathered thousands of them as well, but you only need two of them for this to work.) You tell them that you are looking to buy a Park Place piece. Each of them must write their sell price on a piece of paper. You will complete the transaction at the lowest price. For example, if one person wrote $500,000 and the other wrote $400,000, you would buy it from the second at $400,000.

Assume that sell prices are continuous and weakly positive, and that ties are broken by coin flip. How much should you expect to pay?

The answer is $0.

The proof is extremely simple. It is clear that both bidding $0 is a Nash equilibrium. (Check out my textbook or watch my YouTube videos if you do not know what a Nash equilibrium is.) If either Park Place owner deviates to a positive amount, that deviator would lose, since the other guy is bidding 0. So neither player can profitably deviate. Thus, both bidding 0 is a Nash equilibrium.

What if one bid $x greater than or equal to 0 and the other bid $y > x? Then the person bidding y could profitably deviate to any amount between y and x. He still wins the piece, but he pays less for it. Thus, this is a profitable deviation and bids x and y are not an equilibrium.

The final case is when both players bid the same amount z > 0. In expectation, both earn z/2. Regardless of the tiebreaking mechanism, one player must lose at least half the time. That player can profitably deviate to 3z/4 and win outright. This sell price is larger than the expectation.

This exhausts all possibilities. So both bidding $0 is the unique Nash equilibrium. Despite requiring another piece, your Boardwalk is worth a full million dollars.

What is going wrong for the Park Place holders? Supply simply outstrips demand. Any person with a Park Place but no Boardwalk walks away with nothing, which ultimately drives the price of Park Place down to nothing as well.

Moral of the story: Don’t get excited if you get a Park Place piece.

Note 1: If money is discrete down to the cent, then the winning bid could be $0 or $0.01. (With the right tie breaker, it could also be $0.02.) Either way, this is not good for owners of Park Place.

Note 2: In practice, we might see Park Place sell for some marginally higher value. That is because it is (slightly) costly for a Boardwalk owner to seek out and solicit bids from more Park Place holders. However, Park Place itself is not creating any value here—it’s purely the transaction cost.

Note 3: An enterprising Park Place owner could purchase all other Park Place pieces and destroy them. This would force the Boardwalk controller to split the million dollars. While that is reasonable to do when there are only two individuals like the example, good luck buying all Park Places in reality. (Transaction costs strike again!)