Tag Archives: Game Theory

Park Place Is Still Worthless: The Game Theory of McDonald’s Monopoly

McDonald’s Monopoly begins again today. With that in mind, I thought I would update my explanation of the game theory behind the value of each piece, especially since my new book on bargaining connects the same mechanism to the De Beers diamond monopoly, star free agent athletes, and a shady business deal between Google and Apple. Here’s the post, mostly in its original form:


McDonald’s Monopoly is back. As always, if you collect Park Place and Boardwalk, you win a million dollars. I just got a Park Place. That’s worth about $500,000, right?

Actually, it is worth nothing. Not close to nothing, but absolutely, positively nothing.

It helps to know how McDonald’s structures the game. Despite the apparent value of Park Place, McDonald’s floods the market with Park Place pieces, probably to trick naive players into thinking they are close to riches. I do not have an exact number, but I would imagine there are easily tens of thousands of Park Places floating around. However, McDonald’s makes only one or two Boardwalks available. (Again, I do not know the exact number, but it is equal to the number of million dollar prizes McDonald’s wants to give out.)

Even with that disparity, you might think Park Place maintains some value. Yet it is easy to show that this intuition is wrong. Imagine you have a Boardwalk piece and you corral two Park Place holders into a room. (This works just as well with thousands of them, but two are all you need.) You tell them that you are looking to buy a Park Place piece. Each of them must write their sell price on a piece of paper. You will complete the transaction at the lowest price. For example, if one person wrote $500,000 and the other wrote $400,000, you would buy it from the second at $400,000.

Assume that sell prices are continuous and weakly positive, and that ties are broken by coin flip. How much should you expect to pay?

The answer is $0.

The proof is extremely simple. First, both bidding $0 is a Nash equilibrium. (Check out my textbook or watch my YouTube videos if you do not know what a Nash equilibrium is.) If either Park Place owner deviates to a positive amount, the deviator loses the sale, since the other seller is still asking $0, so the deviation leaves him no better off. Thus, neither player can profitably deviate, and both bidding $0 is a Nash equilibrium.

What if one bid $x greater than or equal to 0 and the other bid $y > x? Then the person bidding y never makes the sale. If x > 0, he could profitably deviate to any price between 0 and x: he would then win the sale at a positive price instead of earning nothing. And if x = 0, the person bidding x is giving his piece away for free, so he could profitably deviate to any price between 0 and y and still make the sale, now at a positive price. Either way, bids x and y are not an equilibrium.

The final case is when both players bid the same amount z > 0. Regardless of the tiebreaking mechanism, one player must lose at least half the time, so that player expects to earn at most z/2. He can profitably deviate to, say, 3z/4 and win outright. That sell price is larger than his expected earnings from tying, so this is not an equilibrium either.

This exhausts all possibilities. So both bidding $0 is the unique Nash equilibrium. Despite requiring another piece, your Boardwalk is worth a full million dollars.
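This race to the bottom is easy to see numerically. Below is a minimal sketch of best-response dynamics in the two-seller game, assuming each Park Place holder repeatedly undercuts the other’s standing ask by 10% (an illustrative revision rule, not the unique best response):

```python
# Best-response dynamics in the two-seller "name your price" game.
# The opening asks come from the example in the post; the 10% undercut
# is an illustrative assumption.
prices = [500_000.0, 400_000.0]
for round_ in range(200):
    i = round_ % 2                    # sellers alternate in revising asks
    prices[i] = prices[1 - i] * 0.9   # undercut the rival's standing price
print(prices)  # both asks collapse toward $0
```

Any undercutting rule that shaves something off the rival’s price produces the same collapse, which is exactly the unique equilibrium derived above.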

What is going wrong for the Park Place holders? Supply simply outstrips demand. Any person with a Park Place but no Boardwalk walks away with nothing, which ultimately drives the price of Park Place down to nothing as well.

Moral of the story: Don’t get excited if you get a Park Place piece.

Note 1: If money is discrete down to the cent, then the winning bid could be $0 or $0.01. (With the right tie breaker, it could also be $0.02.) Either way, this is not good for owners of Park Place.

Note 2: In practice, we might see Park Place sell for some marginally higher value. That is because it is (slightly) costly for a Boardwalk owner to seek out and solicit bids from more Park Place holders. However, Park Place itself is not creating any value here—it’s purely the transaction cost.

Note 3: An enterprising Park Place owner could purchase all other Park Place pieces and destroy them. This would force the Boardwalk controller to split the million dollars. While that is reasonable to do when there are only two individuals, as in the example, good luck buying up every Park Place in reality. (Transaction costs strike again!)


Now time for an update. What might not have been clear in the original post is that McDonald’s Monopoly is a simple illustration of a matching problem. Whenever you have a situation with n individuals who need one of m partners, all of the economic benefits go to the partners if m < n. The logic is the same as above. If an individual does not obtain a partner, he receives no profit. This makes him desperate to partner with someone, even if it means drastically dropping his share of the money to be made. But then the underbidding process begins until the m partners are taking all of the revenues for themselves.

In the book, I have a more practical example involving star free agent athletes. For example, there is only one LeBron James. Every team would like to sign him to improve their chances of winning. Yet this ultimately drives the final contract price so high that the team doesn’t actually benefit much (or at all) from signing James.

Well, that’s how it would work if professional sports organizations were not scheming to stop this. The NBA in particular has a maximum salary. So even if LeBron James is worth $50 million per season, he won’t be paid that much. (The exact amount a player can earn is complicated.) This ensures that the team that signs him will benefit from the transaction but takes money away from James.

Non-sports businesses scheme in similar ways. More than 100 years ago, the De Beers diamond company realized that new mine discoveries would mean that diamond supply would soon outstrip demand. This would kill diamond prices. So De Beers began purchasing mines to intentionally limit production and increase prices. Similarly, Apple and Google once had an informal agreement not to poach each other’s employees. Without the outside bidder, a superstar computer engineer could not raise his wage to its fair market value. Of course, this is highly illegal. Employees filed a $9 billion anti-trust lawsuit when they learned of this. The parties eventually settled the suit out of court for an undisclosed amount.

To sum up, matching is good for those in demand and bad for those in high supply. With that in mind, good luck finding that Boardwalk!

What Does Game Theory Say about Negotiating a Pay Raise?

A common question I get is what game theory tells us about negotiating a pay raise. Because I just published a book on bargaining, this is something I have been thinking about a lot recently. Fortunately, I can narrow the fundamentals to three simple points:

1) Virtually all of the work is done before you sit down at the table.
When you ask the average person how they negotiated their previous raise, you will commonly hear anecdotes about how that individual said some (allegedly) cunning things, (allegedly) outwitted his or her boss, and received a hefty pay hike. Drawing inferences from this is problematic for a number of reasons:

  1. Anecdotal “evidence” isn’t evidence.
  2. The reason for the raise might have been orthogonal to what was said.
  3. Worse, the raise might have been despite what was said.
  4. It assumes that the boss is more concerned about dazzling words than money, his own job performance, and institutional constraints.

The fourth point is especially concerning. Think about the people who control your salaries. They did not get their job because they are easily persuaded by rehearsed speeches. No, they are there because they are good at making smart hiring decisions and keeping salaries low. Moreover, because this is their job, they engage in this sort of bargaining frequently. It would thus be very strange for someone like that to make such a rookie mistake.

So if you think you can just be clever at the bargaining table, you are going to have a bad time. Indeed, the bargaining table is not a game of chess. It should simply be a declaration of checkmate. The real work is building your bargaining leverage ahead of time.

2) Do not be afraid to reject offers and make counteroffers.
Imagine a world where only one negotiator had the ability to make an offer, while the other could only accept or reject that proposal. Accepting implements the deal; rejecting means that neither party enjoys the benefits of mutual cooperation. What portion of the economic benefits will the proposer take? And how much of the benefits will go to the receiver?

You might guess that the proposer has the advantage here. And you’d be right. What surprises most people, however, is the extent of the advantage: the proposer reaps virtually all of the benefits of the relationship, while the receiver is barely any better off than had the parties not struck a deal.

How do we know this? Game theory allows us to study this exact scenario rigorously. Indeed, the setup has a specific name: the ultimatum game. It shows that a party with the exclusive right to make proposals has all of the bargaining power.


That might seem like a big problem if you are the one receiving the offers. Fortunately, the problem is easy to solve in practice. Few real life bargaining situations expressly prohibit parties from making counteroffers. (As I discuss in the book, return of security deposits is one such exception, and we all know that turns out poorly for the renter—i.e., the receiver of the offer.) Even the ability to make a single counteroffer drastically increases an individual’s bargaining power. And if the parties could potentially bargain back and forth without end—called Rubinstein bargaining, perhaps the most realistic of proposal structures—bargaining equitably divides the benefits.
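The contrast between the ultimatum game and back-and-forth bargaining can be made concrete with the standard Rubinstein alternating-offers formula, where each player discounts future rounds by a factor delta. The sketch below is a textbook calculation, not a model of any specific negotiation:

```python
def rubinstein_share(delta_proposer, delta_responder):
    """First proposer's equilibrium share of a unit pie in Rubinstein
    alternating-offers bargaining: (1 - d2) / (1 - d1 * d2)."""
    return (1 - delta_responder) / (1 - delta_proposer * delta_responder)

# If the responder effectively cannot counteroffer (delta = 0), the game
# collapses to an ultimatum: the proposer takes everything.
print(rubinstein_share(0.9, 0.0))    # 1.0
# When both sides are patient, the split approaches 50/50.
print(rubinstein_share(0.99, 0.99))  # ~0.5025
```

The single knob that moves the proposer’s share from everything to half is the responder’s ability and willingness to come back with a counteroffer.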

As the section header says, the lesson here is that you should not be afraid to reject low offers and propose a more favorable division. Yet people often fail to do this. This is especially common at the time of hire. After culling through all of the applications, a hiring manager might propose a wage. The new employee, deathly afraid of losing the position, meekly accepts.

Of course, the new employee is not fully appreciating the company’s incentives. By making the proposal, the company has signaled that the individual is the best available candidate. This inevitably gives him a little bit of wiggle room with his wage. He should exercise this leverage and push for a little more—especially because starting wage is often the point of departure for all future raise negotiations.

3) Increase your value to other companies.
Your company does not pay you a lot of money to be nice to you. It pays you because it has no other choice. Although many things can force a company’s hand in this manner, competing offers are particularly important.

Imagine that your company values your work at $50 per hour. If you can only work for them, due to the back-and-forth logic from above, we might imagine that your wage will land in the neighborhood of $25 per hour. However, suppose that a second company exists that is willing to pay you up to $40 per hour. Now how much will you make?

The answer is no less than $40 per hour. Why? Well, suppose not. If your current company is only paying you, say, $30 per hour, you could go to the other company and ask for a little bit more. They would be obliged to pay you that since they value you up to $40 per hour. But, of course, your original company values you up to $50 per hour. So they have incentive to ultimately outbid the other company and keep you under their roof.
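A toy bidding war makes the point. The valuations below are the post’s numbers ($50 and $40 per hour); the $30 starting wage and $1 raises are hypothetical:

```python
# Two employers bid up your wage until the lower-value firm drops out.
values = {"current": 50.0, "rival": 40.0}  # most each firm would pay per hour
wage, employer = 30.0, "current"           # hypothetical starting point
while True:
    other = "rival" if employer == "current" else "current"
    if values[other] >= wage + 1.0:  # the outside firm can still top the wage
        wage += 1.0
        employer = other
    else:
        break
print(employer, wage)  # the higher-value firm keeps you at the rival's limit
```

The bidding stops exactly where the lower-value firm hits its ceiling: you stay with your current company, but at $40 per hour rather than $30.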

(This same mechanism is why Park Place is worthless in McDonald’s Monopoly.)

Game theorists call such alternatives “outside options”; the better your outside options are, the more attractive the offers your bargaining partner has to make to keep you around. Consequently, being attractive to other companies can get you a raise with your current company even if you have no serious intention to leave. Rather, you can diplomatically point out to your boss that a person with your particular skill set typically makes $X per year and that your wage should be commensurate with that amount. Your boss will see this as a thinly veiled threat that you might leave the company. Still, if the company values your work, she will have no choice but to bump you to that level. And if she doesn’t…well, you are valuable to other companies, so you can go make that amount of money elsewhere.

Bargaining can be a scary process. Unfortunately, this fear blinds us to some of the critical facets of the process. Negotiations are strategic; only thinking about your worries and concerns means you are ignoring your employer’s worries and concerns. Yet you can use those opposing worries and concerns to coerce a better deal for yourself. Employers do not hold all of the power. Once you realize this, you can take advantage of the opposing weakness at the bargaining table.

I talk about all of these issues at greater length in my book, Game Theory 101: Bargaining. I also cover a bunch of real-world applications of these and a whole bunch of other theories. If this stuff seems interesting to you, you should check it out!

Tesla’s Patent Giveaway Isn’t Altruistic—And That’s Not a Bad Thing

Tesla Motors recently announced that it is opening its electric car patents to competitors. The buzz around the Internet is that this is another case of Tesla’s CEO Elon Musk doing something good for humanity. However, the evidence suggests another explanation: Tesla is doing this to make money, and that’s not a bad thing.

The issue Tesla faces is what game theorists call a coordination problem. Specifically, it is a stag hunt:

For those unfamiliar and who did not watch the video, a stag hunt is the general name for a game where both parties want to coordinate on taking the same action because it gives each side its individually best outcome. However, a party’s worst possible outcome is to take that action while the other side does not. This leads to two reasonable outcomes: both coordinate on the good action and do very well or both do not take that action (because they expect the other one not to) and do poorly.

This is a common problem in emerging markets. The core issue is that some technologies need other technologies to function properly. That is, technology A is worthless without technology B, and technology B is worthless without technology A. Manufacturers of A might want to produce A and manufacturers of B might want to produce B, but they cannot do this profitably without each other’s support.
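The two-outcomes claim is easy to verify by brute force. The sketch below checks every pure-strategy profile of a stag hunt for profitable deviations; the payoff numbers are illustrative placeholders, not figures from any real market:

```python
# Pure-strategy Nash equilibria of a stag hunt, found by brute force.
actions = ["adopt", "wait"]
payoff = {  # (row action, col action) -> (row payoff, col payoff)
    ("adopt", "adopt"): (4, 4),
    ("adopt", "wait"):  (0, 3),
    ("wait", "adopt"):  (3, 0),
    ("wait", "wait"):   (3, 3),
}

def is_equilibrium(a, b):
    """Neither player can gain by unilaterally switching actions."""
    row_ok = all(payoff[(a, b)][0] >= payoff[(d, b)][0] for d in actions)
    col_ok = all(payoff[(a, b)][1] >= payoff[(a, d)][1] for d in actions)
    return row_ok and col_ok

equilibria = [(a, b) for a in actions for b in actions if is_equilibrium(a, b)]
print(equilibria)  # [('adopt', 'adopt'), ('wait', 'wait')]
```

Both coordinating on adoption and both waiting survive as equilibria; only the first is the good one, which is why getting the market to tip matters.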

Take HDTV as a recent example. We are all happy to live in a world of HD: producers now create a better product, and consumers find the images to be far more visually appealing. However, the development of HDTV took longer than it should have. The problem was that producers had no reason to switch over to HD broadcasting until people owned HDTVs. Yet television manufacturers had no reason to create HDTVs until there were HD programs available for consumption. This created an awkward coordination problem in which both producers and manufacturers were waiting around for each other. HDTV only became commonplace after cheaper production costs made the transition less risky for either party.

I imagine car manufacturers faced a similar problem a century ago. Ford and General Motors may have been ready to sell cars to the public, but the public had little reason to buy them without gas stations all around to make it easy to refuel their vehicles. But small business owners had little reason to start up gas stations without a large group of car owners around to purchase from them.

The above problem should make Tesla’s major barrier clear. Tesla has the electric car technology ready. What they lack is a network of charging stations that can make long-distance travel with electric cars practical. Giving away the patents to competitors potentially means more electric cars on the road and more charging stations, without having to spend significant capital that the small company does not have. Tesla ultimately wins because they have a first-mover advantage in developing the technology.

So this is less about altruism and more about self-interest. But that is not a bad thing. 99% of the driving force behind economics is mutual gain. I think this fact gets lost in the modern political/economic debate because there are some (really bad) cases where that is not true. But here, Tesla wins, other car manufacturers win, and consumers win.

Oh, oil producing companies lose. Whatever.

H/T to Dillon Bowman (a student of mine at the University of Rochester) and /u/Mubarmi for inspiring this post.

The Game Theory of Soccer Penalty Kicks

With the World Cup starting today, now is a great time to discuss the game theory behind soccer penalty kicks. This blog post will do three things: (1) show that the penalty kick is a very common type of game, and one that game theory can solve very easily; (2) show that players behave more or less as game theory would predict; and (3) show that a striker becoming more accurate to one side makes him less likely to kick to that side. Why? Read on.

The Basics: Matching Pennies
Penalty kicks are straightforward. A striker lines up with the ball in front of him. He runs forwards and kicks the ball toward the net. The goalie tries to stop it.

Despite the ordering I just listed, the players essentially move simultaneously. Although the goalie dives after the striker has kicked the ball, he cannot actually wait until the ball comes off the foot to decide which way to dive—because the ball moves so fast, it will already be behind him by the time he finishes his dive. So the goalie must pick his strategy before observing any relevant information from the striker.

This type of game is actually very common. Both players pick a side. One player wants to match sides (the goalie), while the other wants to mismatch (the striker). That is, from the striker’s perspective, the goalie wants to dive left when the striker kicks left and dive right when the striker kicks right; the striker wants to kick left when the goalie dives right and kick right when the goalie dives left. This is like a baseball batter trying to guess what pitch the pitcher will throw while the pitcher tries to confuse the batter. Similarly, a basketball shooter wants a defender to break the wrong way to give him an open lane to the basket, while the defender wants to stay lined up with the ball handler.

Because the game is so common, it should not be surprising that game theorists have studied this type of game at length. (Game theory, after all, is the mathematical study of strategy.) The common name for the game is matching pennies. When the sides are equally powerful, the solution is very simple:

If you skipped the video, the solution is for both players to pick each side with equal probability. For penalty kicks, that means the striker kicks left half the time and right half the time; the goalie dives left half the time and dives right half the time.

Why are these optimal strategies? The answer is simple: neither party can be exploited under these circumstances. This might be easier to see by looking at why all other strategies are not optimal. If the striker kicked left 100% of the time, it would be very easy for the goalie to stop the shot—he would simply dive left 100% of the time. In essence, the striker’s predictability allows the goalie to exploit him. This is also true if the striker is aiming left 99% of the time, or 98% of the time, and so forth—the goalie would still want to always dive left, and the striker would not perform as well as he could by randomizing in a less predictable manner.

In contrast, if the striker is kicking left half the time and kicking right half the time, it does not matter which direction the goalie dives—he is equally likely to stop the ball either way. Likewise, if the goalie is diving left half the time and diving right half the time, it does not matter which direction the striker kicks—he is equally likely to score either way.
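That indifference claim is quick to check numerically. The sketch below uses a stripped-down model, assuming a shot scores unless the goalie guesses the correct side:

```python
def save_prob(p_kick_left, p_dive_left):
    """Chance the goalie stops the shot: he saves only when sides match."""
    return p_kick_left * p_dive_left + (1 - p_kick_left) * (1 - p_dive_left)

# Against a 50/50 striker, every goalie strategy saves exactly half the shots:
print(save_prob(0.5, 1.0), save_prob(0.5, 0.5), save_prob(0.5, 0.0))
# Predictability is exploitable: a striker who aims left 90% of the time
# is stopped 90% of the time by a goalie who always dives left.
print(save_prob(0.9, 1.0))  # 0.9
```

The 50/50 mix is the only strategy that leaves the goalie with nothing to exploit, and the symmetric logic holds for the goalie’s dives.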

The key takeaways here are twofold: (1) you have to randomize to not be exploited and (2) you need to think of your opponent’s strategic constraints when choosing your move.

Real Life Penalty Kicks
So that’s the basic theory of penalty kicks. How does it play out in reality?

Fortunately, we have a decent idea. A group of economists (including Freakonomics’ Steve Levitt) once studied the strategies and results of penalty kicks from the French and Italian leagues. They found that players strategize roughly how they ought to.

How did they figure this out? To begin, they used a more sophisticated model than the one I introduced above. Real life penalty kicks differ in two key ways. First, kicking to the left is not the same thing as kicking to the right. A right-footed striker naturally hits the ball harder and more accurately to the left than the right. This means that a ball aimed to the right is more likely to miss the goal completely and more likely to be stopped if the goalie also dives that way. And second, a third strategy for both players is also reasonable: aim to the middle/defend the middle.

Regardless of the additional complications, there are a couple of key generalizations that hold from the logic of the first section. First, a striker’s probability of scoring should be equal regardless of whether he kicks left, straight, or right. Why? Suppose this were not true. Then someone is being unnecessarily exploited in this situation. For example, imagine that strikers are kicking very frequently to the left. Realizing this, goalies are also diving very frequently to the left. This leaves the striker with a small scoring percentage to the left and a much higher scoring percentage when he aims to the undefended right. Thus, the striker should be correcting his strategy by aiming right more frequently. So if everyone is playing optimally, his scoring percentage needs to be equal across all his strategies, otherwise some sort of exploitation is available.

Second, a goalie’s probability of not being scored against must be equal across all of his defending strategies. This follows from the same reason as above: if diving toward one side is less likely to result in a goal, then someone is being exploited who should not be.

All told, this means that we should observe equal probabilities among all strategies. And, sure enough, this is more or less what goes on. Here’s Figure 4 from the article, which gives the percentage of shots that go in for any combination of strategies:


The key places to look are the “total” column and row. The total column for the goalie on the right shows that he is very close to giving up a goal 75% of the time regardless of his strategy. The total row for the striker at the bottom shows more variance—in the data, he scores 81% of the time aiming toward the middle but only 70.1% of the time aiming to the right—but those differences are not statistically significant. In other words, we would expect that sort of variation to occur purely due to chance.

Thus, as far as we can tell, the players are playing optimal strategies as we would suspect. (Take that, you damn dirty apes!)

Relying on Your Weakness
One thing I glossed over in the second part is specifically how a striker’s strategy should change due to the weakness of the right side versus the left. Let’s take care of that now.

Imagine you are a striker with an amazingly accurate left side but a very inaccurate right side. More concretely, you will always hit the target if you shoot left, but you will miss some percentage of the time on the right side. Realizing your weakness, you spend months practicing your right shot and double its accuracy. Now that you have a stronger right side, how will this affect your penalty kick strategy?

The intuitive answer is that it should make you shoot more frequently toward the right—after all, your shot has improved on that side. However, this intuition is not always correct—you may end up shooting less often to the right. Equivalently, this means the more inaccurate you are to one side, the more you end up aiming in that direction.

Why is this the case? If you want the full explanation, watch the following two videos:

The shorter explanation is as follows. As mentioned at the end of the first section of this blog post, players must consider their opponent’s capabilities as they develop their strategies. When you improve your accuracy to the right side, your opponent reacts by defending the right side more—he can no longer so strongly rely on your inaccuracy as a phantom defense. So if you start aiming more frequently to the right side, you end up with an over-correction—you are kicking too frequently toward a better defended side. Thus, you end up kicking more frequently to the left to account for the goalie wanting to dive right more frequently.
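The over-correction can be made precise in a simple model (my own sketch, not the paper’s exact specification). Suppose a shot to a side scores with that side’s accuracy when the goalie dives the wrong way, and with accuracy times some blocking factor k when he dives the right way. In equilibrium the striker’s mix must leave the goalie indifferent, and the factor k cancels out of that condition:

```python
def kick_right_prob(acc_left, acc_right):
    """Equilibrium probability of shooting right. It comes from the goalie's
    indifference condition p_left * acc_left = p_right * acc_right, so each
    side is used in proportion to the OTHER side's accuracy."""
    return acc_left / (acc_left + acc_right)

# Doubling the weak right side's accuracy LOWERS how often it gets used:
print(kick_right_prob(1.0, 0.3))  # ~0.77 with an inaccurate right side
print(kick_right_prob(1.0, 0.6))  # ~0.63 after improving the right side
```

Because the goalie shifts to defend the improved side, the striker’s equilibrium response is to exploit the now-less-defended left side more often.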

And that’s the game theory of penalty kicks.

Chimps Aren’t Better than Humans at Game Theory

(At least the evidence doesn’t match the claim.)

“Chimps Outsmart Humans When It Comes To Game Theory” has been making the social media rounds today. Unfortunately, this seems to be a case of social media run amok–the paper has some interesting results, but that interpretation is horribly off base.

Below, I will give four reasons why we shouldn’t conclude that chimps are better at game theory than humans. But first, let’s quickly review what happened. A bunch of chimps, Japanese students, and villagers played some basic, zero-sum, simultaneous move games like matching pennies. The mixed strategy algorithm derives equilibrium predictions of what each player should do under these scenarios. As it turns out, chimps played strategies closer to the equilibrium predictions. Therefore, the proposed conclusion is that chimps are better game theorists than humans.

So what’s wrong here? Well…

The Sample Size Is Lacking
Who participated in the study? Six chimps, thirteen female Japanese students, and twelve males from Guinea. We can’t generalize differences between these groups in a meaningful way without a larger sample size.
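To see how little six subjects can establish, here is a back-of-the-envelope 95% confidence interval for an observed proportion (a normal-approximation illustration, not a reanalysis of the paper’s data):

```python
import math

def ci_half_width(p_hat, n):
    """Half-width of a normal-approximation 95% CI for a proportion."""
    return 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)

print(round(ci_half_width(0.5, 6), 2))    # 0.4 -- nearly uninformative
print(round(ci_half_width(0.5, 100), 2))  # 0.1
```

With six subjects, an observed rate of 50% is statistically compatible with anything from roughly 10% to 90%, which is why group comparisons at this scale are so shaky.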

The Chimps Aren’t a Random Sample
From the study:

Six chimpanzees (Pan Troglodytes) at the Kyoto University Primate Research Institute voluntarily participated in the experiment. Each chimpanzee pair was a mother and her offspring…all six had previously participated in cognitive studies, including social tasks involving food and token sharing.

There are a couple of problems here. First, the pairs of chimps that played are related. It stands to reason that a mother who is good at these games would produce offspring who are also good at these games. So we really aren’t looking at six chimps so much as three. Ouch.

(It should be noted that the Japanese students aren’t really random either since they all come from the same university. However, this is true for many studies of this sort, so I’m going to overlook it.)

Second, these aren’t even your regular chimps. They have played plenty of games before!

Combined, this is like taking a group of University of Rochester Department of Political Science (URDPS) students and comparing their results to a group of random Californians. The URDPS group is “related” (they all go to the same school) and they all have plenty of experience playing games (at least three semesters’ worth). They would undoubtedly play more rationally than the random group from California. But you can’t use this to claim that New Yorkers play more rationally than Californians. Yet you are seeing the analogous claim being made.

They Aren’t Playing the Game the Researchers Are Testing Against
Only the researchers knew the game that the players were playing. In contrast, the players only knew their payoffs, not their opponents’. The mixed strategy algorithm only makes predictions about how players should play given that all facets of the game are common knowledge. That’s clearly not the case here.

Instead, the real game here is spending a number of iterations of the game trying to decipher what your opponent’s payoffs are and then figuring out how to strategize accordingly. It’s not clear how to interpret the results in this light, though it is interesting that the (small, biased sample of) chimps figured this out more quickly.

The Game Was Not Inter-species
If you want to say that chimps are better at these games than humans, you need to have chimps playing humans. You would then have them play some number of iterations and see who received more apples/yen by the end of the game. Instead, it was chimps versus chimps and humans against humans. With that data, you cannot claim one party is better than the other.

Nash Equilibrium Isn’t a Good Baseline
“Fine,” you might say in response to the last point, “but the chimps still played closer to the Nash equilibrium strategies than humans. Therefore, chimps are better game theorists than humans are.” That’s still not what we want to know, though. Who cares if the players were playing Nash? If I played this game tomorrow, would I play Nash? Yes—if I thought the other player was clever enough to do the same. If not, I would try to beat them.

This is a nuanced problem, so let’s look at an example. If you ask people about soccer penalty kicks, they will likely tell you that you should kick more frequently to your stronger side as it becomes more and more accurate. This is wrong: you should increase your reliance on your weaker side. Knowing this, if I played the role of the goalie, I would start diving to the kicker’s stronger side more frequently. The kicker would do poorly and I would do very well.

How would the study interpret this? It would say that we are both bad at game theory! But that’s not what’s going on here. We have one bad strategist and one sophisticated one. The interpretation of the study would get half of it right but completely blow the other half. Worse, a sophisticated goalie taking advantage of the kicker’s incompetence would outperform a goalie who played Nash instead.

Nash equilibrium is useful for many reasons; testing whether one species is better than another with it as a baseline is not one of them.

I’d have to go through the paper more closely than I have so far to give an overall impression of it. However, even without that, it is clear that the way social media is describing the results is very questionable.

Game Theory Is Really Counterintuitive

Every now and then, I hear someone say that game theory doesn’t tell us anything we don’t already know. In a sense, they are right—game theory is a methodology, so it’s not really telling us anything that our assumptions are not. However, I challenge someone to tell me that they would have believed most of the things below if we didn’t have formal modeling.

  • People often take aggressive postures that lead to mutually bad outcomes even though mutual cooperation is mutually preferable.
  • Even if everyone agrees that an outcome is everyone’s favorite, they might not get that outcome.
  • Sometimes having fewer options is better than having more options.
  • On a penalty kick, soccer players might wish to kick more frequently toward their weaker side as their weaker side becomes increasingly inaccurate.
  • In a duel, both gunslingers should shoot at the same time, even if one is a worse shot and would seem to benefit by walking closer to his target.
  • There’s a reason why gas stations are on the same corner and politicians adopt very similar platforms. And it’s the same reason.
  • Closing roads can improve everyone’s commute time.
  • Fewer witnesses to a crime might be preferable to more.
  • You should bid how much you value the good at stake in a second price auction.
  • If you pay the value you think something is worth, you are going to end up with a negative net profit.
  • Lighting money on fire is often profitable.
  • Going to college can be valuable even if college doesn’t teach you anything.
  • An animal might be better off jumping high in the air repeatedly than running away from a predator.
  • Knowing just slightly more about the value of your car than a potential buyer can make it impossible to sell it.
  • Nigerian email scammers should say they are from Nigeria even though just about everyone is familiar with the scam.
  • Everyone might mimic everyone else just because two people chose to do the same thing.
  • A biased media may be better than an unbiased media.
  • Every voting system is manipulable.
  • You might want to abstain from voting even though you strictly prefer one candidate to another.
  • Unanimous jury rulings are more likely to convict the innocent than simple majority rule if jurors vote intelligently.
  • The House of Representatives caters to the median member of the majority party, not the median member of the institution overall.
  • Plurality, first-past-the-post voting leads to two-party systems.
  • United Nations Security Council members sometimes do not veto resolutions even though they strongly dislike them.
  • Without the ability to propose offers, you receive very few benefits from bargaining.
  • Settlements always exist that are mutually preferable to war.
  • Fighting wars removes the need for war.
  • You might want to shoot to miss in war.
  • Nonproliferation agreements can be credible.
  • Weapons inspections are useful even if they never find anything.
  • Economic sanctions are useful even though they often fail in application.
  • Pitchers shouldn’t change their pitch selection with a runner on third base, even though curveballs are more likely to result in wild pitches.
  • Sports teams can benefit from a lack of player safety in contract negotiations. Source.
  • You shouldn’t try to maximize your score in Words with Friends/Scrabble. Source.
  • In speed sailing, competitors deliberately choose paths they believe will be slower. Source.
  • The first player wins in Connect Four. Checkers ends in a draw. Source.
  • Chess has a solution, though we don’t know it yet. Source. (Or maybe not.)
  • Warren Buffett was never going to pay $1 billion to the winner of the March Madness bracket challenge. Source.
  • Park Place is worthless in McDonald’s Monopoly. Source.
  • Losing pays. Source.
  • As drug tests become more accurate, they should be implemented less often. Source.

Am I missing anything?

Roger Craig’s Daily Double Strategy: Smart Play, Bad Luck

Jeopardy! is in the finals of its Battle of the Decades, with Brad Rutter, Ken Jennings, and Roger Craig squaring off. The players have ridiculous résumés. Brad is the all-time Jeopardy! king, having never lost to a human and racking up $3 million in the process. Ken has won more games than anyone else. And Roger has the single-day earnings record.

That sets the scene for the middle of Double Jeopardy. Roger had accumulated a modest lead through the course of play and hit a Daily Double. He then made the riskiest play possible–he wagered everything. The plan backfired, and he lost all of his money. He was in the negatives by the end of the round and had to sit out of Final Jeopardy.

Did we witness the dumbest play in the history of Jeopardy? I don’t think so–Roger’s play actually demonstrated quite a bit of savvy. Although Roger is a phenomenal player, Brad and Ken are leaps and bounds better than everyone else. (And Brad might be leaps and bounds better than Ken as well.) If Roger had made a safe wager, Brad and Ken would have likely eventually marched past his score as time went on–they are the best for a reason, after all. So safe wagers aren’t likely to win. Neither is wagering everything and getting it wrong. But wagering everything and getting it right would have given him a fighting chance. He just got unlucky.

All too often, weaker Jeopardy! players make all the safest plays in the world, doing everything they can to keep themselves from losing immediately. They are like football coaches who bring in the punting unit down 10 with five minutes left in the fourth. Yes, punting is safe. Yes, punting will keep you from outright losing in the next two minutes. But there is little difference between losing now and losing by the end of the game. If there is only one chance to win–to go for it on fourth down–you have to take it. And if there is only one way to beat Brad and Ken–to bet it all on a Daily Double and hope for the best–you have to make it a true Daily Double.
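The logic can be put in back-of-the-envelope numbers. All of the probabilities below are made up for illustration; the point is only that a risky play with a real chance of winning beats a safe play with almost none:

```python
# Hypothetical probabilities for Roger's situation (all assumed):
p_correct = 0.60       # chance he answers the Daily Double correctly
p_win_big_lead = 0.40  # chance a doubled lead holds up vs. Brad and Ken
p_win_safe = 0.10      # chance a modest lead survives their comeback

# All-in: a miss effectively eliminates him, so only the
# correct-answer branch contributes to his win probability.
p_all_in = p_correct * p_win_big_lead

print(f"safe wager wins with probability ~{p_win_safe:.2f}")
print(f"true Daily Double wins with probability ~{p_all_in:.2f}")
```

Under these assumptions the true Daily Double wins 24% of the time versus 10% for the safe wager, even though it also loses immediately far more often.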

Edit: Roger Craig pretty much explicitly said that this was the reason for his Daily Double strategy on the following night’s episode. Also, this “truncated punishment” mechanism has real-world consequences, such as the start of war.

Edit #2: Julia Collins is in the midst of an impressive run, having won 14 times (the third-most consecutive games of all time) and earned more money than any other woman in regular play. She is also fortunate that many of her opponents are doing very dumb things like betting $1000 on a Daily Double that desperately needs to be a true Daily Double. People did the same thing during Ken Jennings’ run, and it is mind-bogglingly painful to watch.

Interpret Your Cutpoints

Here is a bad research design I see way too frequently.* The author presents a model. The model shows that if sufficient amounts of x exist, then y follows. The author then provides a case study, showing that x existed and y occurred. Done.

Do you see the problem there? I removed “sufficient” as a qualifier for x from one sentence to the next. Unfortunately, by doing so, I have made the case study worthless. In fact, such case studies often undermine the exact point the author was trying to make with the model!

Let me illustrate my point with the following (intentionally ridiculous) example. Consider the standard bargaining model of war. State A and State B are in negotiations. If bargaining breaks down, A prevails militarily and takes the entire good the parties are bargaining over with probability p_A; B prevails with complementary probability, or 1 – p_A. War is costly, however; states pay respective costs c_A and c_B > 0.

That is the standard model. Now let me spice it up. One thing that the model does not consider is the cost of the stationery**, ink, and paper necessary to sign a peaceful agreement. Let’s call that cost s, and let’s suppose (without loss of generality) that state A necessarily pays the stationery costs.

Can the parties reach a peaceful agreement? Well, let x be A’s share of a peaceful settlement. A prefers a settlement if it pays more than war, or x – s > p_A – c_A. We can rewrite this as x > p_A – c_A + s.

Meanwhile, B prefers a settlement if the remainder pays better than war, or 1 – x > 1 – p_A – c_B. This reduces to x < p_A + c_B.

Stringing these inequalities together, mutually preferable peaceful settlements exist if p_A – c_A + s < x < p_A + c_B. In turn, such an x exists if s < c_A + c_B.
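The cutpoint is easy to check mechanically. A small sketch (the parameter values are arbitrary; the function just encodes the two inequalities above):

```python
def settlement_range(p_a, c_a, c_b, s):
    """Return the interval of A-shares x that both states prefer to
    war, or None if no such x exists.
    A accepts x if x - s > p_a - c_a; B accepts if 1 - x > 1 - p_a - c_b."""
    lo, hi = p_a - c_a + s, p_a + c_b
    return (lo, hi) if lo < hi else None

# Settlements exist if and only if s < c_a + c_b:
print(settlement_range(p_a=0.5, c_a=0.1, c_b=0.1, s=0.01))  # cheap pens: peace
print(settlement_range(p_a=0.5, c_a=0.1, c_b=0.1, s=0.30))  # s > c_a + c_b: None
```

The first call returns a nonempty bargaining range; the second, where the stationery cost exceeds the combined costs of war, returns None.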

Nice! I have found a new rationalist explanation for war! You see, if the cost of stationery exceeds the combined costs of war (that is, s > c_A + c_B), at least one state will always prefer war to peace. Thus, peace is unsustainable.

Of course, my argument is completely ridiculous–stationery does not cost that much. My theory remains valid; it just lacks empirical plausibility.

And, yet, formal theorists too often fail to substantively interpret their cutpoints in this way. That is, they do not ask if real-life parameters could ever sustain the conditions necessary to lead to the behavior described.

Instead, you will get case studies that look like the following:

I presented a model that shows that the costs of stationery can lead to war. In analyzing the historical record of World War I, it becomes clear that the stationery of the bargained resolution would have been very expensive, as the ballpoint pen had only been invented 25 years earlier and was still prohibitively costly. Thus, World War I started.

Completely ridiculous! And, in fact, the case study demonstrated the opposite of what the author had intended. That is, if you actually analyze the cutpoint, you will see that the cost of stationery was much lower than the costs of war, and thus the cost of stationery (at best) had a negligible causal connection to the conflict.

In sum, please, please interpret your cutpoints. Your model only provides useful insight if its parameters match what occurred in reality. It is not sufficient to say that cost existed; rather, you must show that the cost was sufficiently high (or low) compared to the other parameters of your model.

* This blog post is the result of presentations I observed at ISA and Midwest, though I have seen some published papers like this as well.

** I am resisting the urge to make this an infinite horizon model so I can solve for the stationary MPE of a stationery game.

The Game Theory of MPSA Elevators

TL;DR: The historic Palmer House Hilton elevators are terribly slow because of bad strategic design, not mechanical issues or overcrowding.

Midwest Political Science Association’s annual meeting–the largest gathering of political scientists–takes place at the historic* Palmer House Hilton each year. While the venue is nice, the elevator system is horrible. And with gatherings on the first eight floors, the routine gets old really fast.

Interestingly, though, the delays are not the result of an old elevator system or too many political scientists moving at once.** Rather, the problem is shoddy strategic thinking.

Each elevator bay has three walls. The elevators along each wall have different tasks. Here’s the first one:


Elevators on this wall go from the ground floor to the 12th floor.

Here’s the second:


These go from the ground floor to the eighth floor or the 18th floor to the 23rd floor.

And the last wall:


These go from the ground floor to the eighth floor and the 13th floor to the 17th.

Now suppose you are on the ground level and want to go to the 7th floor. What’s the fastest way to get there? For most elevator systems, you press a single button. The system figures out which elevator will most efficiently take you there and dispatches that elevator to the ground level.

But the historic Palmer House Hilton’s elevators are not a normal system. Each wall runs independently of the others, with three separate buttons to press. So if you really want to get to the seventh floor as fast as possible, you have to press all three–after all, you do not know which of the three systems will most quickly deliver an elevator to your position.

Unfortunately, this has a pernicious effect. Once the first elevator arrives, the calls to the other two systems do not cancel. Thus, they will both (eventually) send an elevator to that floor. Oftentimes, this means an elevator wastes a trip by going to the floor and picking no one up. In turn, people on other floors waiting for that elevator suffer some unnecessary delay.

This is why (1) the elevator system takes forever and (2) you often stop at various floors and pick up no one. We would all be better off if people limited their choice to a single system, but a political scientist running late to his or her next panel does not care about efficiency.
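The waste scales directly with the number of buttons pressed. A toy accounting sketch (the model is mine, not the hotel's: it assumes every press dispatches one car, calls are never cancelled, and only the first car to arrive picks the rider up):

```python
def elevator_traffic(riders, buttons_pressed):
    """Count total cars dispatched and empty trips under the
    assumption that each button press sends exactly one car and
    only the first arrival actually picks the rider up."""
    dispatched = riders * buttons_pressed
    empty_trips = riders * (buttons_pressed - 1)
    return dispatched, empty_trips

# 100 riders pressing all three walls vs. a single coordinated call:
print(elevator_traffic(100, buttons_pressed=3))  # -> (300, 200)
print(elevator_traffic(100, buttons_pressed=1))  # -> (100, 0)
```

Two wasted trips per rider means two-thirds of all elevator movement serves no one, which is exactly the delay everyone in the lobby experiences.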

(Let this sink in for a moment. The largest gathering of political scientists has yet to overcome a collective action problem that plagues it on an everyday basis.)

Given the floor restrictions for the elevators, the best solution I can think of would be to install an elevator system where you press the button of the floor you want outside the elevator, and the system chooses which to send from the three walls. This would be mildly inconvenient but would stop all the unnecessary elevator movements.


*Why is it historic? I have no clue. But everyone says it is.

**The latter undoubtedly contributes to the problem, however.

The Nefarious Reason to Draw on Jeopardy

Arthur Chu, current four-day champion on Jeopardy!, has made a lot of waves around the blogosphere with his unusual play style. (Among other things, he hunts all over the board for Daily Doubles, has wagered strange dollar amounts upon finding them, and clicks loudly when ringing in.) What has garnered the most attention, though, is his determination to play for the draw. On three occasions, Arthur has had the opportunity to bet enough to eliminate his opponent from the show. Each time, he has bet enough so that if his opponent wagers everything, he or she will draw with Arthur.

It is worth noting that draws aren’t the worst thing in Jeopardy. Unlike just about all other game shows, there is no sudden death mechanism. Instead, both players “win” and become co-champions, keeping the money accumulated from that game and coming back to play again the next day. There is no cost to you as the player; Jeopardy! foots the bill.

Why is Arthur doing this? The links provided above give two reasons. First, there have been instances where betting $1 more than enough to force a draw has resulted in the leader ultimately losing the game. Betting more than the absolute minimum necessary to ensure that you get to stay the next day thus has some risks. Second, if your opponents know that you will bet to draw, it induces them to wager all of their money. This is advantageous to the leader in case everyone gets the clue wrong.

That second point might be a little complicated, so an example might help. Suppose the leader had $20,000, second place had $15,000, and third place died in the middle of the taping. If the leader wagers $10,000, second place might sensibly wager $15,000 to force the draw if she thought she had a good chance of responding correctly. If only one is correct, that person wins. If they are both right, they draw. If both are wrong, second place goes bankrupt and the leader wins with $10,000.

Compare that to what happens if the leader wagers $10,001 (enough to guarantee victory with a correct response) and second place wagers $5,000. All outcomes remain the same except when both are wrong. Now the leader drops to $9,999 and the person trailing previously wins with $10,000.
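The comparison can be enumerated directly. A quick sketch using the example scores above ($20,000 for the leader, $15,000 for second place):

```python
def final_scores(leader_wager, second_wager, leader=20000, second=15000):
    """Enumerate the four Final Jeopardy outcomes: each player either
    answers right (+wager) or wrong (-wager)."""
    results = {}
    for l_right in (True, False):
        for s_right in (True, False):
            l = leader + (leader_wager if l_right else -leader_wager)
            s = second + (second_wager if s_right else -second_wager)
            results[(l_right, s_right)] = (l, s)
    return results

# Play-for-the-draw: leader bets 10,000, second bets everything.
print(final_scores(10000, 15000))
# Play-for-the-win: leader bets 10,001, second hedges with 5,000.
print(final_scores(10001, 5000))
```

In the draw scenario, the both-wrong outcome leaves the leader ahead at $10,000 to $0; in the win scenario, that same outcome flips the game to second place, $10,000 to $9,999, which is exactly the case the draw strategy insures against.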

Sure, these are good reasons to play to draw, but I think there is something more nefarious going on. Arthur knows he is better than the contestants he has been beating. One of the easiest ways to lose as Jeopardy! champion is to play a game against someone who is better than you. So why would you want to get rid of contestants that you are better than? Creating a co-champion means that the producers will draw one less person from the contestant pool for the next game, meaning there is one less chance you will play against someone better than you. This is nefarious because it looks nice–he is allowing second place to take home thousands and thousands of dollars more than they would be able to otherwise–but really he is saying “hey, you are bad at this game, so please keep playing with me!”

In addition, his alleged kindness might even be reciprocated one day. Perhaps someone he permits a draw to will one day have the lead going into Final Jeopardy. Do you think that contestant is going to play for the win or the draw? Well, if Arthur is going to keep that person on the gravy train for match after match, I suspect that person is going to give Arthur the opportunity to draw.

It’s nefarious. Arthur’s draws could spread like some sort of vile plague.