Park Place Is Still Worthless: The Game Theory of McDonald’s Monopoly

McDonald’s Monopoly begins again today. With that in mind, I thought I would update my explanation of the game theory behind the value of each piece, especially since my new book on bargaining connects the same mechanism to the De Beers diamond monopoly, star free agent athletes, and a shady business deal between Google and Apple. Here’s the post, mostly in its original form:

__________________________________

McDonald’s Monopoly is back. As always, if you collect Park Place and Boardwalk, you win a million dollars. I just got a Park Place. That’s worth about $500,000, right?

Actually, it is worth nothing. Not close to nothing, but absolutely, positively nothing.

It helps to know how McDonald's structures the game. Despite the apparent value of Park Place, McDonald's floods the market with Park Place pieces, probably to trick naive players into thinking they are close to riches. I do not have an exact number, but I would imagine there are easily tens of thousands of Park Places floating around. However, there are only one or two Boardwalks available. (Again, I do not know the exact number, but it is equal to the number of million dollar prizes McDonald's wants to give out.)

Even with that disparity, you might think Park Place maintains some value. Yet it is easy to show that this intuition is wrong. Imagine you have a Boardwalk piece and you corral two Park Place holders into a room. (This works if you gather thousands of them as well, but two is all you need.) You tell them that you are looking to buy a Park Place piece. Each of them must write their sell price on a piece of paper. You will complete the transaction at the lowest price. For example, if one person wrote $500,000 and the other wrote $400,000, you would buy it from the second at $400,000.

Assume that sell prices are continuous and weakly positive, and that ties are broken by coin flip. How much should you expect to pay?

The answer is $0.

The proof is extremely simple. It is clear that both bidding $0 is a Nash equilibrium. (Check out my textbook or watch my YouTube videos if you do not know what a Nash equilibrium is.) If either Park Place owner deviates to a positive amount, the deviator loses the sale to the other's bid of $0 and still earns nothing. So neither player can profitably deviate. Thus, both bidding $0 is a Nash equilibrium.

What if one bid $x greater than or equal to 0 and the other bid $y > x? Then the person bidding x could profitably deviate to any amount between x and y. He still sells the piece, but he receives more for it. Thus, this is a profitable deviation, and bids x and y are not an equilibrium.

The final case is when both players bid the same amount z > 0. With the coin-flip tiebreaker, each sells half the time and therefore earns z/2 in expectation. Either player can profitably deviate to 3z/4: he wins the sale outright, and 3z/4 is larger than the expected z/2.

This exhausts all possibilities. So both bidding $0 is the unique Nash equilibrium. Despite requiring another piece, your Boardwalk is worth a full million dollars.
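The case analysis above can also be seen dynamically. Here is a small sketch of my own (an illustration, not part of the formal proof) in which each seller repeatedly undercuts the other's winning bid by a cent; the winning price collapses to $0:

```python
# Best-response dynamics in the two-seller, lowest-price auction described
# above. Each round, the losing seller undercuts the current winner by one
# cent. Prices are in cents to avoid floating-point issues, and the starting
# bids ($5.00 and $4.00) are arbitrary.

def undercut_auction(bid_a_cents=500, bid_b_cents=400):
    """Return the sequence of winning prices as the sellers undercut each other."""
    bids = [bid_a_cents, bid_b_cents]
    history = []
    while min(bids) > 0:
        winner = 0 if bids[0] < bids[1] else 1
        history.append(bids[winner])
        # The losing seller's best response: undercut by one cent (floor at 0).
        bids[1 - winner] = max(bids[winner] - 1, 0)
    history.append(min(bids))  # the final undercut to $0 wins
    return history

prices = undercut_auction()
print("first winning prices (cents):", prices[:3])
print("final winning price (cents):", prices[-1])  # 0
```

However high the starting bids, the undercutting never stops until one seller reaches $0, which is exactly the unique equilibrium from the proof.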

What is going wrong for the Park Place holders? Supply simply outstrips demand. Any person with a Park Place but no Boardwalk walks away with nothing, which ultimately drives the price of Park Place down to nothing as well.

Moral of the story: Don’t get excited if you get a Park Place piece.

Note 1: If money is discrete down to the cent, then the winning bid could be $0 or $0.01. (With the right tie breaker, it could also be $0.02.) Either way, this is not good for owners of Park Place.

Note 2: In practice, we might see Park Place sell for some marginally higher value. That is because it is (slightly) costly for a Boardwalk owner to seek out and solicit bids from more Park Place holders. However, Park Place itself is not creating any value here—it’s purely the transaction cost.

Note 3: An enterprising Park Place owner could purchase all other Park Place pieces and destroy them. This would force the Boardwalk holder to split the million dollars. While that is feasible when there are only two Park Place owners, as in the example, good luck buying up every Park Place in reality. (Transaction costs strike again!)

__________________________________

Now time for an update. What might not have been clear in the original post is that McDonald’s Monopoly is a simple illustration of a matching problem. Whenever you have a situation with n individuals who need one of m partners, all of the economic benefits go to the partners if m < n. The logic is the same as above. If an individual does not obtain a partner, he receives no profit. This makes him desperate to partner with someone, even if it means drastically dropping his share of the money to be made. But then the underbidding process begins until the m partners are taking all of the revenues for themselves.

In the book, I have a more practical example involving star free agent athletes. For example, there is only one LeBron James. Every team would like to sign him to improve its chances of winning. Yet the bidding ultimately drives the final contract price so high that the team that signs him doesn't actually benefit much (or at all) from the deal.

Well, that’s how it would work if professional sports organizations were not scheming to stop this. The NBA in particular has a maximum salary. So even if LeBron James is worth $50 million per season, he won’t be paid that much. (The exact amount a player can earn is complicated.) This ensures that the team that signs him will benefit from the transaction but takes money away from James.

Non-sports businesses scheme in similar ways. More than 100 years ago, the De Beers diamond company realized that new mine discoveries would mean that diamond supply would soon outstrip demand. This would kill diamond prices. So De Beers began purchasing tons of mines to intentionally limit production and keep prices high. Similarly, Apple and Google once had an informal agreement not to poach each other's employees. Without the outside bidder, a superstar computer engineer could not bid his wage up to its fair market value. Of course, this is highly illegal. Employees filed a $9 billion antitrust lawsuit when they learned of this. The parties eventually settled the suit out of court for an undisclosed amount.

To sum up, matching is good for those in demand and bad for those in high supply. With that in mind, good luck finding that Boardwalk!

What Does Game Theory Say about Negotiating a Pay Raise?

A common question I get is what game theory tells us about negotiating a pay raise. Because I just published a book on bargaining, this is something I have been thinking about a lot recently. Fortunately, I can narrow the fundamentals to three simple points:

1) Virtually all of the work is done before you sit down at the table.
When you ask the average person how they negotiated their previous raise, you will commonly hear anecdotes about how that individual said some (allegedly) cunning things, (allegedly) outwitted his or her boss, and received a hefty pay hike. Drawing inferences from this is problematic for a number of reasons:

  1. Anecdotal “evidence” isn’t evidence.
  2. The reason for the raise might have been orthogonal to what was said.
  3. Worse, the raise might have come despite what was said.
  4. It assumes that the boss is more concerned about dazzling words than money, his own job performance, and institutional constraints.

The fourth point is especially concerning. Think about the people who control your salary. They did not get their job because they are easily persuaded by rehearsed speeches. No, they are there because they are good at making smart hiring decisions and keeping salaries low. Moreover, because this is their job, they engage in this sort of bargaining frequently. It would thus be very strange for someone like that to make such a rookie mistake.

So if you think you can just be clever at the bargaining table, you are going to have a bad time. Indeed, the bargaining table is not a game of chess. It should simply be a declaration of checkmate. The real work is building your bargaining leverage ahead of time.

2) Do not be afraid to reject offers and make counteroffers.
Imagine a world where only one negotiator had the ability to make an offer, while the other could only accept or reject that proposal. Accepting implements the deal; rejecting means that neither party enjoys the benefits of mutual cooperation. What portion of the economic benefits will the proposer take? And how much of the benefits will go to the receiver?

You might guess that the proposer has the advantage here. And you’d be right. What surprises most people, however, is the extent of the advantage: the proposer reaps virtually all of the benefits of the relationship, while the receiver is barely any better off than had the parties not struck a deal.

How do we know this? Game theory allows us to study this exact scenario rigorously. Indeed, the setup has a specific name: the ultimatum game. It shows that a party with the exclusive right to make proposals has all of the bargaining power.

 

That might seem like a big problem if you are the one receiving the offers. Fortunately, the problem is easy to solve in practice. Few real life bargaining situations expressly prohibit parties from making counteroffers. (As I discuss in the book, return of security deposits is one such exception, and we all know that turns out poorly for the renter—i.e., the receiver of the offer.) Even the ability to make a single counteroffer drastically increases an individual’s bargaining power. And if the parties could potentially bargain back and forth without end—called Rubinstein bargaining, perhaps the most realistic of proposal structures—bargaining equitably divides the benefits.
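To make the contrast concrete, here is a small sketch of the equilibrium shares in the two setups. The Rubinstein formula below is the standard alternating-offers solution; assuming a common discount factor delta for both bargainers is my simplification:

```python
# Equilibrium shares of a $1 surplus under two proposal structures:
# the one-shot ultimatum game vs. Rubinstein alternating offers.

def ultimatum_shares():
    """Proposer makes one take-it-or-leave-it offer. The receiver accepts
    anything over nothing, so the proposer keeps (essentially) everything."""
    return 1.0, 0.0

def rubinstein_shares(delta):
    """Unique subgame-perfect split with endless back-and-forth counteroffers
    and common discount factor delta: the first proposer gets 1/(1 + delta)."""
    proposer = 1 / (1 + delta)
    return proposer, 1 - proposer

print(ultimatum_shares())  # (1.0, 0.0)
for delta in (0.5, 0.9, 0.99):
    p, r = rubinstein_shares(delta)
    print(f"delta={delta}: proposer {p:.3f}, responder {r:.3f}")
```

As delta approaches 1 (very patient bargainers), the split approaches 50/50, which is the equitable division of benefits mentioned above.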

As the section header says, the lesson here is that you should not be afraid to reject low offers and propose a more favorable division. Yet people often fail to do this. This is especially common at the time of hire. After culling through all of the applications, a hiring manager might propose a wage. The new employee, deathly afraid of losing the position, meekly accepts.

Of course, the new employee is not fully appreciating the company’s incentives. By making the proposal, the company has signaled that the individual is the best available candidate. This inevitably gives him a little bit of wiggle room with his wage. He should exercise this leverage and push for a little more—especially because starting wage is often the point of departure for all future raise negotiations.

3) Increase your value to other companies.
Your company does not pay you a lot of money to be nice to you. It pays you because it has no other choice. Although many things can force a company's hand in this manner, competing offers are particularly important.

Imagine that your company values your work at $50 per hour. If you can only work for them, then due to the back-and-forth logic from above, we might imagine that your wage will land in the neighborhood of $25 per hour. However, suppose that a second company exists that is willing to pay you up to $40 per hour. Now how much will you make?

The answer is no less than $40 per hour. Why? Well, suppose not. If your current company is only paying you, say, $30 per hour, you could go to the other company and ask for a little bit more. They would happily pay it, since they value your work at up to $40 per hour. But, of course, your original company values you at up to $50 per hour. So it has an incentive to ultimately outbid the other company and keep you under its roof.
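Here is a sketch of that bidding war, using the numbers from the example ($50 and $40 valuations, a $30 starting wage). The one-dollar raise increment is my simplification:

```python
# Wage bidding war: your current employer values your work at $50/hour, an
# outside firm at $40/hour. Each firm tops the other's standing offer by $1
# for as long as doing so is still profitable for it.

def bidding_war(value_current=50, value_outside=40, start_wage=30, step=1):
    """Return (final wage, firm holding the final offer) once neither firm
    can profitably raise its offer."""
    wage, employer = start_wage, "current"
    while True:
        # The rival is whichever firm does NOT hold the standing offer.
        rival_value = value_outside if employer == "current" else value_current
        if wage + step > rival_value:  # rival can't profitably top the offer
            return wage, employer
        wage += step
        employer = "outside" if employer == "current" else "current"

wage, employer = bidding_war()
print(f"final wage: ${wage}/hour, paid by your {employer} company")
```

The wage stops at $40, held by your current company: the outside firm drops out once topping the offer would exceed what your work is worth to it, and your original employer, who values you more, retains you.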

(This same mechanism means that Park Place is worthless in McDonald's Monopoly.)

Game theorists call such alternatives “outside options”; the better your outside options are, the more attractive the offers your bargaining partner has to make to keep you around. Consequently, being attractive to other companies can get you a raise with your current company even if you have no serious intention to leave. Rather, you can diplomatically point out to your boss that a person with your particular skill set typically makes $X per year and that your wage should be commensurate with that amount. Your boss will see this as a thinly veiled threat that you might leave the company. Still, if the company values your work, she will have no choice but to bump you to that level. And if she doesn’t…well, you are valuable to other companies, so you can go make that amount of money elsewhere.

Conclusion
Bargaining can be a scary process. Unfortunately, this fear blinds us to some of the critical facets of the process. Negotiations are strategic; thinking only about your own worries and concerns means ignoring your employer's. Yet you can use those opposing worries and concerns to extract a better deal for yourself. Employers do not hold all of the power. Once you realize this, you can take advantage of the opposing side's weaknesses at the bargaining table.

I talk about all of these issues at greater length in my book, Game Theory 101: Bargaining. I also cover a bunch of real-world applications of these and a whole bunch of other theories. If this stuff seems interesting to you, you should check it out!

Book Review: Naked Economics

Book: Naked Economics by Charles Wheelan
Five stars out of five.

A few months ago, I wrote a post on how game theory has led to a variety of counterintuitive results. People apparently find that kind of thing extremely interesting—that post accounts for about a quarter of all traffic in this website's six-year history. At some point, I am going to write a book on the subject. I'm still probably a couple of years away from actually doing that. But to prepare, I've made a list of pop-economics books to read through to get an understanding of what makes them tick and why they were so successful. Naked Economics is the first on my list.

Why Naked Economics? Purely by chance, I saw a thread on Reddit a couple months ago about the nefarious reason that stores often offer you a free meal if you do not receive a receipt with your purchase. Do you think the store owner is being generous and trying to make sure you receive the best possible service? Hell no. They are worried that the cashier is going to pocket the cash. The offer effectively employs the customer as an extra pair of watchful eyes. This deters the cashier from stealing. The owner has successfully retained his rightful share of the money, and it didn’t cost him a dime.

And that Reddit post? It was a picture of the page from Naked Economics explaining this. I immediately put the book in my queue.

Yes, my queue. I borrowed the book from the University of Rochester’s library. Someone already had it on loan, so I had to recall it. Soon after I checked it out, it was recalled again. It’s apparently that popular. I’m now stuck writing this review without actually having the book on me, but I digress.

Anyway, Naked Economics is a layman's introduction to micro- and macroeconomics. There is no math. That's a good thing because it promotes greater understanding among a wider audience. It's a bad thing because it will lead people who don't understand economics to falsely believe they do. To wit, one of the top reviewer comments on Amazon as I write this says that the reviewer uses it as the textbook for his economics class. That's pure silliness. This is not not NOT a textbook. At all.

Rather, Naked Economics is an infomercial for why people should study economics. It contains insightful analysis of critical social, political, and economic phenomena from recent times. Why is mackerel used as currency in some prisons? In the book. Why did our economy melt down in 2008? In the book. Why are insurance markets such a problem? In the book. Why is dirty money (that is, physically unclean money) not worth anything in India? In the book. Why is it hard for developing countries to retain intelligent workers? In the book.

So if you like understanding why the social world works the way it does, you can’t ask for a better start than Naked Economics. That’s why I’m giving it five stars. But please don’t read this book and think that you know economics as a result.

Fun with Incentives: Baseball Contracts Edition

Continuing in the long line of "why do people structure these things in such a crazy way" posts, we have the sad story of Phil Hughes. Hughes is a pitcher for the Minnesota Twins. Like many other players, Hughes has a contract with specific benchmarks that trigger bonuses. One in particular gives him $500,000 if he pitches 210 innings this year.

209 2/3 innings? Worthless! Who needs someone who pitches 209 2/3 innings?

But 210 innings? Yep! Definitely worth a half million dollars.

You can see where this is going. The Twins were rained out on Friday. He pitched in a doubleheader today. However, this pushes his next start back a day, his start after that by another day, and so forth. Due to some unfortunate timing, this will ultimately mean he will (probably) end up with one fewer start than he otherwise would. Extrapolating from a reasonable expectation of innings per start, losing this one start will likely mean he will not reach the 210-inning threshold and thus not receive the $500,000 bonus.

For completeness, this post might all be for nothing. If Hughes averages 7 2/3 innings per start for the remainder of the season, he will reach 210 innings and the point will be moot. But it seems doubtful that this will happen, for two reasons. First, the Twins have him under contract for two more years; with the team eliminated from playoff contention, it makes little sense to stretch him out when a younger pitcher in greater need of MLB experience could get those innings. Second, if you were the owner of the team and could reasonably limit his innings for the rest of the season, why wouldn't you save yourself a half million dollars?

So why oh why are contracts structured in this way? I don't have a good answer. It would be exceedingly easy to structure contracts so that the incentive pays a pitcher a fixed amount per inning. This ensures that teams will use pitchers for the number of innings that is economically worthwhile and removes the incentive-twisting discontinuity between 209 2/3 innings and 210 innings.[1] Transaction costs could conceivably force actors to accept these discontinuities, but that does not seem to be a problem here. Instead, agents and players seemingly accept these contractual terms despite the obvious conflicts of interest they create.

[1] To be fair, the contract has something like this built in. Hughes receives quarter-million-dollar bonuses at 180 innings and 195 innings. But there is still no good reason to create these discontinuities.
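The cliff is easy to see in code. Here is a minimal sketch comparing the threshold bonus from Hughes' contract with a hypothetical per-inning alternative paying out the same $500,000 over 210 innings (the per-inning rate is my invention, not a real contract term):

```python
# Threshold bonus (the actual contract structure) vs. a hypothetical smooth
# per-inning bonus with the same total payout at 210 innings.

def threshold_bonus(innings, cutoff=210, bonus=500_000):
    """All-or-nothing benchmark bonus, as in Hughes' contract."""
    return bonus if innings >= cutoff else 0

def per_inning_bonus(innings, rate=500_000 / 210):
    """Same payout at 210 innings, but no discontinuity along the way."""
    return innings * rate

for ip in (209 + 2/3, 210):
    print(f"{ip:7.2f} IP: threshold ${threshold_bonus(ip):>9,.0f}, "
          f"per-inning ${per_inning_bonus(ip):>9,.0f}")
```

Under the threshold structure, inning number 210 is worth $500,000 by itself and every prior inning is worth nothing; under the per-inning structure, no single inning is worth fighting the team over.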

How to Remove Beamer Navigation Buttons

TL;DR: Put \setbeamertemplate{navigation symbols}{} in your preamble.

Presentation slides should be minimalist—the more the viewer has to scan, the more time he will take looking at the slide, and the less time he will spend actually listening to you. Minimalism is learned, and it is something I still struggle with. I’m getting better, but I can still improve.

Today, though, I’m taking a simple step to simplify the rest of my slides forever: I’m removing Beamer’s unnecessary navigation buttons.

What navigation buttons? These navigation buttons:

[Image: a Beamer slide with navigation symbols in the bottom-right corner]

You have almost certainly seen these before. In fact, there is a chance you put them into your Beamer slides without actually knowing what they do. (I spent a good 18 months using Beamer without ever experimenting with them.) The buttons allow you to navigate between slides, subsections, and sections of your presentation.

For my money, these buttons aren't particularly useful. Most people use clickers for presentations, which rules out the buttons entirely. Even if you are working from the laptop, you can navigate slides with the left and right arrow keys. Meanwhile, jumping between subsections or sections is usually too disorienting to work efficiently.

Indeed, I have seen someone click navigation buttons during a presentation exactly once—and that was only because the person evidently did not know you could (more efficiently) use the right key instead.

So, in sum, I hate navigation buttons. If you also never use them, then they have no reason to be in the slides. They are just taking up room for no reason.

Fortunately, the fix is simple. In your preamble (or immediately below your \begin{document} command), simply add the following line of code:

\setbeamertemplate{navigation symbols}{}

Now your slides will look like this:

[Image: the same slide with the navigation symbols removed]

Much cleaner! Thus, unless I rediscover the navigation buttons as being extremely handy, I’m taking them out of all my future presentations.

And if “Arms Treaties and the Credibility of Preventive War” sounds too scintillating to ignore, you can see the full presentation here and read the paper here.

Welcome!

I am a political scientist who studies war, nuclear proliferation, and terrorism (mostly) using formal models. Currently, I am an associate professor in the University of Pittsburgh’s Department of Political Science. Before that, I was a Stanton Nuclear Security Postdoctoral Fellow at Stanford’s Center for International Security and Cooperation. I received a PhD from the University of Rochester in 2015.

If you want to know more, you can check my CV page.

My APSA 2014 Presentation: Policy Bargaining and International Conflict

If you are looking for something to do on Friday from 10:15 to noon, head over to the Marriott Jefferson room to see my presentation on Ideology Matters: Policy Bargaining and International Conflict. It is based on a joint project with Peter Bils. Here is the abstract:

Studies of bargaining and war generally focus on two sources of incomplete information: uncertainty about the probability of victory and uncertainty about the costs of fighting. We introduce a third: uncertainty about ideological preferences over a spatial policy. Under these conditions, standard results from the bargaining model of war break down: peace can be inefficient and it may be impossible to avoid war. We then extend the model to allow for pre-play cheap talk communication. Whereas incentives to misrepresent normally render cheap talk irrelevant, here communication can cause peace and ensure that agreements are efficient. Moreover, peace can become more likely when the proposer becomes more uncertain about the opposing state. Our results indicate that one major purpose of diplomacy during a crisis is simply to communicate preferences and that such communications can be credible.

If you can’t make it, you can download the paper here, view the slides here, or watch the presentation below:

Multi-Method Research: The Case for Formal Theory

Hein Goemans and I have collaborated on a new research note on formal theory and case studies. Here’s the abstract:

We argue that formal theory and historical case studies, in particular those that use process-tracing, are extremely well-suited companions in multi-method research. To bolster future research employing both case studies and formal theory, we suggest some best practices as well as some (common) pitfalls to avoid.

Since the research note is short by nature, I won't spend too much extra space discussing it here. You'd be better off skimming or reading the note itself. In essence, though, we argue that formal theory and case studies are natural methodological allies. We also advocate for seriously incorporating a model's cutpoints into the informal analysis. Manuscripts that combine formal theory with case studies too often spend considerable time developing the model only to ignore it when they begin discussing substance. They should be tied together.

Also, and something that I stress heavily in my book project on nuclear proliferation, we must be very careful in how we interpret those cutpoints. For example, a common fallacy takes the following form: the model says w occurs if x > y + z. The case study then goes to great lengths to prove that y was close to 0 or negative, therefore w should occur. This overlooks the values of x and z, however. Even with y equal to 0, the inequality could still fail: if x = 1 and z = 2, for instance, x > y + z does not hold. Put differently, and with certain notable exceptions detailed in the research note, we must think about the cutpoints holistically.

Again, you can read the full note here.

Mario Kart 8’s Most Popular Tracks

Mario Kart 8 has consumed most of my entertainment hours since it came out a couple of months ago. Its online play is great. When you queue, the game randomly gives you three (of thirty-two possible) tracks to pick from, or you can select random if none are to your liking. Social scientist that I am, I saw an obvious data collecting opportunity. So the last few weeks, I have painstakingly charted every single choice I have observed. This allowed me to create a rough ranking system of all the tracks in the game. Which track do people like the most? The least? Check below:

[Image: chart of all thirty-two tracks ranked by selection percentage, divided into five tiers]

The numbers reflect the percentage of the time I observed players picking any given track, not the track the game randomly selected from those ballots. For example, over the many, many times Sunshine Airport randomly popped up in the queue, players selected it 48.3% of the time. The tiers simply cut the data into a top bucket of four and four other buckets of seven.

There are a number of important caveats to the image, so please read what follows before boldly declaring that Bone-Dry Dunes is the worst thing Nintendo has ever created.

  • I don’t claim that this is the be-all, end-all to Mario Kart track popularity. Rather, without any other metrics to rank the courses, I think that this is a useful first-cut at the question.
  • While I gathered a lot of data to do this, I am only one man. The number of potential picks ranges from 113 for Water Park to 258 for Bone Dry Dunes. We should expect such randomness from the queue selection system. However, it also means that some of these percentages are more secure than others. I plan on continuing to collect data over time.
  • Be careful about making pairwise comparisons. Based on what I have, it is reasonable to conclude that players prefer GBA Mario Circuit (41.8%) to Electrodome (27.7%), but it is not reasonable to conclude that players prefer Electrodome to Mario Kart Stadium (27.5%).
  • With people duo queuing, I included both votes. I can see why people might think this should only count as one, but the choice from a duo queue (in theory) reflects the preferences of two people. So I count it twice. It would be very difficult to count them as one vote anyway; I would have to keep tabs on who is submitting at the same time, which is difficult when I am trying to count so many things at once.
  • I collected the data as I rose from a 2000 to a 3100 rating. So if you believe that this group's preferences differ from another's, this image may not reflect the group you care about.
  • I did not count my votes. We want a measure of what people like the most, not what I like the most.
  • I excluded “forced” votes that occur if players take more than the allotted time to make a selection. These votes are pure noise anyway.
  • An active vote for random counts as a vote against everything else. For example, suppose the choices were Yoshi Valley, Royal Raceway, and Music Park. Three players select Yoshi Valley and one picks random. Then Yoshi Valley has received three of the four votes and the other two tracks have received none. In other words, the “random” vote doesn't magically disappear from the denominator in the data tabulation.
  • I only played worldwide games.
  • These were all races. No battles.
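To put the sample-size caveat above in perspective, here is a rough sketch of how the precision of these percentages scales with the number of ballots. The 30% share is purely illustrative; 113 and 258 are the smallest and largest ballot counts mentioned in the caveats:

```python
# Standard error of a sample proportion: sqrt(p * (1 - p) / n). Fewer
# ballots means a noisier estimate of a track's true popularity.
import math

def proportion_se(p, n):
    """Standard error of an estimated proportion p from n observations."""
    return math.sqrt(p * (1 - p) / n)

# Hypothetical 30% selection share at the two extremes of the ballot counts.
for name, n in (("Water Park", 113), ("Bone Dry Dunes", 258)):
    se = proportion_se(0.30, n)
    print(f"{name} (n={n}): 30% +/- {1.96 * se:.1%} at 95% confidence")
```

With 113 ballots the 95% margin of error is roughly eight percentage points, versus under six with 258, which is exactly why pairwise comparisons between nearby tracks are dicey.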

And now for a little bit of analysis:

  • I did some fancy statistical tests to see if a variety of track qualities (length, difficulty, newness) determine player preferences. All of the results were null. So whatever is driving these votes is highly idiosyncratic.
  • The new Rainbow Road was very disappointing. It was the last track I played when I went through the game for the first time. I was very excited until all I found was boring turn after boring turn.
  • Some might also describe the original N64 Rainbow Road as boring turn after turn, but it seems that Nintendo made a smart decision to turn the course into a straight-shot and not a five lap race.
  • I question Nintendo’s wisdom in putting Music Park, Grumble Volcano, Sherbet Land, and Dry Dry Desert in the game. What’s the point of having classic tracks if no one wants to play them?
  • To be fair, perhaps players actually wanted to see these tracks and just failed in the execution. But that still doesn’t explain why you would put Grumble Volcano back in the game. Its main course feature is that lava randomly shoots up and kills you for no good reason. I understand Mario Kart is full of randomness, but let that come from interactive item blocks and not from the computer.
  • I feel really bad for whoever designed Bone-Dry Dunes.

See you in the queues.

Update: With eight new tracks coming out this week, I decided to update the data one last time. Here’s where we are today:

[Image: updated track popularity rankings]

I’ll probably run the data once again after the new tracks have been out for a couple months.

Kindle Unlimited and the Economics of Bundling

Today, Amazon announced Kindle Unlimited, a subscription service for $9.99 per month that gives buyers all-you-can-read access to more than 600,000 books. And it took, oh, five minutes before someone called this the death of publishing.

Calm down. This isn’t the end of publishing—it is a natural extension of market forces and is potentially good for everyone.

Amazon is taking advantage of the economics of bundling—selling multiple products together at a single price, regardless of how much the consumer uses each component. Bundles are all over the place: cable TV, Netflix, Spotify, and Microsoft Office are all examples. These business models are pervasive because they work: they bring in a lot of money for their providers, and they leave consumers better off as well.

Wait, what!? How is it possible that both providers and consumers are better off by bundling? A while back, I too believed that this was insane and that bundles were a scam to get me to pay more money than I wanted to. (Why should I pay $1 for Home and Gardening when all I want is ESPN?) But then I read up on bundling and understood my folly.

An example will clarify things (and potentially amaze you, as it did me not too long ago). As usual, I will keep things simple to illustrate the fundamental logic without getting bogged down in unnecessarily complicated math. Imagine a world with only two books available for purchase: Hunger Games and Game Theory 101.

Further, let’s assume that there are only two customers in the world. Let’s call them Albert and Barbara. Albert and Barbara have different tastes in books. Albert prefers Hunger Games to Game Theory 101; he would pay at most $4.99 to read Hunger Games but only $1.50 at most for Game Theory 101. Barbara has the opposite preference; she would pay at most $2.25 to read Hunger Games and $3.99 to read Game Theory 101. You might find the following graphical representation more digestible:

            Hunger Games    Game Theory 101
Albert          $4.99            $1.50
Barbara         $2.25            $3.99

Finally, assume that the marginal cost of each book is $0.00. That is, once the book has been written, it costs $0.00 to distribute each book. This is a bit of an exaggeration, but it is close to reality for electronic books. However, it is definitely not true for physical books (printing, shipping, etc.). This distinction will be important later.

With all those preliminaries out of the way, consider how a seller should price these books in a world without bundling. There are two options. First, you can set a low price and capture the entire market. Second, you can set a high price; the book will sell fewer copies but make more money per unit.

Let’s apply that decision to Hunger Games. Selling at the low price means a price of $2.25, so that both Albert and Barbara purchase it. (This is because Barbara’s maximum price for it is $2.25.) That brings in $4.50 of revenue. Alternatively, you could sell at a high price of $4.99. This ensures that only Albert will buy. But it also brings in $4.99 in revenue, which is more than if you had set the low price. So you would sell Hunger Games for $4.99.

Now consider the price for Game Theory 101. Selling at the low price means a price of $1.50, so that both Albert and Barbara purchase it. (This is because Albert’s maximum price for it is $1.50.) That brings in $3.00 of revenue. Alternatively, you could sell at a price of $3.99. Only Barbara would buy it at this price. But it also nets $3.99 in revenue, which is more than if you had set the low price. So you would sell Game Theory 101 for $3.99. (Not coincidentally, if you click on the books above, you will find that they are priced like that in real life.)

Let’s recap the world without bundling. Hunger Games costs $4.99 and Game Theory 101 costs $3.99. The seller brings in $8.98 in revenue. Neither Albert nor Barbara benefits from this arrangement; Albert is paying $4.99 for a book that he values at exactly $4.99, while Barbara is paying $3.99 for a book she values at exactly $3.99.
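The a la carte pricing logic above can be sketched in a few lines of Python. The valuations are the hypothetical figures from the example; the only candidate prices worth checking are the buyers' maximum valuations, since any other price either loses a buyer or leaves money on the table:

```python
# A la carte pricing: for each book, try each buyer's valuation as the
# price; revenue is price times the number of buyers willing to pay it.
valuations = {
    "Hunger Games":    {"Albert": 4.99, "Barbara": 2.25},
    "Game Theory 101": {"Albert": 1.50, "Barbara": 3.99},
}

def best_price(book_values):
    """Return the (price, revenue) pair that maximizes revenue for one book."""
    return max(
        ((p, p * sum(v >= p for v in book_values.values()))
         for p in book_values.values()),
        key=lambda pair: pair[1],
    )

for book, values in valuations.items():
    price, revenue = best_price(values)
    print(f"{book}: price ${price:.2f}, revenue ${revenue:.2f}")
```

For both books, the high price beats the low one ($4.99 beats $4.50, and $3.99 beats $3.00), matching the reasoning above.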

Now for the magic of bundling. Suppose the seller bundles both books together for $5.99. Who is willing to buy? Albert values Hunger Games and Game Theory 101 at $4.99 and $1.50 respectively. Thus, he is willing to pay up to $6.49 for the pair. So he will definitely purchase the bundle for $5.99. In fact, he’s happier than he was before because he internalizes a net gain of $0.50, whereas he had no gain before.

What about Barbara? She was willing to pay respective prices of $2.25 and $3.99. Consequently, she is willing to pay up to $6.24 for the pair. So she will also definitely purchase the bundle for $5.99. And similar to Albert, she is internalizing a net gain of $0.25, up from no gain before.

So Albert and Barbara both win. But so do the producers; rather than bringing in a total of $8.98, they now earn $11.98. Every. Body. Wins. (!)
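Under the same assumed valuations, the bundle arithmetic checks out in code: a buyer takes the bundle whenever the sum of their individual valuations covers the bundle price.

```python
# Bundle pricing: a buyer purchases the bundle if their combined
# valuation of its contents is at least the bundle price.
valuations = {
    "Albert":  {"Hunger Games": 4.99, "Game Theory 101": 1.50},
    "Barbara": {"Hunger Games": 2.25, "Game Theory 101": 3.99},
}
bundle_price = 5.99

buyers = [name for name, vals in valuations.items()
          if sum(vals.values()) >= bundle_price]
revenue = bundle_price * len(buyers)
surplus = {name: sum(valuations[name].values()) - bundle_price
           for name in buyers}
# Both reservation prices (6.49 and 6.24) exceed 5.99, so both buy:
# revenue is 11.98, with surpluses of roughly 0.50 and 0.25.
```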

(Yes, I know that Kindle Unlimited costs $9.99 per month. If we added another book to this puzzle, we could get Albert and Barbara to want to pay that price. But that would require more math, and we don’t want more math.)

Why does this work? Bundling has two keys. First, as previewed earlier, the marginal cost of the products must be very small. If marginal costs were larger, distributing more copies would look comparatively less attractive; they would drive up the price of the bundle and make it less appealing to consumers, perhaps leading them to prefer a la carte pricing. That helps explain why book bundling is only now catching on: electronic books only cost server space, whereas physical copies involve UPS.

Second, it helps when customer preferences are negatively correlated. This pushes everyone’s reservation price for the bundle closer together, which in turn makes the producer more likely to want to sell at the bundled price.
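A quick, hypothetical way to see the role of correlation: rearrange the same four valuations from the example so that tastes are positively rather than negatively correlated. Total willingness to pay is unchanged, but the reservation prices spread apart, and the best single bundle price captures less of it.

```python
# The optimal single bundle price is one of the buyers' reservation
# prices; revenue is that price times the buyers willing to pay it.
def best_bundle_revenue(reservations):
    return max(p * sum(r >= p for r in reservations) for p in reservations)

# Negatively correlated tastes (the example above): 4.99+1.50, 2.25+3.99
negative = [6.49, 6.24]
# Positively correlated rearrangement: 4.99+3.99, 2.25+1.50
positive = [8.98, 3.75]

best_bundle_revenue(negative)  # 12.48: price at 6.24 and both buy
best_bundle_revenue(positive)  # 8.98: price at 8.98 and only one buys
```

Same total value on the table in both worlds ($12.73), but the seller extracts far more of it when the reservation prices are bunched together.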

Before wrapping up, note that bundling has an important secondary effect for authors. The main takeaway here is that producers can make more money through bundling. This gives authors more incentive to create additional material; an author who would otherwise make only $10,000 from a novel could now make, say, $15,000 instead. So an author on the fence about whether to write the book is more likely to follow through. This further enhances consumer welfare, because those buyers can now read a book that would otherwise not exist.

Finally, “producers” here has meant a combination of authors and Amazon. A skeptic might worry that Amazon will end up taking away all of the revenues. That may be an issue in the long run if Amazon becomes a monopoly, but the revenue share is more than fair for now. Indeed, Amazon is giving authors roughly $2 every time a Kindle Unlimited subscriber reads 10% of a book, which is substantial. And with Kindle Unlimited reaching more consumers than a la carte pricing would, writers can earn revenue from a larger share of readers.

If you want to know more about bundling, I highly recommend you read the Marginal Revolution post on the subject.