Welcome!

Graduated from the University of California, San Diego in 2009. Currently a PhD candidate at the University of Rochester.

Interested in formal models of inter-state conflict.

You may know me from my textbook (Game Theory 101: The Complete Textbook), my other website (gametheory101.com), my game theory videos on YouTube, or Freakonomics.

Email me at williamspaniel@gmail.com or follow me on Twitter @gametheory101.

Kindle Unlimited and the Economics of Bundling

Today, Amazon announced Kindle Unlimited, a subscription service for $9.99 per month that gives buyers all-you-can-read access to more than 600,000 books. And it took, oh, five minutes before someone called this the death of publishing.

Calm down. This isn’t the end of publishing—it is a natural extension of market forces and is potentially good for everyone.

Amazon is taking advantage of the economics of bundling—selling multiple products together for a single price, regardless of how much the consumer uses each component. Bundles are all over the place; cable TV, Netflix, Spotify, and Microsoft Office are all examples of bundles. These business models are pervasive because they work: they bring in a lot of money for their providers, and they leave consumers better off as well.

Wait, what!? How is it possible that both providers and consumers are better off by bundling? A while back, I too believed that this was insane and that bundles were a scam to get me to pay more money than I wanted to. (Why should I pay $1 for Home and Gardening when all I want is ESPN?) But then I read up on bundling and understood my folly.

An example will clarify things (and potentially amaze you, as it did for me not too long ago). As usual, I will keep things simple to illustrate the fundamental logic without getting us bogged down in unnecessarily complicated math. Imagine a world with only two books available for purchase:

Further, let’s assume that there are only two customers in the world. Let’s call them Albert and Barbara. Albert and Barbara have different tastes in books. Albert prefers Hunger Games to Game Theory 101; he would pay at most $4.99 to read Hunger Games but only $1.50 at most for Game Theory 101. Barbara has the opposite preference; she would pay at most $2.25 to read Hunger Games and $3.99 to read Game Theory 101. You might find the following graphical representation more digestible:

[Image: the two books for sale, The Hunger Games and Game Theory 101]

Finally, assume that the marginal cost of each book is $0.00. That is, once the book has been written, it costs $0.00 to distribute each book. This is a bit of an exaggeration, but it is close to reality for electronic books. However, it is definitely not true for physical books (printing, shipping, etc.). This distinction will be important later.

With all those preliminaries out of the way, consider how a seller should price those books in a world without bundling. There are two options. First, you can set a low price to capture the entire market. Second, you can set a high price; the book will sell fewer copies but make more money per unit.

Let’s apply that decision to Hunger Games. Capturing the whole market means a price of $2.25, so that both Albert and Barbara purchase it. (This is because Barbara’s maximum price for it is $2.25.) That brings in $4.50 of revenue. Alternatively, you could sell at a high price of $4.99. This ensures that only Albert will buy. But it also brings in $4.99 in revenue, which is more than if you had set the low price. So you would sell Hunger Games for $4.99.

Now consider the price for Game Theory 101. Capturing the whole market means a price of $1.50, so that both Albert and Barbara purchase it. (This is because Albert’s maximum price for it is $1.50.) That brings in $3.00 of revenue. Alternatively, you could sell at a high price of $3.99. Only Barbara would buy it at this price. But it also nets $3.99 in revenue, which is more than if you had set the low price. So you would sell Game Theory 101 for $3.99. (Not coincidentally, if you click on the books above, you will find that they are priced like that in real life.)

Let’s recap the world without bundling. Hunger Games costs $4.99 and Game Theory 101 costs $3.99. The seller brings in $7.98 in revenue. Neither Albert nor Barbara benefits from this arrangement; Albert is paying $4.99 for a book that he values at $4.99, while Barbara is paying $3.99 for a book she values at $3.99.

Now for the magic of bundling. Suppose the seller bundles both books together for $5.99. Who is willing to buy here? Albert values Hunger Games and Game Theory 101 at $4.99 and $1.50 respectively. Thus, he is willing to pay up to $6.49 for the pair. So he will definitely purchase the bundle for $5.99. In fact, he’s much happier than he was before because he internalizes a net gain of $0.50, whereas he had no gain before.

What about Barbara? She was willing to pay respective prices of $2.25 and $3.99. Consequently, she is willing to pay up to $6.24 for the pair. So she will also definitely purchase the bundle for $5.99. And similar to Albert, she is internalizing a net gain of $0.25, up from no gain before.

So Albert and Barbara both win. But so do the producers—rather than bringing in a total of $7.98, the producers now earn $11.98. Every. Body. Wins. (!)
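For anyone who wants the arithmetic all in one place, here is a minimal Python sketch that just re-runs the numbers above. The valuations and prices come straight from the example; the function and variable names are mine.

```python
# Re-running the arithmetic from the example; marginal cost is assumed to be zero.
valuations = {
    "Albert":  {"Hunger Games": 4.99, "Game Theory 101": 1.50},
    "Barbara": {"Hunger Games": 2.25, "Game Theory 101": 3.99},
}

def revenue(price, buyer_values):
    """Everyone whose valuation is at least the price buys one copy."""
    return price * sum(v >= price for v in buyer_values)

# A la carte, at the prices chosen above.
for book, price in [("Hunger Games", 4.99), ("Game Theory 101", 3.99)]:
    values = [v[book] for v in valuations.values()]
    print(f"{book} at ${price:.2f}: revenue ${revenue(price, values):.2f}")
# Hunger Games at $4.99: revenue $4.99
# Game Theory 101 at $3.99: revenue $3.99

# The bundle at $5.99: each buyer compares the price to the sum of their valuations.
bundle_price = 5.99
for name, v in valuations.items():
    print(f"{name} buys the bundle with ${sum(v.values()) - bundle_price:.2f} to spare")
print(f"Bundle revenue: ${bundle_price * len(valuations):.2f}")   # $11.98 vs. $7.98 a la carte
```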

(Yes, I know that Kindle Unlimited costs $9.99 per month. If we added another book to this puzzle, we could get Albert and Barbara to want to pay that price. But that would require more math, and we don’t want more math.)

Why does this work? Bundling has two keys. First, as previewed earlier, the marginal cost of the products must be very small. If those costs were larger, distributing more goods would look comparatively less attractive. This would drive up the price of the bundle and make it less attractive for consumers, perhaps pushing them back to a la carte pricing. That helps explain why book bundling is just now catching on; electronic books only cost server space, whereas physical copies involve UPS.

Second, it helps when customer preferences are negatively correlated. This pushes everyone’s reservation price for the bundle closer together, which in turn makes the producer more likely to want to sell at the bundled price.
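To see why the correlation matters, here is a tiny follow-up sketch. It reuses the four valuations from the example but also tries a rearranged, positively correlated version (my illustrative rearrangement, where one buyer loves both books and the other loves neither). In this snippet the seller simply picks the revenue-maximizing price from among the buyers' valuations, so the bundle figure comes out a bit higher than the $11.98 above.

```python
def best_revenue(buyer_values):
    """Best single-price revenue when candidate prices are the buyers' valuations."""
    return max(p * sum(v >= p for v in buyer_values) for p in buyer_values)

negatively_correlated = [(4.99, 1.50), (2.25, 3.99)]   # tastes disagree (the example above)
positively_correlated = [(4.99, 3.99), (2.25, 1.50)]   # one buyer loves both, one loves neither

for label, buyers in [("negative", negatively_correlated), ("positive", positively_correlated)]:
    separate = sum(best_revenue([b[i] for b in buyers]) for i in range(2))
    bundled = best_revenue([sum(b) for b in buyers])
    print(f"{label} correlation: a la carte ${separate:.2f}, bundle ${bundled:.2f}")
# negative correlation: a la carte $8.98, bundle $12.48  -> bundling helps
# positive correlation: a la carte $8.98, bundle $8.98   -> bundling adds nothing
```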

Before wrapping up, note that bundling has an important secondary effect for authors. The main takeaway here is that producers of the materials can make more money through bundling. This gives authors more incentive to create additional materials—an author who would otherwise only make $10,000 from a novel could now make, say, $15,000 instead. So an author on the fence about whether to write the book is more likely to follow through. This further enhances consumer welfare because those buyers can now read a book that would otherwise not exist.

Finally, “producers” here has meant a combination of authors and Amazon. A skeptic might worry that Amazon will end up taking away all of the revenues. That may be an issue in the long run if Amazon becomes a monopoly, but the revenue share is more than fair for now. Indeed, Amazon is giving authors roughly $2 every time a Kindle Unlimited subscriber reads 10% of a book, which is substantial. And with Kindle Unlimited reaching more consumers than a la carte pricing would, writers can earn revenue from a larger share of readers.

If you want to know more about bundling, I highly recommend you read the Marginal Revolution post on the subject.

Calculate Day-of-Week Sales Averages on KDP

For the longest time, KDP aggregated all sales information by week. Now KDP has nice graphical breakdowns of daily sales. Naturally, I wondered if my sales averages differed significantly by the day of the week. I compiled an Excel spreadsheet to give me a quick answer. Apparently the day of the week does not have an impact for me, at least not in any significant way.

Still, I figured others would want to know the same information. As such, I did a little bit of extra work on the spreadsheet to make it usable for others. You can download it here. It is very simple to use. Just follow these four steps:

1) Select the tab in Excel that corresponds to the current day of the week. (For example, if today is Tuesday, use the Tuesday tab.)

2) Go to KDP’s sales dashboard. Use one of the pull down menus to open the last 90 days of sales. This will give you the most days to average over.

3) Copy each day of sales from the graph to the spreadsheet. This will require some work because you have to do it manually and need to pay close attention to the graph to make sure you are copying down the correct number.

4) Excel will automatically calculate each day of the week’s average sales.

Again, you can download it here. Let me know what you think.

[Image: screenshot of the day-of-week averages spreadsheet]
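If you would rather script this than fiddle with Excel, here is a rough Python equivalent. It assumes you have copied the daily numbers into a CSV with "date" and "units" columns; that file and those column names are hypothetical, since KDP does not export exactly this.

```python
# Day-of-week sales averages from a hand-built CSV of daily sales.
# The file name and column names ("date", "units") are hypothetical.
import pandas as pd

sales = pd.read_csv("kdp_daily_sales.csv", parse_dates=["date"])
sales["weekday"] = sales["date"].dt.day_name()

# Average units sold for each day of the week, Monday through Sunday.
order = ["Monday", "Tuesday", "Wednesday", "Thursday",
         "Friday", "Saturday", "Sunday"]
print(sales.groupby("weekday")["units"].mean().reindex(order))
```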

We Shouldn’t Generalize Based on World War I

As you probably already know, today is the 100th anniversary of the assassination of Archduke Franz Ferdinand, which would set off the July Crisis and then World War I. For the next few months, the media will undoubtedly bombard us with World War I history and attempt to teach us something based on it.

This is not a good idea. World War I was exceptional. To make generalizations based on it would be like making generalizations about basketball players based on LeBron James. LeBron is so many standard deviations away from the norm that anything you learn from him hardly carries over to basketball players in general. At best, it carries over to maybe one player per generation. The same is true about World War I. It was so many standard deviations away from the norm that anything you learn from it hardly carries over to wars in general. At best, it carries over to maybe one war per generation.

Anyway, the impetus for this post was a piece on Saturday’s edition of CBS’s This Morning, where a guest said something to the effect of “the lesson of World War I is that sometimes it is difficult to stop fighting once you’ve started.” (Apologies I don’t have the exact quote. They didn’t put the piece on their website, but I will update this post if they do.) I suppose this is true in the strictest sense–sometimes countries fight for a very long time. However, such long wars are rare events. Most armed conflicts between countries are very, very short.

To illustrate this, I did some basic analysis of militarized interstate disputes (MIDs)–armed conflicts that may or may not escalate to full-scale war–from 1816 to 2010. If we are interested in whether fighting really begets further fighting, this is the dataset we want to analyze, since it represents all instances in which states started butting heads, not just the most salient ones.

So what do we find in the data? Well, the dataset includes a measure of the length of each conflict. If fighting begets further fighting in general, we would expect to see very few short conflicts and many more long ones. Yet, looking at a histogram of conflicts by length, we find the exact opposite:

[Figure: histogram of MID lengths, in 50-day bins]

(I used the maximum length measure in the MIDs dataset to create this histogram. Because the length of a conflict can vary depending on who you ask, the MIDs dataset includes a minimum and maximum duration measure. By using the maximum, I have stacked the deck to make it appear that conflicts last longer.)

Each bin in the histogram represents 50 days. A majority of all MIDs fit into that very first bin. More than 90% fall into the first seven bins, or roughly a year. Conflicts as long as World War I are barely a blip on the radar. Thus, fighting does not appear to beget further fighting. If anything, it appears to do just the opposite.
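For anyone who wants to replicate the figure, here is a rough sketch of how it could be built in Python, assuming the dispute-level MID data has been saved to a CSV with a maximum-duration column. The file and column names below are placeholders, not the dataset's actual variable names.

```python
# Histogram of MID lengths in 50-day bins, plus the shares quoted above.
# "mid_disputes.csv" and "max_duration_days" are placeholder names.
import pandas as pd
import matplotlib.pyplot as plt

mids = pd.read_csv("mid_disputes.csv")
durations = mids["max_duration_days"].dropna()

bins = range(0, int(durations.max()) + 50, 50)        # 50-day bins
plt.hist(durations, bins=bins)
plt.xlabel("Maximum dispute length (days)")
plt.ylabel("Number of MIDs, 1816-2010")
plt.show()

# Share of disputes in the first bin and in the first seven bins (~1 year).
print((durations <= 50).mean(), (durations <= 350).mean())
```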

One potential confound here is that these conflicts might be ending because one side has been completely destroyed militarily; in other words, fighting begets further fighting but stops rather quickly because one side cannot fight any longer. But this is not true. Less than 5% of MIDs result in more than 1,000 casualties, and even fewer destroy enough resources to prohibit one side from continuing to fight.

So why doesn’t fighting beget further fighting? The simple answer is that war is a learning process. Countries begin fighting when they fail to reach a negotiated settlement, often because one side overestimates its ability to win a war or underestimates how willing to fight the opposing side is. War thus acts as a mechanism for information transmission–the more countries fight, the more they learn about each other, and the easier it is to reach a settlement and avoid further costs of conflict. As a result, we should expect war to beget less war, not more. And the histogram shows that this is true in practice.

Do not take this post to mean that World War I was unimportant. Although it was exceptional, it also represents a disproportionately large percentage of mankind’s casualties in war. It was brutal. It was devastating. It was ugly. But for all those reasons, it was not normal. Consequently, we should not be generalizing based on it.

“I Was a Digital Best Seller!”: NY Times’ Bizarrely Misleading Op-Ed

A couple days ago, the New York Times published an op-ed from Tony Horwitz, a Pulitzer Prize winner, chronicling his publishing of BOOM: Oil, Money, Cowboys, Strippers, and the Energy Rush That Could Change America Forever. A Long, Strange Journey Along the Keystone XL Pipeline. Ostensibly, it is the story of how online publishing does not live up to its hype. In reality, it is a parable of someone without good strategic or business sense committing a bunch of mistakes. And the best part: despite a lack of self-awareness, he gets paid off anyway.

To recap the important points from the op-ed, The Global Mail offered Horwitz $15,000 (plus $5,000 for expenses) to write a long-form piece on the Keystone XL pipeline. By the time Horwitz finished, The Global Mail had folded. He thus approached Byliner, who offered to publish the story as a digital book for 33% of the profits and a $2,000 advance. After a month, his book had only sold 800 copies, not enough to earn out the advance. This leads Horwitz to conclude that digital publishing is a failing enterprise.

However, the op-ed is actually a story of Horwitz making a bunch of mistakes and not realizing it. To wit:

1) As far as I can tell, he never signed a contract with The Global Mail. If I were going to spend a large percentage of my year writing a single story with the promise of $15,000 at the end, I would want a legal guarantee to that money precisely because of the issues he encountered.

2) He used a publisher (Byliner) that apparently did nothing for him. With digital publishing so easy now, the only reason to use a publisher is that they will actually do something for you. After all, Amazon will give you roughly two-thirds of the purchase price if you go it alone. If you are giving half of that to your publisher, you had better be getting a lot back. Instead, the publisher gave him a cover and siphoned off a large chunk of money.

3) He used an incompetent agent to sign the deal with Byliner. A good agent here would make sure the contract forces Byliner to do its job by publicizing the book to warrant its share of the revenue. Apparently there is no such language in the contract. If you are planning on signing a contract without giving it much forethought, why let the agent steal a percentage of your money as well?

4) Okay, #3 is not completely true—Byliner’s publicist “wrote a glowing review of ‘Boom’ on Amazon, the main retailer of Byliner titles.” Amazon’s review policies make it clear that this is a flagrant violation: “Sentiments by or on behalf of a person or company with a financial interest in the product or a directly competing product (including reviews by publishers…)” are not allowed. So Horwitz is openly admitting that he has used false reviews. In the process, he implicates his publisher as well.

5) He thinks that being on the best sellers list for a particular subcategory means that he was selling a lot of copies. For someone with an extensive publishing history, this is remarkably naive. In fact, you can sell a handful of copies and get on these lists; you should not expect to strike it rich unless you are on the overall best sellers list.

So we have a publisher that is completely unhelpful and an author who lacks business and strategic sense who are not making much money on a book venture. Does this warrant a New York Times op-ed on how digital publishing is full of false promises? Hardly.

The irony? The New York Times provides great publicity, even if your op-ed is completely wrong. As it stands, the book is #445 on Amazon’s best sellers list and was probably higher a couple of days ago when the story was first published. The real lesson here is that you can be horribly incompetent and still make a lot of money by writing about all of the mistakes you make—as long as you can convince the New York Times that it is the system’s fault, not yours.

This post has been very negative overall, so I feel like I should end on a kinder note. Tony Horwitz may be a fantastic writer. (I don’t know—I’ve never read anything of his. But a Pulitzer is a good indication.) His book on the Keystone pipeline might be great too. (The reviews on Amazon are good, likely even if you take out the fake review(s).) The takeaway point is that you need more than just good writing to succeed in the publishing world. Horwitz showed a lack of good sense here, and these are mistakes that you should avoid making yourself.

Penalty Kicks Are Random

Here’s a quick followup to my post on the game theory of penalty kicks.

During today’s World Cup match between Switzerland and France, Karim Benzema took a penalty kick versus Swiss goalkeeper Diego Benaglio. Benzema shot left; Benaglio guessed left and successfully stopped the shot. Immediately thereafter, the ESPN broadcasters explained why this outcome occurred: Benaglio “did his homework,” insinuating that Benaglio knew which way the kick was coming and stopped it appropriately.

This is idiotic analysis for two reasons. First is the game theoretical issue. It makes no sense for Benzema to be predictable in this manner. Imagine for a moment that Benzema had a strong tendency to shoot left. The Swiss analytics crew would pick up on this and tell Benaglio. But the French analytics crew can spot this just as easily. At that point, they would tell Benzema about his problem and instruct him to shoot right more frequently. After all, the way things are going, the Swiss goalie is going to guess left, which leaves the right wide open.

In turn, to avoid this kind of nonsense, the players need to be randomizing. The mixed strategy algorithm gives us a way to solve this problem, and it isn’t particularly laborious. Moreover, there is decent empirical evidence to suggest that something to this effect occurs in practice.

The second issue is statistical. Suppose for the moment that the players were not playing equilibrium strategies but still were not stupid enough to always take the same action. (That is, the goalie sometimes dives left and sometimes dives right, while the striker sometimes aims left and sometimes aims right. However, the probabilities do not match the equilibrium.) Then we only have one observation to study. If you have spent even a day in a statistics class, you know that the evidence we have does not allow us to differentiate between the following:

  1. a player who successfully outsmarted his opponent
  2. a player who outsmarted his opponent but got unlucky
  3. a player who got outsmarted but got lucky
  4. a player who got outsmarted and lost
  5. players playing equilibrium strategies

I can’t think of a compelling reason to make anything other than (5) the null hypothesis in this case. Jumping to conclusions about (1), (2), (3), or (4) is just bad commentary, pure and simple.

The embarrassing thing about this kind of commentary is that it is pervasive and could be reasonably stopped with just a tiny bit of game theory classroom experience. Even someone who watched the first 58 minutes of my Game Theory 101 (up to and including the mixed strategy algorithm) playlist could provide better analysis.

Tesla’s Patent Giveaway Isn’t Altruistic—And That’s Not a Bad Thing

Tesla Motors recently announced that it is opening its electric car patents to competitors. The buzz around the Internet is that this is another case of Tesla’s CEO Elon Musk doing something good for humanity. However, the evidence suggests another explanation: Tesla is doing this to make money, and that’s not a bad thing.

The issue Tesla faces is what game theorists call a coordination problem. Specifically, it is a stag hunt:

For those who are unfamiliar and did not watch the video, a stag hunt is the general name for a game where both parties want to coordinate on taking the same action because it gives each side its individually best outcome. However, a party’s worst possible outcome is to take that action while the other side does not. This leads to two reasonable outcomes: both coordinate on the good action and do very well, or both do not take that action (because they expect the other one not to) and do poorly.
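For concreteness, here is a minimal sketch of a stag hunt with made-up payoff numbers (they are illustrative, not from the post): coordinating is best for both players, going it alone is worst, and the code simply checks which outcomes are Nash equilibria.

```python
# A 2x2 stag hunt with illustrative payoffs: (row player, column player).
payoffs = {
    ("Stag", "Stag"): (4, 4),   # both coordinate: best outcome for each
    ("Stag", "Hare"): (0, 3),   # go it alone: worst outcome
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (3, 3),   # both play it safe: mediocre but secure
}
actions = ["Stag", "Hare"]

def is_nash(row, col):
    """Neither player can do better by deviating unilaterally."""
    r, c = payoffs[(row, col)]
    return (all(payoffs[(alt, col)][0] <= r for alt in actions)
            and all(payoffs[(row, alt)][1] <= c for alt in actions))

print([cell for cell in payoffs if is_nash(*cell)])
# [('Stag', 'Stag'), ('Hare', 'Hare')] -- the two "reasonable outcomes" described above
```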

This is a common problem in emerging markets. The core issue is that some technologies need other technologies to function properly. That is, technology A is worthless without technology B, and technology B is worthless without technology A. Manufacturers of A might want to produce A and manufacturers of B might want to produce B, but they cannot do this profitably without each other’s support.

Take HDTV as a recent example. We are all happy to live in a world of HD: producers now create a better product, and consumers find the images far more visually appealing. However, the development of HDTV took longer than it should have. The problem was that producers had no reason to switch over to HD broadcasting until people owned HDTVs. Yet television manufacturers had no reason to create HDTVs until there were HD programs available for consumption. This created an awkward coordination problem in which both producers and manufacturers were waiting around for each other. HDTV only became commonplace after cheaper production costs made the transition less risky for either party.

I imagine car manufacturers faced a similar problem a century ago. Ford and General Motors may have been ready to sell cars to the public, but the public had little reason to buy them without gas stations all around to make it easy to refuel their vehicles. But small business owners had little reason to start up gas stations without a large group of car owners around to purchase from them.

The above problem should make Tesla’s major barrier clear. Tesla has the electric car technology ready. What they lack is a network of charging stations that can make long-distance travel with electric cars practical. Giving away the patents to competitors potentially means more electric cars on the road and more charging stations, without having to spend significant capital that the small company does not have. Tesla ultimately wins because they have a first-mover advantage in developing the technology.

So this is less about altruism and more about self-interest. But that is not a bad thing. 99% of the driving force behind economics is mutual gain. I think this fact gets lost in the modern political/economic debate because there are some (really bad) cases where that is not true. But here, Tesla wins, other car manufacturers win, and consumers win.

Oh, oil producing companies lose. Whatever.

H/T to Dillon Bowman (a student of mine at the University of Rochester) and /u/Mubarmi for inspiring this post.

The Game Theory of Soccer Penalty Kicks

With the World Cup starting today, now is a great time to discuss the game theory behind soccer penalty kicks. This blog post will do three things: (1) show that penalty kicks are a very common type of game, one that game theory can solve very easily, (2) show that players behave more or less as game theory would predict, and (3) show that a striker becoming more accurate to one side makes him less likely to kick to that side. Why? Read on.

The Basics: Matching Pennies
Penalty kicks are straightforward. A striker lines up with the ball in front of him. He runs forwards and kicks the ball toward the net. The goalie tries to stop it.

Despite the ordering I just listed, the players essentially move simultaneously. Although the goalie dives after the striker has kicked the ball, he cannot actually wait until the ball comes off the foot to decide which way to dive—because the ball moves so fast, it will already be behind him by the time he finishes his dive. So the goalie must pick his strategy before observing any relevant information from the striker.

This type of game is actually very common. Both players pick a side. One player wants to match sides (the goalie), while the other wants to mismatch (the striker). That is, from the striker’s perspective, the goalie wants to dive left when the striker kicks left and dive right when the striker kicks right; the striker wants to kick left when the goalie dives right and kick right when the goalie dives left. This is like a baseball batter trying to guess what pitch the pitcher will throw while the pitcher tries to confuse the batter. Similarly, a basketball shooter wants a defender to break the wrong way to give him an open lane to the basket, while the defender wants to stay lined up with the ball handler.

Because the game is so common, it should not be surprising that game theorists have studied this type of game at length. (Game theory, after all, is the mathematical study of strategy.) The common name for the game is matching pennies. When the sides are equally powerful, the solution is very simple:

If you skipped the video, the solution is for both players to pick each side with equal probability. For penalty kicks, that means the striker kicks left half the time and right half the time; the goalie dives left half the time and dives right half the time.

Why are these optimal strategies? The answer is simple: neither party can be exploited under these circumstances. This might be easier to see by looking at why all other strategies are not optimal. If the striker kicked left 100% of the time, it would be very easy for the goalie to stop the shot—he would simply dive left 100% of the time. In essence, the striker’s predictability allows the goalie to exploit him. This is also true if the striker is aiming left 99% of the time, or 98% of the time, and so forth—the goalie would still want to always dive left, and the striker would not perform as well as he could by randomizing in a less predictable manner.

In contrast, if the striker is kicking left half the time and kicking right half the time, it does not matter which direction the goalie dives—he is equally likely to stop the ball at that point. Likewise, if the goalie is diving left half the time and diving right half the time, it does not matter which direction the striker kicks—he is equally likely to score at that point.
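Here is a quick numerical check of that logic, under a simplification of mine: a goal happens exactly when the goalie dives the wrong way. Anything other than a 50/50 mix lets a best-responding goalie hold the striker below a 50% scoring rate.

```python
def goal_probability(p_left, goalie_dive):
    """Striker kicks left with probability p_left; a goal happens iff
    the goalie dives the wrong way (a simplified 0/1 payoff)."""
    return (1 - p_left) if goalie_dive == "left" else p_left

for p in [1.0, 0.99, 0.75, 0.5]:
    worst_case = min(goal_probability(p, dive) for dive in ("left", "right"))
    print(f"kick left {p:.0%}: scoring rate against a best-responding goalie = {worst_case:.0%}")
# 100% -> 0%, 99% -> 1%, 75% -> 25%, 50% -> 50%
```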

The key takeaways here are twofold: (1) you have to randomize to not be exploited and (2) you need to think of your opponent’s strategic constraints when choosing your move.

Real Life Penalty Kicks
So that’s the basic theory of penalty kicks. How does it play out in reality?

Fortunately, we have a decent idea. A group of economists (including Freakonomics’ Steve Levitt) once studied the strategies and results of penalty kicks from the French and Italian leagues. They found that players strategize roughly how they ought to.

How did they figure this out? To begin, they used a more sophisticated model than the one I introduced above. Real life penalty kicks differ in two key ways. First, kicking to the left is not the same thing as kicking to the right. A right-footed striker naturally hits the ball harder and more accurately to the left than the right. This means that a ball aimed to the right is more likely to miss the goal completely and more likely to be stopped if the goalie also dives that way. And second, a third strategy for both players is also reasonable: aim to the middle/defend the middle.

Regardless of the additional complications, there are a couple of key generalizations that hold from the logic of the first section. First, a striker’s probability of scoring should be equal regardless of whether he kicks left, straight, or right. Why? Suppose this were not true. Then someone is being unnecessarily exploited in this situation. For example, imagine that strikers are kicking very frequently to the left. Realizing this, goalies are also diving very frequently to the left. This leaves the striker with a small scoring percentage to the left and a much higher scoring percentage when he aims to the undefended right. Thus, the striker should correct his strategy by aiming right more frequently. So if everyone is playing optimally, his scoring percentage needs to be equal across all his strategies; otherwise, some sort of exploitation is available.

Second, a goalie’s probability of not being scored against must be equal across all of his defending strategies. This follows from the same reason as above: if diving toward one side is less likely to result in a goal, then someone is being exploited who should not be.

All told, this means that we should observe equal probabilities among all strategies. And, sure enough, this is more or less what goes on. Here’s Figure 4 from the article, which gives the percentage of shots that go in for any combination of strategies:

[Figure 4 from the article: scoring percentages for each combination of striker and goalie strategies]

The key places to look are the “total” column and row. The total column for the goalie on the right shows that he is very close to giving up a goal 75% of the time regardless of his strategy. The total row for the striker at the bottom shows more variance—in the data, he scores 81% of the time aiming toward the middle but only 70.1% of the time aiming to the right—but those differences are not statistically significant. In other words, we would expect that sort of variation to occur purely due to chance.

Thus, as far as we can tell, the players are playing optimal strategies as we would suspect. (Take that, you damn dirty apes!)

Relying on Your Weakness
One thing I glossed over in the second part is specifically how a striker’s strategy should change due to the weakness of the right side versus the left. Let’s take care of that now.

Imagine you are a striker with an amazingly accurate left side but a very inaccurate right side. More concretely, you will always hit the target if you shoot left, but you will miss some percentage of the time on the right side. Realizing your weakness, you spend months practicing your right shot and double its accuracy. Now that you have a stronger right side, how will this affect your penalty kick strategy?

The intuitive answer is that it should make you shoot more frequently toward the right—after all, your shot has improved on that side. However, this intuition is wrong—you end up shooting less often to the right. Equivalently, this means the more inaccurate you are to one side, the more you end up aiming in that direction.

Why is this the case? If you want the full explanation, watch the following two videos:

The shorter explanation is as follows. As mentioned at the end of the first section of this blog post, players must consider their opponent’s capabilities as they develop their strategies. When you improve your accuracy to the right side, your opponent reacts by defending the right side more—he can no longer so strongly rely on your inaccuracy as a phantom defense. So if you start aiming more frequently to the right side, you end up with an over-correction—you are kicking too frequently toward a better defended side. Thus, you end up kicking more frequently to the left to account for the goalie wanting to dive right more frequently.
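Here is a small sketch of that result, using the setup from a few paragraphs up (left shots always on target, right shots on target with some probability) plus one simplifying assumption of mine: an on-target shot scores exactly when the goalie dives the wrong way. The equilibrium comes from the usual indifference conditions; this is not necessarily the exact model in the videos.

```python
def equilibrium(q):
    """Mixed-strategy equilibrium when left shots always hit the target and
    right shots hit with probability q.  Each probability comes from making
    the *other* player indifferent between his two actions."""
    kick_left = q / (1 + q)   # goalie indifferent: (1 - kick_left) * q == kick_left * 1
    dive_left = 1 / (1 + q)   # striker indifferent: (1 - dive_left) * 1 == dive_left * q
    return kick_left, dive_left

for q in [0.4, 0.8]:          # right-side accuracy before and after practice
    kick_left, dive_left = equilibrium(q)
    print(f"right-side accuracy {q:.0%}: "
          f"kick right {1 - kick_left:.0%}, goalie dives right {1 - dive_left:.0%}")
# Doubling accuracy from 40% to 80% cuts the kick-right rate from ~71% to ~56%,
# because the goalie now defends the right side far more often (~29% -> ~44%).
```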

And that’s the game theory of penalty kicks.