Calculate Day-of-Week Sales Averages on KDP

For the longest time, KDP aggregated all sales information by week. Now KDP has nice graphical breakdowns of daily sales. Naturally, I wondered if my sales averages differed significantly by the day of the week. I compiled an Excel spreadsheet to give me a quick answer. Apparently the day of the week does not have an impact for me, at least not in any significant way.

Still, I figured others would want to know the same information. As such, I did a little bit of extra work on the spreadsheet to make it usable for others. You can download it here. It is very simple to use. Just follow these four steps:

1) Select the tab in Excel that corresponds to the current day of the week. (For example, if today is Tuesday, use the Tuesday tab.)

2) Go to KDP’s sales dashboard. Use one of the pull down menus to open the last 90 days of sales. This will give you the most days to average over.

3) Copy each day of sales from the graph to the spreadsheet. This will require some work because you have to do it manually and need to pay close attention to the graph to make sure you are copying down the correct number.

4) Excel will automatically calculate each day of the week’s average sales.
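If you would rather check the numbers outside of Excel, the same day-of-week averaging takes only a few lines of Python. This is just a sketch with made-up sales figures; you would substitute the daily numbers you read off the KDP graph.

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical daily sales copied from the KDP dashboard: (date, units sold)
daily_sales = [
    (date(2014, 6, 2), 5),   # Monday
    (date(2014, 6, 3), 7),   # Tuesday
    (date(2014, 6, 9), 3),   # Monday
    (date(2014, 6, 10), 9),  # Tuesday
]

# Group sales by day of the week, then average each group
by_weekday = defaultdict(list)
for day, units in daily_sales:
    by_weekday[day.strftime("%A")].append(units)

averages = {weekday: mean(units) for weekday, units in by_weekday.items()}
print(averages)
```

With a full 90 days pasted in, each weekday gets roughly 13 observations, the same as the spreadsheet.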

Again, you can download it here. Let me know what you think.


We Shouldn’t Generalize Based on World War I

As you probably already know, today is the 100th anniversary of the assassination of Archduke Franz Ferdinand, which would set off the July Crisis and then World War I. For the next few months, the media will undoubtedly bombard us with World War I history and attempt to teach us something based on it.

This is not a good idea. World War I was exceptional. To make generalizations based on it would be like making generalizations about basketball players based on LeBron James. LeBron is so many standard deviations away from the norm that anything you learn from him hardly carries over to basketball players in general. At best, it carries over to maybe one player per generation. The same is true about World War I. It was so many standard deviations away from the norm that anything you learn from it hardly carries over to wars in general. At best, it carries over to maybe one war per generation.

Anyway, the impetus for this post was a piece on Saturday’s edition of CBS’s This Morning, where a guest said something to the effect of “the lesson of World War I is that sometimes it is difficult to stop fighting once you’ve started.” (Apologies I don’t have the exact quote. They didn’t put the piece on their website, but I will update this post if they do.) I suppose this is true in the strictest sense–sometimes countries fight for a very long time. However, such long wars are rare events. Most armed conflicts between countries are very, very short.

To illustrate this, I did some basic analysis of militarized interstate disputes (MIDs) from 1816 to 2010–armed conflicts that may or may not escalate to full-scale war. If we are interested in whether fighting really begets further fighting, this is the dataset we want to analyze, since it represents all instances in which states started butting heads, not just the most salient ones.

So what do we find in the data? Well, the dataset includes a measure of length of conflicts. If fighting begets further fighting in general, we would expect to see very few instances of short conflicts and a much larger distribution of longer conflicts. Yet, looking at a histogram of conflicts by length, we find the exact opposite:

[Histogram: militarized interstate disputes by maximum duration]

(I used the maximum length measure in the MIDs dataset to create this histogram. Because the length of a conflict can vary depending on who you ask, the MIDs dataset includes a minimum and maximum duration measure. By using the maximum, I have stacked the deck to make it appear that conflicts last longer.)

Each bin in the histogram represents 50 days. A majority of all MIDs fit into that very first bin. More than 90% fall into the first seven bins, or roughly a year in time. Conflicts as long as World War I are barely a blip on the radar. Thus, fighting does not appear to beget further fighting. If anything, it appears to do just the opposite.
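You can get a feel for this shape without the MID data itself. The sketch below simulates conflict durations from an exponential distribution (the mean of 40 days is an assumption for illustration, not an estimate from the dataset) and bins them in 50-day increments, as in the histogram:

```python
import random

random.seed(42)

# The real MID duration data isn't bundled here, so simulate conflict
# lengths with an exponential distribution (assumed mean of 40 days);
# the point is the shape of the histogram, not the exact numbers.
durations = [random.expovariate(1 / 40) for _ in range(2000)]

BIN_DAYS = 50  # each histogram bin covers 50 days
bins = {}
for d in durations:
    b = int(d // BIN_DAYS)
    bins[b] = bins.get(b, 0) + 1

first_bin_share = bins.get(0, 0) / len(durations)
first_seven_share = sum(bins.get(i, 0) for i in range(7)) / len(durations)
print(f"share in first bin: {first_bin_share:.2f}")
print(f"share in first seven bins (~1 year): {first_seven_share:.2f}")
```

Even with generously assumed durations, short conflicts dominate the first bin and almost everything falls within the first year, which is the pattern in the actual data.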

One potential confound here is that these conflicts are ending because one side is being completely destroyed militarily. In that case, fighting would beget further fighting but stop rather quickly because one side cannot fight any longer. But this is not true. Fewer than 5% of MIDs result in more than 1,000 casualties, and even fewer destroy enough resources to prohibit one side from continuing to fight.

So why doesn’t fighting beget further fighting? The simple answer is that war is a learning process. Countries begin fighting when they fail to reach a negotiated settlement, often because one side overestimates its ability to win a war or underestimates how willing to fight the opposing side is. War thus acts as a mechanism for information transmission–the more countries fight, the more they learn about each other, and the easier it is to reach a settlement and avoid further costs of conflict. As a result, we should expect war to beget less war, not more. And the histogram shows that this is true in practice.

Do not take this post to mean that World War I was unimportant. Although it was exceptional, it also represents a disproportionately large percentage of mankind’s casualties in war. It was brutal. It was devastating. It was ugly. But for all those reasons, it was not normal. Consequently, we should not be generalizing based on it.

“I Was a Digital Best Seller!”: NY Times’ Bizarrely Misleading Op-Ed

A couple days ago, the New York Times published an op-ed from Tony Horwitz, a Pulitzer Prize winner, chronicling his publishing of BOOM: Oil, Money, Cowboys, Strippers, and the Energy Rush That Could Change America Forever. A Long, Strange Journey Along the Keystone XL Pipeline. Ostensibly, it is the story of how online publishing does not live up to its hype. In reality, it is a parable of someone without good strategic or business sense committing a bunch of mistakes. And the best part: despite a lack of self-awareness, he gets paid off anyway.

To recap the important points from the op-ed, The Global Mail offered Horwitz $15,000 (plus $5,000 for expenses) to write a long-form piece on the Keystone XL pipeline. By the time Horwitz finished, The Global Mail had folded. He thus approached Byliner, who offered to publish the story as a digital book for 33% of the profits and a $2,000 advance. After a month, his book had only sold 800 copies, not enough to earn out the advance. This leads Horwitz to conclude that digital publishing is a failing enterprise.

However, the op-ed is actually a story of Horwitz making a bunch of mistakes and not realizing it. To wit:

1) As far as I can tell, he never signed a contract with The Global Mail. If I were going to spend a large percentage of my year writing a single story with the promise of $15,000 at the end, I would want a legal guarantee to that money precisely because of the issues he encountered.

2) He had a publisher (Byliner) that apparently did nothing for him. With digital publishing so easy now, the only reason to use a publisher is that they will actually do something for you. After all, Amazon will give you two-thirds of the purchase price if you go it alone. If you are giving half of that to your publisher, you had better be getting a lot back. Instead, the publisher gave him a cover and siphoned off a large chunk of money.

3) He used an incompetent agent to sign the deal with Byliner. A good agent here would make sure the contract forces Byliner to do its job by publicizing the book to warrant its share of the revenue. Apparently there is no such language in the contract. If you are planning on signing a contract without giving it much forethought, why let the agent steal a percentage of your money as well?

4) Okay, #3 is not completely true—Byliner’s publicist “wrote a glowing review of ‘Boom’ on Amazon, the main retailer of Byliner titles.” Amazon’s review policies make it clear that this is a flagrant violation: “Sentiments by or on behalf of a person or company with a financial interest in the product or a directly competing product (including reviews by publishers…)” are not allowed. So Horwitz is openly admitting that he has used false reviews. In the process, he implicates his publisher as well.

5) He thinks that being on the best sellers list for a particular subcategory means that he was selling a lot of copies. For someone with an extensive publishing history, this is remarkably naive. In fact, you can sell a handful of copies and get on these lists; you should not expect to make it rich unless you are on the overall best sellers list.

So we have a publisher that is completely unhelpful and an author who lacks business and strategic sense who are not making much money on a book venture. Does this warrant a New York Times op-ed on how digital publishing is full of false promises? Hardly.

The irony? The New York Times provides great publicity, even if your op-ed is completely wrong. As it stands, the book is #445 on Amazon’s best sellers list and was probably higher a couple of days ago when the story was first published. The real lesson here is that you can be horribly incompetent and still make a lot of money by writing about all of the mistakes you make—as long as you can convince the New York Times that it is the system’s fault, not yours.

This post has been very negative overall, so I feel like I should end on a kinder note. Tony Horwitz may be a fantastic writer. (I don’t know—I’ve never read anything of his. But a Pulitzer is a good indication.) His book on the Keystone pipeline might be great too. (The reviews on Amazon are good, likely even if you take out the fake review(s).) The takeaway point is that you need more than just good writing to succeed in the publishing world. Horwitz showed a lack of good sense here, and these are mistakes that you should avoid making yourself.

Penalty Kicks Are Random

Here’s a quick followup to my post on the game theory of penalty kicks.

During today’s World Cup match between Switzerland and France, Karim Benzema took a penalty kick versus Swiss goalkeeper Diego Benaglio. Benzema shot left; Benaglio guessed left and successfully stopped the shot. Immediately thereafter, the ESPN broadcasters explained why this outcome occurred: Benaglio “did his homework,” insinuating that Benaglio knew which way the kick was coming and stopped it appropriately.

This is idiotic analysis for two reasons. First is the game theoretical issue. It makes no sense for Benzema to be predictable in this manner. Imagine for a moment that Benzema had a strong tendency to shoot left. The Swiss analytics crew would pick up on this and tell Benaglio. But the French analytics crew can spot this just as easily. At that point, they would tell Benzema about his problem and instruct him to shoot right more frequently. After all, the way things are going, the Swiss goalie is going to guess left, which leaves the right side wide open.

In turn, to avoid this kind of nonsense, the players need to be randomizing. The mixed strategy algorithm gives us a way to solve this problem, and it isn’t particularly laborious. Moreover, there is decent empirical evidence to suggest that something to this effect occurs in practice.

The second issue is statistical. Suppose for the moment that the players were not playing equilibrium strategies but were still not stupid enough to always take the same action. (That is, the goalie sometimes dives left and sometimes dives right, while the striker sometimes aims left and sometimes aims right. However, the probabilities do not match the equilibrium.) Then we only have one observation to study. If you have spent even a day in a statistics class, you know that the evidence we have does not allow us to differentiate between the following:

  1. a player who successfully outsmarted his opponent
  2. a player who outsmarted his opponent but got unlucky
  3. a player who got outsmarted but got lucky
  4. a player who got outsmarted and lost
  5. players playing equilibrium strategies

I can’t think of a compelling reason to make anything other than (5) the null hypothesis in this case. Jumping to conclusions about (1), (2), (3), or (4) is just bad commentary, pure and simple.
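To see why one observation is so uninformative, note that the chance the goalie dives to the correct side is a single coin flip whose bias depends on both players' mixes. The strategy profiles below are hypothetical numbers I made up for illustration; every one of them gives the "goalie matched" event positive probability, so one saved kick cannot distinguish among the stories on the list.

```python
def match_prob(striker_left, goalie_left):
    """Probability the goalie dives to the same side the striker kicks."""
    p, g = striker_left, goalie_left
    return p * g + (1 - p) * (1 - g)

# Hypothetical strategy profiles: (striker's P(left), goalie's P(left))
profiles = {
    "both playing equilibrium": (0.5, 0.5),
    "goalie reads a left-leaning striker": (0.8, 0.9),
    "goalie guessing badly": (0.5, 0.1),
}
for label, (p, g) in profiles.items():
    print(f"{label}: P(goalie matches) = {match_prob(p, g):.2f}")
```

A single Bernoulli draw with probability strictly between 0 and 1 is consistent with all of these profiles, which is exactly why the null should be equilibrium play rather than "did his homework."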

The embarrassing thing about this kind of commentary is that it is pervasive and could be stopped with just a tiny bit of game theory classroom experience. Even someone who watched only the first 58 minutes of my Game Theory 101 playlist (up to and including the mixed strategy algorithm) could provide better analysis.

Tesla’s Patent Giveaway Isn’t Altruistic—And That’s Not a Bad Thing

Tesla Motors recently announced that it is opening its electric car patents to competitors. The buzz around the Internet is that this is another case of Tesla’s CEO Elon Musk doing something good for humanity. However, the evidence suggests another explanation: Tesla is doing this to make money, and that’s not a bad thing.

The issue Tesla faces is what game theorists call a coordination problem. Specifically, it is a stag hunt:

For those unfamiliar who did not watch the video: a stag hunt is the general name for a game in which both parties want to coordinate on the same action because it gives each side its individually best outcome. However, a party’s worst possible outcome is to take that action while the other side does not. This leads to two reasonable outcomes: both coordinate on the good action and do very well, or both decline to take that action (because each expects the other not to) and do poorly.
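Here is a minimal sketch of the game with illustrative payoffs (the numbers are assumptions; only their ordering matters). Brute-forcing best responses confirms the two outcomes described above: coordinating on the good action and jointly avoiding it are each Nash equilibria.

```python
# Illustrative stag hunt payoffs (assumed numbers, only the ordering
# matters): coordinating on Stag is best for both, but hunting Stag
# alone is worst.
ACTIONS = ["Stag", "Hare"]
PAYOFFS = {  # (row action, column action) -> (row payoff, column payoff)
    ("Stag", "Stag"): (4, 4),
    ("Stag", "Hare"): (0, 3),
    ("Hare", "Stag"): (3, 0),
    ("Hare", "Hare"): (3, 3),
}

def pure_nash_equilibria():
    """Return action profiles where neither player can profitably deviate."""
    eq = []
    for a in ACTIONS:
        for b in ACTIONS:
            row_ok = all(PAYOFFS[(a, b)][0] >= PAYOFFS[(d, b)][0] for d in ACTIONS)
            col_ok = all(PAYOFFS[(a, b)][1] >= PAYOFFS[(a, d)][1] for d in ACTIONS)
            if row_ok and col_ok:
                eq.append((a, b))
    return eq

print(pure_nash_equilibria())  # [('Stag', 'Stag'), ('Hare', 'Hare')]
```

Both players at (Stag, Stag) earn their best payoff, but (Hare, Hare) is also stable: unilaterally switching to Stag drops a player from 3 to 0, which is the coordination problem Tesla faces.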

This is a common problem in emerging markets. The core issue is that some technologies need other technologies to function properly. That is, technology A is worthless without technology B, and technology B is worthless without technology A. Manufacturers of A might want to produce A and manufacturers of B might want to produce B, but they cannot do this profitably without each other’s support.

Take HDTV as a recent example. We are all happy to live in a world of HD: producers now create a better product, and consumers find the images far more visually appealing. However, the development of HDTV took longer than it should have. The problem was that producers had no reason to switch over to HD broadcasting until people owned HDTVs. Yet television manufacturers had no reason to create HDTVs until there were HD programs available for consumption. This created an awkward coordination problem in which producers and manufacturers were waiting around for each other. HDTV only became commonplace after cheaper production costs made the transition less risky for both parties.

I imagine car manufacturers faced a similar problem a century ago. Ford and General Motors may have been ready to sell cars to the public, but the public had little reason to buy them without gas stations all around to make it easy to refuel their vehicles. But small business owners had little reason to start up gas stations without a large group of car owners around to purchase from them.

The above problem should make Tesla’s major barrier clear. Tesla has the electric car technology ready. What they lack is a network of charging stations that can make long-distance travel with electric cars practical. Giving away the patents to competitors potentially means more electric cars on the road and more charging stations, without having to spend significant capital that the small company does not have. Tesla ultimately wins because they have a first-mover advantage in developing the technology.

So this is less about altruism and more about self-interest. But that is not a bad thing. 99% of the driving force behind economics is mutual gain. I think this fact gets lost in the modern political/economic debate because there are some (really bad) cases where that is not true. But here, Tesla wins, other car manufacturers win, and consumers win.

Oh, oil producing companies lose. Whatever.

H/T to Dillon Bowman (a student of mine at the University of Rochester) and /u/Mubarmi for inspiring this post.

The Game Theory of Soccer Penalty Kicks

With the World Cup starting today, now is a great time to discuss the game theory behind soccer penalty kicks. This blog post will do three things: (1) show that the penalty kick is a very common type of game and one that game theory can solve very easily, (2) show that players behave more or less as game theory would predict, and (3) explain why a striker becoming more accurate to one side makes him less likely to kick to that side. Why? Read on.

The Basics: Matching Pennies
Penalty kicks are straightforward. A striker lines up with the ball in front of him. He runs forward and kicks the ball toward the net. The goalie tries to stop it.

Despite the ordering I just listed, the players essentially move simultaneously. Although the goalie dives after the striker has kicked the ball, he cannot actually wait until the ball comes off the foot to decide which way to dive—because the ball moves so fast, it will already be behind him by the time he finishes his dive. So the goalie must pick his strategy before observing any relevant information from the striker.

This type of game is actually very common. Both players pick a side. One player wants to match sides (the goalie), while the other wants to mismatch (the striker). That is, from the striker’s perspective, the goalie wants to dive left when the striker kicks left and dive right when the striker kicks right; the striker wants to kick left when the goalie dives right and kick right when the goalie dives left. This is like a baseball batter trying to guess what pitch the pitcher will throw while the pitcher tries to confuse the batter. Similarly, a basketball shooter wants a defender to break the wrong way to give him an open lane to the basket, while the defender wants to stay lined up with the ball handler.

Because the game is so common, it should not be surprising that game theorists have studied this type of game at length. (Game theory, after all, is the mathematical study of strategy.) The common name for the game is matching pennies. When the sides are equally powerful, the solution is very simple:

If you skipped the video, the solution is for both players to pick each side with equal probability. For penalty kicks, that means the striker kicks left half the time and right half the time; the goalie dives left half the time and dives right half the time.

Why are these optimal strategies? The answer is simple: neither party can be exploited under these circumstances. This might be easier to see by looking at why all other strategies are not optimal. If the striker kicked left 100% of the time, it would be very easy for the goalie to stop the shot—he would simply dive left 100% of the time. In essence, the striker’s predictability allows the goalie to exploit him. This is also true if the striker is aiming left 99% of the time, or 98% of the time, and so forth—the goalie would still want to always dive left, and the striker would not perform as well as he could by randomizing in a less predictable manner.

In contrast, if the striker is kicking left half the time and kicking right half the time, it does not matter which direction the goalie dives—he is equally likely to stop the ball at that point. Likewise, if the goalie is diving left half the time and diving right half the time, it does not matter which direction the striker kicks—he is equally likely to score at that point.

The key takeaways here are twofold: (1) you have to randomize to not be exploited and (2) you need to think of your opponent’s strategic constraints when choosing your move.
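The exploitability argument can be checked directly. The sketch below uses a simplified symmetric version of the game (an assumption: the striker scores exactly when the goalie mismatches) and computes the striker's guaranteed scoring rate against a goalie who best-responds to his mix. The guarantee peaks at exactly 50/50.

```python
def striker_value(p_left):
    """Striker's scoring probability against a goalie who best-responds
    to the striker's mix. In this simplified game the striker scores
    if and only if the goalie dives to the wrong side."""
    score_if_goalie_left = 1 - p_left   # striker scores when he kicks right
    score_if_goalie_right = p_left      # striker scores when he kicks left
    # The goalie picks whichever dive holds the striker to less
    return min(score_if_goalie_left, score_if_goalie_right)

# Scan striker mixes: the guaranteed scoring rate peaks at a 50/50 mix
best_mix = max((p / 100 for p in range(101)), key=striker_value)
print(best_mix, striker_value(best_mix))  # 0.5 0.5
```

Kicking left 100% of the time guarantees a scoring rate of zero against an attentive goalie, while the 50/50 mix locks in a 50% scoring rate no matter what the goalie does.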

Real Life Penalty Kicks
So that’s the basic theory of penalty kicks. How does it play out in reality?

Fortunately, we have a decent idea. A group of economists (including Freakonomics’ Steve Levitt) once studied the strategies and results of penalty kicks from the French and Italian leagues. They found that players strategize roughly how they ought to.

How did they figure this out? To begin, they used a more sophisticated model than the one I introduced above. Real life penalty kicks differ in two key ways. First, kicking to the left is not the same thing as kicking to the right. A right-footed striker naturally hits the ball harder and more accurately to the left than the right. This means that a ball aimed to the right is more likely to miss the goal completely and more likely to be stopped if the goalie also dives that way. And second, a third strategy for both players is also reasonable: aim to the middle/defend the middle.

Regardless of the additional complications, a couple of key generalizations follow from the logic of the first section. First, a striker’s probability of scoring should be equal regardless of whether he kicks left, straight, or right. Why? Suppose this were not true. Then someone is being unnecessarily exploited. For example, imagine that strikers are kicking very frequently to the left. Realizing this, goalies are also diving very frequently to the left. This leaves the striker with a small scoring percentage to the left and a much higher scoring percentage when he aims to the undefended right. Thus, the striker should correct his strategy by aiming right more frequently. So if everyone is playing optimally, his scoring percentage needs to be equal across all of his strategies; otherwise, some sort of exploitation is available.

Second, a goalie’s probability of not being scored against must be equal across all of his defending strategies. This follows from the same reason as above: if diving toward one side is less likely to result in a goal, then someone is being exploited who should not be.

All told, this means that we should observe equal probabilities among all strategies. And, sure enough, this is more or less what goes on. Here’s Figure 4 from the article, which gives the percentage of shots that go in for any combination of strategies:

[Figure 4: scoring percentages for each combination of striker and goalie strategies]

The key places to look are the “total” column and row. The total column for the goalie on the right shows that he is very close to giving up a goal 75% of the time regardless of his strategy. The total row for the striker at the bottom shows more variance—in the data, he scores 81% of the time aiming toward the middle but only 70.1% of the time aiming to the right—but those differences are not statistically significant. In other words, we would expect that sort of variation to occur purely due to chance.

Thus, as far as we can tell, the players are playing optimal strategies as we would suspect. (Take that, you damn dirty apes!)

Relying on Your Weakness
One thing I glossed over in the second part is specifically how a striker’s strategy should change due to the weakness of the right side versus the left. Let’s take care of that now.

Imagine you are a striker with an amazingly accurate left side but a very inaccurate right side. More concretely, you will always hit the target if you shoot left, but you will miss some percentage of the time on the right side. Realizing your weakness, you spend months practicing your right shot and double its accuracy. Now that you have a stronger right side, how will this affect your penalty kick strategy?

The intuitive answer is that it should make you shoot more frequently toward the right—after all, your shot has improved on that side. However, this intuition is not always correct—you may end up shooting less often to the right. Equivalently, this means the more inaccurate you are to one side, the more you end up aiming in that direction.

Why is this the case? If you want the full explanation, watch the following two videos:

The shorter explanation is as follows. As mentioned at the end of the first section of this blog post, players must consider their opponent’s capabilities as they develop their strategies. When you improve your accuracy to the right side, your opponent reacts by defending the right side more—he can no longer so strongly rely on your inaccuracy as a phantom defense. So if you start aiming more frequently to the right side, you end up with an over-correction—you are kicking too frequently toward a better defended side. Thus, you end up kicking more frequently to the left to account for the goalie wanting to dive right more frequently.
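Here is a toy numerical version of that logic. All of the scoring probabilities below are assumptions for illustration, not estimates from the data: kicking left always stays on target, kicking right stays on target with probability a, and a dive to the correct side cuts the scoring chance. Solving the goalie's indifference condition gives the striker's equilibrium mix, and doubling the right-side accuracy a raises the probability of kicking left.

```python
def p_kick_left(a):
    """Equilibrium P(striker aims left) in a toy 2x2 penalty-kick game.

    Assumed scoring probabilities (illustrative numbers only):
      kick left:  0.5 if the goalie dives left, 1.0 if he dives right
      kick right: a if the goalie dives left, 0.3 * a if he dives right,
    where a is the striker's right-side accuracy. For the goalie to mix,
    the striker's mix p must leave him indifferent between his dives:
      p * 0.5 + (1 - p) * a == p * 1.0 + (1 - p) * 0.3 * a
    Solving for p:
    """
    return 0.7 * a / (0.5 + 0.7 * a)

before = p_kick_left(0.5)   # weak right side
after = p_kick_left(1.0)    # right-side accuracy doubled
print(f"P(kick left) before: {before:.2f}, after: {after:.2f}")
```

In this toy model, doubling the right-side accuracy moves the striker from kicking left about 41% of the time to about 58% of the time: the better his right shot gets, the less he uses it, because the goalie now defends that side more.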

And that’s the game theory of penalty kicks.

Chimps Aren’t Better than Humans at Game Theory

(At least the evidence doesn’t match the claim.)

“Chimps Outsmart Humans When It Comes To Game Theory” has been making the social media rounds today. Unfortunately, this seems to be a case of social media run amok–the paper has some interesting results, but that interpretation is horribly off base.

Below, I will give four reasons why we shouldn’t conclude that chimps are better at game theory than humans. But first, let’s quickly review what happened. A bunch of chimps, Japanese students, and villagers played some basic, zero-sum, simultaneous move games like matching pennies. The mixed strategy algorithm derives equilibrium predictions of what each player should do under these scenarios. As it turns out, chimps played strategies closer to the equilibrium predictions. Therefore, the proposed conclusion is that chimps are better game theorists than humans.

So what’s wrong here? Well…

The Sample Size Is Lacking
Who participated in the study? Six chimps, thirteen female Japanese students, and twelve males from Guinea. We can’t generalize differences between these groups in a meaningful way without a larger sample size.

The Chimps Aren’t a Random Sample
From the study:

Six chimpanzees (Pan Troglodytes) at the Kyoto University Primate Research Institute voluntarily participated in the experiment. Each chimpanzee pair was a mother and her offspring…all six had previously participated in cognitive studies, including social tasks involving food and token sharing.

There are a couple of problems here. First, the pairs of chimps that played are related. It stands to reason that a mother who is good at these games would produce offspring that is also good at these games. So we really aren’t looking at six chimps so much as three. Ouch.

(It should be noted that the Japanese students aren’t really random either since they all come from the same university. However, this is true for many studies of this sort, so I’m going to overlook it.)

Second, these aren’t even your regular chimps. They have played plenty of games before!

Combined, this is like taking a group of University of Rochester Department of Political Science (URDPS) students and comparing their results to a group of random Californians. The URDPS group is “related” (they all go to the same school), and they all have plenty of experience playing games (at least three semesters’ worth). They would undoubtedly play more rationally than the random group from California. But you can’t use this to claim that New Yorkers play more rationally than Californians. Yet that is the analogous claim being made.

They Aren’t Playing the Game the Researchers Are Testing Against
Only the researchers knew the game that the players were playing. In contrast, the players only knew their payoffs, not their opponents’. The mixed strategy algorithm only makes predictions about how players should play given that all facets of the game are common knowledge. That’s clearly not the case here.

Instead, the real game here is spending a number of iterations trying to decipher what your opponent’s payoffs are and then figuring out how to strategize accordingly. It’s not clear how to interpret the results in this light, though it is interesting that the (small, biased sample of) chimps figured this out more quickly.

The Game Was Not Inter-species
If you want to say that chimps are better at these games than humans, you need to have chimps playing humans. You would then have them play some number of iterations and see who received more apples/yen by the end of the game. Instead, it was chimps versus chimps and humans versus humans. With that data, you cannot claim one party is better than the other.

Nash Equilibrium Isn’t a Good Baseline
“Fine,” you might say in response to the last point, “but the chimps still played closer to the Nash equilibrium strategies than humans. Therefore, chimps are better game theorists than humans are.” That’s still not what we want to know, though. Who cares if the players were playing Nash? If I played this game tomorrow, would I play Nash? Yes–if I thought the other player was clever enough to do the same. If not, I would try to beat them.

This is a nuanced problem, so let’s look at an example. If you ask people about soccer penalty kicks, they will likely tell you that you should kick more frequently to your stronger side as it becomes more and more accurate. This is wrong: you should increase your reliance on your weaker side. Knowing this, if I played the role of the goalie, I would start diving to the kicker’s stronger side more frequently. The kicker would do poorly and I would do very well.

How would the study interpret this? It would say that we are both bad at game theory! But that’s not what’s going on here. We have one bad strategist and one sophisticated one. The interpretation of the study would get half of it right but completely blow the other half. Worse, a sophisticated goalie taking advantage of the kicker’s incompetence would outperform a goalie who played Nash instead.

Nash equilibrium is useful for many reasons; testing whether one species is better than another with it as a baseline is not one of them.

I’d have to go through the paper more closely than I have so far to give an overall impression of it. However, even without that, it is clear that the way social media is describing the results is very questionable.

Cheap Talk Causes Peace: Policy Bargaining and International Conflict

(Paper here.)

Here are two observations about international diplomacy:

First, crises are often the result of uncertainty about policy preferences. Currently, this is most apparent with the United States, Russia, and Ukraine. It remains unclear exactly how much influence Putin wants over Ukrainian politics. He might have expansionary aims or he may just want moderate control, aware that too much sway over Ukraine will cost Russia too much in subsidies in the long term. In the former case, the United States has reason to worry. In the latter case, the United States can relax.

Second, diplomatic conferences often discuss preferred policies. That is, the parties sit in a room and talk about what they want and what they don’t want. For scholars of crisis bargaining, this is weird. War, after all, is supposed to be the result of uncertainty about power or resolve or credible commitment. These types of discussions are seemingly cheap talk and are therefore supposed to have no effect on bargaining behavior.

In a new working paper, Peter Bils and I help explain these stylized facts. The first observation leads us to set aside the traditional sources of uncertainty–power and resolve–and instead focus on uncertainty over policy preferences on a real line, similar to the spatial model in American politics. The second observation suggests that we should study how cheap talk affects bargaining outcomes in such a world.

Our results are striking and run in contrast to the standard bargaining model of war. Rather than standardize policy preferences on a [0, 1] interval, we allow the winner of a war to endogenously decide what policy to implement afterward. This forces the parties to not only think about how likely they are to win a war and how much it will cost but also consider the quality of their post-conflict outcomes.

Without communication, we find that war may be inevitable–if your opponent’s preferred policy could range from moderate to very extreme, it is impossible to construct an offer that simultaneously appeases all types. But if the range of possibilities is smaller, peace can be inefficient. This is because the proposer may want to offer an amount that all types are willing to accept. Yet, in doing so, both the proposer and some types of the opponent would be better off implementing a more moderate policy instead.

We then allow for pre-play cheap talk communication. Normally, cheap talk fails to cause meaningful change because weaker types have incentives to misrepresent; that is, they want to mimic stronger types to receive more generous demands. In some situations, this remains true in our setup. However, cheap talk can sometimes work when the uncertainty is about policy preferences. This is because moderate types may not want to bluff extremism, since doing so would result in an intolerably extreme offer. As a result, where war was previously inevitable and peace was inefficient, peace always prevails and is efficient as well.

Empirically, this suggests that diplomacy is useful, which helps explain why states spend so much time and effort on it. And despite all of the incentives to lie, cheat, and bluff, those exchanges can sometimes be taken at face value.

Here’s the abstract from the paper:

Studies of bargaining and war generally focus on two sources of incomplete information: uncertainty about the probability of victory and uncertainty about the costs of fighting. We introduce a third: uncertainty about ideological preferences over a spatial policy. Under these conditions, standard results from the bargaining model of war break down: peace can be inefficient and it may be impossible to avoid war. We then extend the model to allow for cheap talk pre-play communications. Whereas incentives to misrepresent normally render cheap talk irrelevant, here communication can cause peace and ensure that agreements are efficient. Moreover, peace can become more likely when the proposer becomes more uncertain about the opposing state. Our results indicate that one major purpose of diplomacy during a crisis is simply to communicate preferences and that such communications can be credible.

Game Theory Is Really Counterintuitive

Every now and then, I hear someone say that game theory doesn’t tell us anything we don’t already know. In a sense, they are right–game theory is a methodology, so it only tells us what our assumptions imply. However, I challenge anyone to tell me that they would have believed most of the things below if we didn’t have formal modeling.

  • People often take aggressive postures that lead to mutually bad outcomes even though mutual cooperation is mutually preferable. Source.
  • Even if everyone agrees that an outcome is everyone’s favorite, they might not get that outcome. Source.
  • Sometimes having fewer options is better than having more options. Source.
  • On a penalty kick, soccer players might wish to kick more frequently toward their weaker side as their weaker side becomes increasingly inaccurate. Source.
  • In a duel, both gunslingers should shoot at the same time, even if one is a worse shot and would seem to benefit by walking closer to his target. Source.
  • There’s a reason why gas stations are on the same corner and politicians adopt very similar platforms. And it’s the same reason. Source.
  • Closing roads can improve everyone’s commute time. Source.
  • Fewer witnesses to a crime might be preferable to more. Source.
  • You should bid how much you value the good at stake in a second price auction. Source.
  • If you pay the value you think something is worth, you are going to end up with a negative net profit. Source.
  • Lighting money on fire is often profitable. Source.
  • Going to college can be valuable even if college doesn’t teach you anything. Source.
  • An animal might be better off jumping high in the air repeatedly than running away from a predator. Source.
  • Knowing just slightly more about the value of your car than a potential buyer can make it impossible to sell it. Source.
  • Nigerian email scammers should say they are from Nigeria even though just about everyone is familiar with the scam. Source.
  • Everyone might mimic everyone else just because two people chose to do the same thing. Source.
  • A biased media may be better than an unbiased media. Source.
  • Every voting system is manipulable. Source.
  • You might want to abstain from voting even though you strictly prefer one candidate to another. Source.
  • Unanimous jury rulings are more likely to convict the innocent than simple majority rule if jurors vote intelligently. Source.
  • The House of Representatives caters to the median member of the majority party, not the median member of the institution overall. Source.
  • Plurality, first-past-the-post voting leads to two-party systems. Source.
  • United Nations Security Council members sometimes do not veto resolutions even though they strongly dislike them. Source.
  • Without the ability to propose offers, you receive very few benefits from bargaining. Source.
  • Settlements always exist that are mutually preferable to war. Source.
  • Fighting wars removes the need for war. Source.
  • You might want to shoot to miss in war. Source.
  • Nonproliferation agreements can be credible. Source.
  • Weapons inspections are useful even if they never find anything. Source.
  • Economic sanctions are useful even though they often fail in application. Source.
  • Pitchers shouldn’t change their pitch selection with a runner on third base, even though curveballs are more likely to result in wild pitches. Source.
  • Sports teams can benefit from a lack of player safety in contract negotiations. Source.
  • You shouldn’t try to maximize your score in Words with Friends/Scrabble. Source.
  • In speed sailing, competitors deliberately choose paths they believe will be slower. Source.
  • The first player wins in Connect Four. Checkers ends in a draw. Source.
  • Chess has a solution, though we don’t know it yet. Source. (Or maybe not.)
  • Warren Buffett was never going to pay $1 billion to the winner of the March Madness bracket challenge. Source.
  • Park Place is worthless in McDonald’s Monopoly. Source.
  • Losing pays. Source.
  • As drug tests become more accurate, they should be implemented less often. Source.
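One item on this list, truthful bidding in a second-price auction, is easy to verify numerically. Here is a minimal sketch (the simulation setup, values, and bids are my own illustration): against every profile of rival bids, bidding your true value does at least as well as any deviation.

```python
import itertools

def payoff(my_bid, my_value, other_bids):
    """Second-price auction: the highest bid wins and pays the second-highest."""
    if my_bid > max(other_bids):
        return my_value - max(other_bids)
    return 0.0

# Check every combination of two rival bids against several possible deviations.
my_value = 10.0
for rivals in itertools.product([4.0, 8.0, 12.0], repeat=2):
    truthful = payoff(my_value, my_value, rivals)
    for deviation in [6.0, 9.0, 11.0, 14.0]:
        assert payoff(deviation, my_value, rivals) <= truthful

print("Truthful bidding is weakly optimal in every case checked.")
```

Overbidding only matters when it wins you the good at a price above your value, and underbidding only matters when it loses you the good at a price below your value–so neither deviation ever helps.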

Am I missing anything?

Roger Craig’s Daily Double Strategy: Smart Play, Bad Luck

Jeopardy! is in the finals of its Battle of the Decades, with Brad Rutter, Ken Jennings, and Roger Craig squaring off. The players have ridiculous résumés. Brad is the all-time Jeopardy! king, having never lost to a human and racking up $3 million in the process. Ken has won more games than anyone else. And Roger has the single-day earnings record.

That sets the scene for the middle of Double Jeopardy. Roger had accumulated a modest lead through the course of play and hit a Daily Double. He then made the riskiest play possible–he wagered everything. The plan backfired, and he lost all of his money. He was in the negatives by the end of the round and had to sit out of Final Jeopardy.

Did we witness the dumbest play in the history of Jeopardy!? I don’t think so–Roger’s play actually demonstrated quite a bit of savvy. Although Roger is a phenomenal player, Brad and Ken are leaps and bounds better than everyone else. (And Brad might be leaps and bounds better than Ken as well.) If Roger had made a safe wager, Brad and Ken would likely have marched past his score eventually–they are the best for a reason, after all. So safe wagers aren’t likely to win. Neither is wagering everything and getting it wrong. But wagering everything and getting it right would have given him a fighting chance. He just got unlucky.

All too often, weaker Jeopardy! players make all the safest plays in the world, doing everything they can to keep themselves from losing immediately. They are like football coaches who bring in the punting unit down 10 with five minutes left in the fourth. Yes, punting is safe. Yes, punting will keep you from outright losing in the next two minutes. But there is little difference between losing now and losing by the end of the game. If there is only one chance to win–to go for it on fourth down–you have to take it. And if there is only one way to beat Brad and Ken–to bet it all on a Daily Double and hope for the best–you have to make it a true Daily Double.
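The wager logic above reduces to a simple expected-value comparison. All of the probabilities below are hypothetical guesses of mine, not real Jeopardy! statistics, but they show why the all-in play can dominate the safe one:

```python
# Illustrative probabilities -- my own guesses, not actual Jeopardy! data.
p_right = 0.6           # chance Roger answers the Daily Double correctly
p_win_with_lead = 0.25  # chance a commanding lead holds up against Brad and Ken
p_win_safe = 0.05       # chance of beating both after a safe, small wager

# Missing a true Daily Double loses outright, so the win paths multiply.
p_win_all_in = p_right * p_win_with_lead

print(f"Safe wager:        {p_win_safe:.0%} chance to win")
print(f"True Daily Double: {p_win_all_in:.0%} chance to win")
```

Under these (assumed) numbers, going all-in triples the win probability even though it usually ends in an early exit, which is exactly the football fourth-down logic: losing now and losing later are worth the same.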

Edit: Roger Craig pretty much explicitly confirmed that this was the reason for his Daily Double strategy on the following night’s episode. Also, this “truncated punishment” mechanism has real-world consequences, such as the start of war.

Edit #2: Julia Collins is in the midst of an impressive run, having won 14 times (the third-longest streak of all time) and earned more money than any other woman in regular play. She is also fortunate that many of her opponents are doing very dumb things, like betting $1000 on a Daily Double that desperately needs to be a true Daily Double. People did the same thing during Ken Jennings’ run, and it is mind-bogglingly painful to watch.