I am a PhD candidate at the University of Rochester specializing in international relations and formal theory. I am currently on the academic job market. If you want to know more about me, feel free to look around, check out my CV, email me at wspaniel@ur.rochester.edu, or use the links below as a cheat sheet:


My APSA 2014 Presentation: Policy Bargaining and International Conflict

If you are looking for something to do on Friday from 10:15 to noon, head over to the Marriott Jefferson room to see my presentation on Ideology Matters: Policy Bargaining and International Conflict. It is based on a joint project with Peter Bils. Here is the abstract:

Studies of bargaining and war generally focus on two sources of incomplete information: uncertainty about the probability of victory and uncertainty about the costs of fighting. We introduce a third: uncertainty about ideological preferences over a spatial policy dimension. Under these conditions, standard results from the bargaining model of war break down: peace can be inefficient and it may be impossible to avoid war. We then extend the model to allow for pre-play cheap talk communication. Whereas incentives to misrepresent normally render cheap talk irrelevant, here communication can produce peace and ensure that agreements are efficient. Moreover, peace can become more likely when the proposer becomes more uncertain about the opposing state. Our results indicate that one major purpose of diplomacy during a crisis is simply to communicate preferences and that such communication can be credible.

If you can’t make it, you can download the paper here or watch the presentation below:

Multi-Method Research: The Case for Formal Theory

Hein Goemans and I have collaborated on a new research note on formal theory and case studies. Here’s the abstract:

We argue that formal theory and historical case studies, in particular those that use process-tracing, are extremely well-suited companions in multi-method research. To bolster future research employing both case studies and formal theory, we suggest some best practices as well as some (common) pitfalls to avoid.

Since the research note is short by nature, I won’t spend too much extra space discussing it here. You’d be better off skimming or reading the note itself. In essence, though, we argue that formal theory and case studies are natural methodological allies. We also advocate for seriously incorporating a model’s cutpoints into the informal analysis. Manuscripts that combine formal theory with case studies too often spend considerable time developing the model only to ignore it when they begin discussing substance. The two should be tied together.

Also, and something that I stress heavily in my book project on nuclear proliferation, we must be very careful in how we interpret those cutpoints. For example, a common fallacy takes the following form: the model says w occurs if x > y + z. The case study then goes to great lengths to prove that y was close to 0 or negative, therefore w should occur. This overlooks the values of x and z, however—even with y equal to 0, the inequality could still fail depending on the relationship between the other parameters. Put differently, and with certain notable exceptions detailed in the research note, we must think about the cutpoints holistically.
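To make the fallacy concrete, here is a minimal sketch in Python. The variables are abstract placeholders, just as in the text, and the numbers are invented purely for illustration:

```python
# The model's cutpoint: w occurs if and only if x > y + z.
def w_occurs(x, y, z):
    return x > y + z

# Establishing that y is near zero does not settle the question;
# the relationship between x and z still decides the outcome.
print(w_occurs(x=5, y=0, z=3))  # True: the inequality holds
print(w_occurs(x=2, y=0, z=3))  # False: y = 0, yet w does not occur
```

Both calls set y to zero, but only one produces w, which is exactly why the cutpoint must be read holistically.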

Again, you can read the full note here.

Mario Kart 8’s Most Popular Tracks

Mario Kart 8 has consumed most of my entertainment hours since it came out a couple of months ago. Its online play is great. When you queue, the game randomly gives you three (of thirty-two possible) tracks to pick from, or you can select random if none are to your liking. Social scientist that I am, I saw an obvious data-collection opportunity. So for the last few weeks, I have painstakingly charted every single choice I have observed. This allowed me to create a rough ranking system of all the tracks in the game. Which track do people like the most? The least? Check below:


The numbers reflect the percentage of the time I observed players picking any given track, not the track the game randomly selected from those ballots. For example, over the many, many times Sunshine Airport randomly popped up in the queue, players selected it 48.3% of the time. The tiers simply cut the data into a top bucket of four and four other buckets of seven.

There are a number of important caveats to the image, so please read what follows before boldly declaring that Bone-Dry Dunes is the worst thing Nintendo has ever created.

  • I don’t claim that this is the be-all, end-all to Mario Kart track popularity. Rather, without any other metrics to rank the courses, I think that this is a useful first-cut at the question.
  • While I gathered a lot of data to do this, I am only one man. The number of potential picks ranges from 113 for Water Park to 258 for Bone-Dry Dunes. We should expect such variation from the random queue selection system. However, it also means that some of these percentages are more secure than others. I plan on continuing to collect data over time.
  • Be careful about making pairwise comparisons. Based on what I have, it is reasonable to conclude that players prefer GBA Mario Circuit (41.8%) to Electrodome (27.7%), but it is not reasonable to conclude that players prefer Electrodome to Mario Kart Stadium (27.5%).
  • With people duo queuing, I included both votes. I can see why people might think this should only count as one, but the choice from a duo queue (in theory) reflects the preferences of two people. So I count it twice. It would be very difficult to count them as one vote anyway; I would have to keep tabs on who is submitting at the same time, which is difficult when I am trying to count so many things at once.
  • I collected the data as I rose from 2000 to 3100. So if you believe that this group’s preferences differ from those of players at other ratings, this may not be the image you wish to see.
  • I did not count my votes. We want a measure of what people like the most, not what I like the most.
  • I excluded “forced” votes that occur if players take more than the allotted time to make a selection. These votes are pure noise anyway.
  • An active vote for random counts as a vote against everything else. For example, suppose the choices were Yoshi Valley, Royal Raceway, and Music Park. Three players select Yoshi Valley and one picks random. Then Yoshi Valley has received three of four votes and the other two tracks have received none of the four. In other words, the “random” vote doesn’t magically disappear from the denominator in the data tabulation.
  • I only played worldwide games.
  • These were all races. No battles.
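One way to quantify how “secure” each percentage is would be a confidence interval on the pick rate. The sketch below uses a Wilson score interval; since the post reports only the 48.3% rate for Sunshine Airport and not the raw counts, the counts here (100 picks out of 207 appearances) are hypothetical:

```python
import math

def wilson_interval(picks, appearances, z=1.96):
    """95% Wilson score interval for a track's observed pick rate."""
    p = picks / appearances
    denom = 1 + z**2 / appearances
    center = (p + z**2 / (2 * appearances)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / appearances + z**2 / (4 * appearances**2)
    )
    return center - half, center + half

# Hypothetical counts: roughly 48.3% of 207 appearances.
low, high = wilson_interval(100, 207)
print(f"{low:.3f} to {high:.3f}")  # roughly 0.416 to 0.551
```

With only one observer’s data, intervals like this mainly show why pairwise comparisons between close percentages are risky.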

And now for a little bit of analysis:

  • I did some fancy statistical tests to see whether a variety of track qualities (length, difficulty, newness) determine player preferences. All of the results were null. So whatever is driving these votes is highly idiosyncratic.
  • The new Rainbow Road was very disappointing. It was the last track I played when I went through the game for the first time. I was very excited until all I found was boring turn after boring turn.
  • Some might also describe the original N64 Rainbow Road as boring turn after turn, but it seems that Nintendo made a smart decision to turn the course into a straight-shot and not a five lap race.
  • I question Nintendo’s wisdom in putting Music Park, Grumble Volcano, Sherbet Land, and Dry Dry Desert in the game. What’s the point of having classic tracks if no one wants to play them?
  • To be fair, perhaps players actually wanted to see these tracks and just failed in the execution. But that still doesn’t explain why you would put Grumble Volcano back in the game. Its main course feature is that lava randomly shoots up and kills you for no good reason. I understand Mario Kart is full of randomness, but let that come from interactive item blocks and not from the computer.
  • I feel really bad for whoever designed Bone-Dry Dunes.

See you in the queues.

Kindle Unlimited and the Economics of Bundling

Today, Amazon announced Kindle Unlimited, a subscription service for $9.99 per month that gives buyers all-you-can-read access to more than 600,000 books. And it took, oh, five minutes before someone called this the death of publishing.

Calm down. This isn’t the end of publishing—it is a natural extension of market forces and is potentially good for everyone.

Amazon is taking advantage of the economics of bundling—selling multiple products at a single price regardless of how much the consumer uses each component. Bundles are all over the place: cable TV, Netflix, Spotify, and Microsoft Office are all examples. These business plans are pervasive because they work: they bring in a lot of money for their providers, and they leave consumers better off as well.

Wait, what!? How is it possible that both providers and consumers are better off by bundling? A while back, I too believed that this was insane and that bundles were a scam to get me to pay more money than I wanted to. (Why should I pay $1 for Home and Gardening when all I want is ESPN?) But then I read up on bundling and understood my folly.

An example will clarify things (and potentially amaze you, as it did me not too long ago). As usual, I will keep things simple to illustrate the fundamental logic without getting bogged down in unnecessarily complicated math. Imagine a world with only two books available for purchase:

Further, let’s assume that there are only two customers in the world. Let’s call them Albert and Barbara. Albert and Barbara have different tastes in books. Albert prefers Hunger Games to Game Theory 101; he would pay at most $4.99 to read Hunger Games but only $1.50 at most for Game Theory 101. Barbara has the opposite preference; she would pay at most $2.25 to read Hunger Games and $3.99 to read Game Theory 101. You might find the following graphical representation more digestible:


Finally, assume that the marginal cost of each book is $0.00. That is, once the book has been written, it costs $0.00 to distribute each book. This is a bit of an exaggeration, but it is close to reality for electronic books. However, it is definitely not true for physical books (printing, shipping, etc.). This distinction will be important later.

With all those preliminaries out of the way, consider how a seller should price those books in a world without bundling. There are two options. First, you can set a low price and capture the entire market. Second, you can set a high price; the book will sell fewer copies but make more money per unit.

Let’s apply that decision to Hunger Games. Selling at the low price means charging $2.25 so that both Albert and Barbara purchase it. (This is because Barbara’s maximum price for it is $2.25.) That brings in $4.50 of revenue. Alternatively, you could sell at a high price of $4.99. This ensures that only Albert will buy. But it also brings in $4.99 in revenue, which is more than if you had set a low price. So you would sell Hunger Games for $4.99.

Now consider the price for Game Theory 101. Selling at the low price means charging $1.50 so that both Albert and Barbara purchase it. (This is because Albert’s maximum price for it is $1.50.) That brings in $3.00 of revenue. Alternatively, you could sell at a price of $3.99. Only Barbara would buy it at this price. But it also nets $3.99 in revenue, which is more than if you had set a low price. So you would sell Game Theory 101 for $3.99. (Not coincidentally, if you click on the books above, you will find that they are priced like that in real life.)

Let’s recap the world without bundling. Hunger Games costs $4.99 and Game Theory 101 costs $3.99. The seller brings in $8.98 in revenue. Neither Albert nor Barbara benefits from this arrangement; Albert is paying $4.99 for a book that he values at $4.99, while Barbara is paying $3.99 for a book she values at $3.99.

Now for the magic of bundling. Suppose the seller offers a bundle of both books for $5.99. Who is willing to buy here? Albert values Hunger Games and Game Theory 101 at $4.99 and $1.50, respectively. Thus, he is willing to pay up to $6.49 for the pair. So he will definitely purchase the bundle for $5.99. In fact, he’s much happier than he was before because he internalizes a net gain of $0.50 whereas he had no gain before.

What about Barbara? She was willing to pay respective prices of $2.25 and $3.99. Consequently, she is willing to pay up to $6.24 for the pair. So she will also definitely purchase the bundle for $5.99. And similar to Albert, she is internalizing a net gain of $0.25, up from no gain before.

So Albert and Barbara both win. But so do the producers—rather than bringing in a total of $8.98, the producers now earn $11.98. Every. Body. Wins. (!)
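The arithmetic above can be checked with a short script. The reservation prices come straight from the example; the pricing rule simply tries each buyer’s maximum and keeps whichever price earns more:

```python
# Reservation prices from the example above.
valuations = {
    "Albert": {"Hunger Games": 4.99, "Game Theory 101": 1.50},
    "Barbara": {"Hunger Games": 2.25, "Game Theory 101": 3.99},
}

def best_a_la_carte(book):
    """Try each buyer's reservation price; keep the revenue-maximizing one."""
    best_price, best_revenue = 0.0, 0.0
    for price in (v[book] for v in valuations.values()):
        buyers = sum(1 for v in valuations.values() if v[book] >= price)
        if price * buyers > best_revenue:
            best_price, best_revenue = price, price * buyers
    return best_price, best_revenue

a_la_carte = sum(best_a_la_carte(b)[1] for b in ("Hunger Games", "Game Theory 101"))

# A buyer takes the bundle whenever the sum of reservation prices covers it.
bundle_price = 5.99
bundle_revenue = sum(
    bundle_price for v in valuations.values() if sum(v.values()) >= bundle_price
)

print(round(a_la_carte, 2), round(bundle_revenue, 2))  # 8.98 11.98
```

With more buyers or books you would search over more candidate prices, but the logic is identical: bundling earns more here because both readers’ combined valuations clear the bundle price.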

(Yes, I know that Kindle Unlimited costs $9.99 per month. If we added another book to this puzzle, we could get Albert and Barbara to want to pay that price. But that would require more math, and we don’t want more math.)

Why does this work? Bundling has two keys. First, as previewed earlier, the marginal cost of the products must be very small. If they were larger, those costs would make distributing more goods look comparatively less attractive. This would drive up the cost of the bundle and make it less attractive for the consumers, perhaps forcing them to prefer the a la carte pricing. That helps explain why book bundling is just now catching on; electronic books only cost server space whereas physical copies involve UPS.

Second, it helps when customer preferences are negatively correlated. This pushes everyone’s reservation price for the bundle closer together, which in turn makes the producer more likely to want to sell at the bundled price.

Before wrapping up, note that bundling has an important secondary effect for authors. The main takeaway here is that producers of the materials can make more money through bundling. This gives authors more incentive to create additional materials—an author who would otherwise only make $10,000 from a novel could now make, say, $15,000 instead. So an author on the fence about whether to produce the book is more likely to follow through. This further enhances consumer welfare because those buyers can now read a book that would otherwise not exist.

Finally, “producers” here has meant a combination of authors and Amazon. A skeptic might worry that Amazon will end up taking away all of the revenues. That may be an issue in the long run if Amazon becomes a monopoly, but the revenue share is more than fair for now. Indeed, Amazon is giving authors roughly $2 every time a Kindle Unlimited subscriber reads 10% of a book, which is substantial. And with Kindle Unlimited reaching more consumers than a la carte pricing would, writers can earn revenue from a larger share of readers.

If you want to know more about bundling, I highly recommend you read the Marginal Revolution post on the subject.

Calculate Day-of-Week Sales Averages on KDP

For the longest time, KDP aggregated all sales information by week. Now KDP has nice graphical breakdowns of daily sales. Naturally, I wondered if my sales averages differed significantly by the day of the week. I compiled an Excel spreadsheet to give me a quick answer. Apparently the day of the week does not have an impact for me, at least not in any significant way.

Still, I figured others would want to know the same information. As such, I did a little bit of extra work on the spreadsheet to make it usable for others. You can download it here. It is very simple to use. Just follow these four steps:

1) Select the tab in Excel that corresponds to the current day of the week. (For example, if today is Tuesday, use the Tuesday tab.)

2) Go to KDP’s sales dashboard. Use one of the pull down menus to open the last 90 days of sales. This will give you the most days to average over.

3) Copy each day of sales from the graph to the spreadsheet. This will require some work because you have to do it manually and need to pay close attention to the graph to make sure you are copying down the correct number.

4) Excel will automatically calculate each day of the week’s average sales.

Again, you can download it here. Let me know what you think.
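For those who prefer a script to a spreadsheet, the same day-of-week averaging can be sketched in a few lines of Python. The dates and unit counts below are made up; in practice you would paste in your own numbers from the dashboard:

```python
from collections import defaultdict
from datetime import date

# Hypothetical (date, units sold) pairs copied from the KDP dashboard.
sales = [
    (date(2014, 7, 7), 12),   # a Monday
    (date(2014, 7, 8), 9),    # a Tuesday
    (date(2014, 7, 14), 8),   # the next Monday
    (date(2014, 7, 15), 11),  # the next Tuesday
]

# Group units sold by weekday name, then average each group.
by_day = defaultdict(list)
for day, units in sales:
    by_day[day.strftime("%A")].append(units)

averages = {name: sum(units) / len(units) for name, units in by_day.items()}
print(averages)  # {'Monday': 10.0, 'Tuesday': 10.0}
```

Ninety days of data gives you roughly thirteen observations per weekday, which is why the 90-day view is the one worth copying.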


We Shouldn’t Generalize Based on World War I

As you probably already know, today is the 100th anniversary of the assassination of Archduke Franz Ferdinand, which would set off the July Crisis and then World War I. For the next few months, the media will undoubtedly bombard us with World War I history and attempt to teach us something based on it.

This is not a good idea. World War I was exceptional. To make generalizations based on it would be like making generalizations about basketball players based on LeBron James. LeBron is so many standard deviations away from the norm that anything you learn from him hardly carries over to basketball players in general. At best, it carries over to maybe one player per generation. The same is true about World War I. It was so many standard deviations away from the norm that anything you learn from it hardly carries over to wars in general. At best, it carries over to maybe one war per generation.

Anyway, the impetus for this post was a piece on Saturday’s edition of CBS’s This Morning, where a guest said something to the effect of “the lesson of World War I is that sometimes it is difficult to stop fighting once you’ve started.” (Apologies I don’t have the exact quote. They didn’t put the piece on their website, but I will update this post if they do.) I suppose this is true in the strictest sense–sometimes countries fight for a very long time. However, such long wars are rare events. Most armed conflicts between countries are very, very short.

To illustrate this, I did some basic analysis of militarized interstate disputes (MIDs) from 1816 to 2010–armed conflicts that may or may not escalate to full-scale war. If we are interested in whether fighting really begets further fighting, this is the dataset we want to analyze, since it represents all instances in which states started butting heads, not just the most salient ones.

So what do we find in the data? Well, the dataset includes a measure of length of conflicts. If fighting begets further fighting in general, we would expect to see very few instances of short conflicts and a much larger distribution of longer conflicts. Yet, looking at a histogram of conflicts by length, we find the exact opposite:


(I used the maximum length measure in the MIDs dataset to create this histogram. Because the length of a conflict can vary depending on who you ask, the MIDs dataset includes a minimum and maximum duration measure. By using the maximum, I have stacked the deck to make it appear that conflicts last longer.)

Each bin in the histogram represents 50 days. A majority of all MIDs fit into that very first bin. More than 90% fall into the first seven bins, or roughly a year in time. Conflicts as long as World War I are barely a blip on the radar. Thus, fighting does not appear to beget further fighting. If anything, it appears to do just the opposite.
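The binning itself is trivial to reproduce. The durations below are invented stand-ins (the real MIDs data is not reproduced here), but the 50-day bucketing matches the histogram:

```python
from collections import Counter

# Hypothetical conflict durations in days; the real data would come from
# the MIDs dataset's maximum-duration measure.
durations = [3, 10, 14, 30, 45, 60, 120, 200, 400, 1500]

BIN_WIDTH = 50
counts = Counter(d // BIN_WIDTH for d in durations)  # bin 0 = days 0-49

share_first_bin = counts[0] / len(durations)
print(share_first_bin)  # 0.5: half of this toy sample ends within 50 days
```

On the real data the concentration in the first bin is what carries the argument: most disputes end quickly.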

One potential confound here is that these conflicts end because one side is completely destroyed militarily. In that case, fighting begets further fighting but stops quickly because one side cannot fight any longer. But this is not true. Fewer than 5% of MIDs result in more than 1,000 casualties, and even fewer destroy enough resources to prohibit one side from continuing to fight.

So why doesn’t fighting beget further fighting? The simple answer is that war is a learning process. Countries begin fighting when they fail to reach a negotiated settlement, often because one side overestimates its ability to win a war or underestimates how willing to fight the opposing side is. War thus acts as a mechanism for information transmission–the more countries fight, the more they learn about each other, and the easier it is to reach a settlement and avoid further costs of conflict. As a result, we should expect war to beget less war, not more. And the histogram shows that this is true in practice.

Do not take this post to mean that World War I was unimportant. Although it was exceptional, it also represents a disproportionately large percentage of mankind’s casualties in war. It was brutal. It was devastating. It was ugly. But for all those reasons, it was not normal. Consequently, we should not be generalizing based on it.