Category Archives: Uncategorized

Some Thoughts on The Force Awakens (Spoilers)

Massive spoilers below…











1) I suspect some viewers might roll their eyes at the fact that the galaxy is still in the middle of a civil war, but this is somewhat realistic. Civil wars tend to last a loooong time. The civil war in Afghanistan, for example, has been going on since 1978. One could argue that Korea has been in civil war for the last 65 years. (The Force Awakens makes it seem like both the Republic and First Order control and govern territory, like North and South Korea.) The good news, if there is any, is that political scientists have a pretty good idea why civil wars take forever to end.

2) What a strange world we live in where James Bond has more screen time in a Star Wars film than Luke Skywalker. (Daniel Craig is the stormtrooper that Rey pulls the Jedi mind trick on.)

3) The trailers spoiled Han Solo’s death. When Kylo Ren pulls out his lightsaber on the bridge, we have yet to see Ren’s battle with Finn in the snow, a battle the trailers had already shown us. This means that Ren can’t be giving himself up here, and it would be weird if he simply re-holstered his lightsaber.

4) At the end of Return of the Jedi, Luke seems to believe Vader can go home and everything will be okay. Similarly, Mr. Solo…I mean, Han…seems to think that Kylo Ren can return home and everything will be okay. Their best case scenario is life imprisonment, and execution seems much more likely.

5) Perhaps the most unrealistic thing about A New Hope and The Force Awakens is that, within a few scant hours of receiving intelligence about the big evil weapon, the rebels have some genius plan to destroy the facility. The only way this could happen is if the vulnerability is obvious. But if the vulnerability is obvious, why don’t the bad guys spot it and fix the problem?

6) Also, when will the bad guys learn that sinking massive amounts of capital into one super weapon is not a good investment strategy?

7) How can Han manually leave light speed within a planet’s atmosphere? Given the speed involved and the small window, this is basically impossible.

Peace Science Presentation!

Brad Smith and I are excited to be presenting our paper on sanctions (conditionally accepted at ISQ) at Peace Science today. Check out the manuscript here or see the slides here. See you at 3 pm in 218 Houston!

How to Remove Beamer Navigation Buttons

TL;DR: Put \setbeamertemplate{navigation symbols}{} in your preamble.

Presentation slides should be minimalist—the more the viewer has to scan, the more time he will take looking at the slide, and the less time he will spend actually listening to you. Minimalism is learned, and it is something I still struggle with. I’m getting better, but I can still improve.

Today, though, I’m taking a simple step to simplify the rest of my slides forever: I’m removing Beamer’s unnecessary navigation buttons.

What navigation buttons? These navigation buttons:


You have almost certainly seen these before. In fact, there is a chance you put them into your Beamer slides without actually knowing what they do. (I spent a good 18 months using Beamer without ever experimenting with them.) The buttons allow you to navigate between slides, subsections, and sections of your presentation.

For my money, these buttons aren’t particularly useful. Most people use clickers for presentations, which rules out the buttons entirely. Even if you are working from the laptop, you can navigate slides using the left and right arrow keys. Meanwhile, jumping between subsections or sections is usually too disorienting to work efficiently.

Indeed, I have seen someone click navigation buttons during a presentation exactly once—and that was only because the person evidently did not know you could (more efficiently) use the right arrow key instead.

So, in sum, I hate navigation buttons. If you never use them either, then they have no reason to be in your slides. They are just taking up room.

Fortunately, the fix is simple. In your preamble (or immediately below your \begin{document} command), simply add the following line of code:

\setbeamertemplate{navigation symbols}{}
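For context, here is what a minimal, self-contained Beamer file looks like with the line in place (the theme choice and frame text are just placeholders):

```latex
\documentclass{beamer}
\usetheme{Madrid} % any theme; navigation symbols appear by default

% Remove the navigation symbols from every slide
\setbeamertemplate{navigation symbols}{}

\begin{document}

\begin{frame}{A Minimal Slide}
  No navigation buttons in the bottom-right corner.
\end{frame}

\end{document}
```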

Now your slides will look like this:


Much cleaner! Unless the navigation buttons somehow turn out to be extremely handy after all, I’m taking them out of all my future presentations.

And if “Arms Treaties and the Credibility of Preventive War” sounds too scintillating to ignore, you can see the full presentation here and read the paper here.


I am a political scientist who studies war, nuclear proliferation, and terrorism (mostly) using formal models. Currently, I am an assistant professor in the University of Pittsburgh’s Department of Political Science. Before that, I was a Stanton Nuclear Security Postdoctoral Fellow at Stanford’s Center for International Security and Cooperation. I received a PhD from the University of Rochester in 2015.

If you want to know more, you can check my CV page. You can also email me.

Penalty Kicks Are Random

Here’s a quick followup to my post on the game theory of penalty kicks.

During today’s World Cup match between Switzerland and France, Karim Benzema took a penalty kick versus Swiss goalkeeper Diego Benaglio. Benzema shot left; Benaglio guessed left and successfully stopped the shot. Immediately thereafter, the ESPN broadcasters explained why this outcome occurred: Benaglio “did his homework,” insinuating that Benaglio knew which way the kick was coming and stopped it appropriately.

This is idiotic analysis for two reasons. First is the game-theoretic issue. It makes no sense for Benzema to be predictable in this manner. Imagine for a moment that Benzema had a strong tendency to shoot left. The Swiss analytics crew would pick up on this and tell Benaglio. But the French analytics crew can spot this just as easily. At that point, they would tell Benzema about the problem and instruct him to shoot right more frequently. After all, the way things are going, the Swiss goalie is going to guess left, which leaves the right wide open.

In turn, to avoid this kind of nonsense, the players need to be randomizing. The mixed strategy algorithm gives us a way to solve this problem, and it isn’t particularly laborious. Moreover, there is decent empirical evidence to suggest that something to this effect occurs in practice.
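To illustrate how the indifference conditions pin down each player’s mixing probabilities, here is a sketch of a 2x2 penalty kick game. The scoring probabilities are hypothetical numbers chosen for illustration, not estimates from real matches:

```python
# Mixed strategy equilibrium of a 2x2 penalty kick game, solved via the
# indifference conditions. Entries are the kicker's scoring probabilities;
# all numbers are hypothetical illustrations, not real data.

def penalty_kick_equilibrium(score):
    """score[i][j]: probability the kicker scores when the kicker plays
    i (0 = left, 1 = right) and the keeper dives j (0 = left, 1 = right)."""
    (a_ll, a_lr), (a_rl, a_rr) = score
    denom = a_ll - a_lr - a_rl + a_rr
    # Kicker mixes so that the keeper concedes at the same rate either way.
    p_kick_left = (a_rr - a_rl) / denom
    # Keeper mixes so that the kicker scores at the same rate either way.
    q_dive_left = (a_rr - a_lr) / denom
    return p_kick_left, q_dive_left

score = [[0.50, 0.90],   # kicker left:  vs. keeper (left, right)
         [0.95, 0.40]]   # kicker right: vs. keeper (left, right)

p, q = penalty_kick_equilibrium(score)
print(f"Kicker shoots left {p:.1%} of the time; keeper dives left {q:.1%}.")

# Sanity check: at q, the kicker scores at the same rate from either side,
# so he has no profitable deviation, and likewise for the keeper.
left_value = q * score[0][0] + (1 - q) * score[0][1]
right_value = q * score[1][0] + (1 - q) * score[1][1]
assert abs(left_value - right_value) < 1e-9
```

The key point is that each player’s mixture depends only on the *opponent’s* payoffs, which is exactly why a predictable tendency like “Benzema favors left” cannot survive in equilibrium.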

The second issue is statistical. Suppose for the moment that the players were not playing equilibrium strategies but were still not stupid enough to always take the same action. (That is, the goalie sometimes dives left and sometimes dives right, while the striker sometimes aims left and sometimes aims right. However, the probabilities do not match the equilibrium.) Then we only have one observation to study. If you have spent even a day in a statistics class, you know that one observation does not allow us to differentiate between the following:

  1. a player who successfully outsmarted his opponent
  2. a player who outsmarted his opponent but got unlucky
  3. a player who got outsmarted but got lucky
  4. a player who got outsmarted and lost
  5. players playing equilibrium strategies

I can’t think of a compelling reason to make anything other than (5) the null hypothesis in this case. Jumping to conclusions about (1), (2), (3), or (4) is just bad commentary, pure and simple.

The embarrassing thing about this kind of commentary is that it is pervasive and could be reasonably stopped with just a tiny bit of game theory classroom experience. Even someone who watched only the first 58 minutes of my Game Theory 101 playlist (up to and including the mixed strategy algorithm) could provide better analysis.

Memes from My Civil War Class

My class on civil wars is about to wrap up (YouTube playlist here, to be completed later this week). To keep a dark subject matter a little bit lighter, I sprinkled a few /r/AdviceAnimals-style memes throughout my lectures. All of them are below, with their appropriate references.

You’re Gonna Have a Bad Time

Despite the explicit warning, I know a handful of people started it the day before it was due. I think a lot of them dropped the class shortly thereafter.

Lazy College Senior

Explicit warnings only work when people are there to hear them.

Grinds My Gears and Actual Advice Mallard


Again, despite the explicit warnings, I had a few midterms say that Rationalist Explanations for War tells us that war is irrational. (Some of these were from otherwise great midterms, so I wonder if I was just being trolled.)

Annoyed Picard

In reference to the de-Ba’athification of Iraq.

The Most Interesting Man in the World

In reference to the King et al paper on Chinese Internet censorship.

Good Guy Ukraine

The timing really couldn’t have been any better.

Lame Pun Raccoon

In reference to Viktor Yushchenko and Viktor Yanukovych. This one was my favorite.

Good Guy Putin/Scumbag Putin


Again, the timing was impeccable.

Captain Hindsight and Men’s Wearhouse Guy


In reference to UNSCR 1973, which authorized a no-fly zone over Libya. Russia and China both abstained from the vote despite having veto power and publicly deriding the resolution, perhaps because they were voting strategically.

Lame Pun Raccoon

In reference to Bombshell by Mia Bloom. This is my second favorite, and I really wish I could claim that the joke was mine.

MPSA 2014 Presentation: War Exhaustion and the Stability of Arms Treaties

If you are interested in nuclear weapons and negotiations with Iran, consider my panel at MPSA. The panel title is “Models of Violence” and will be on Thursday at 8:30 am. Here’s the abstract:

Why are some arms treaties broken while others remain stable over the long term? This chapter argues that the changing credibility of launching preventive war is an important determinant of arms treaty stability. If preventive war is never an option, states can reach settlements that both prefer to costly arms construction. However, if preventive war is incredible today but will be credible in the future, a commitment problem results: the state considering investment must build the arms or it will not receive concessions later on. Thus, arms treaties fail under these conditions. The chapter then applies the theoretical findings to the Soviet Union’s decision to build nuclear weapons in 1949 and Iran’s ongoing nuclear program today. In both instances, war exhaustion made preventive war incredible for the United States, but lingering concerns about future preventive war caused both states to pursue proliferation.

You can download the full paper here.

Misconceptions about the Syrian Civil War

If I had to guess what the three most common explanations are for the Syrian Civil War, I would go with:

  1. Ethnic fractionalization
  2. Economic inequality
  3. The Arab Spring

The problem is, none of these are good explanations. This post explains why.

First, some background. “Rationalist Explanations for War” is one of IR’s most-cited articles from the past twenty years, and for good reason. In it, James Fearon shows that the costs of war ensure that a range of settlements mutually preferable to war always exists. The takeaway point is very simple: you can have massive grievances against a rival, but those grievances do not explain why you go to war. Many countries have internal strife of this nature. Very few of them actually resolve their problems on the battlefield. After all, the parties could implement whatever the expected end result of the fighting would be before the war starts. No one has incentive to fight at that point, since they would receive an identical outcome in expectation but suffer the costs of war (not to mention risk death).
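Fearon’s point is easy to see with numbers. In the standard divide-the-dollar setup, if state A wins a war with probability p and fighting costs the sides c_A and c_B, then any division giving A between p - c_A and p + c_B beats war for both. The figures below are illustrative, not estimates:

```python
# Fearon's bargaining range in a simple divide-the-dollar model.
# p: probability state A wins a war; c_a, c_b: each side's war costs.
# All numbers are illustrative.

def bargaining_range(p, c_a, c_b):
    """Return the interval of peaceful divisions x (A's share of a pie
    normalized to 1) that both sides prefer to fighting."""
    a_war_value = p - c_a          # A's expected war payoff
    b_war_value = (1 - p) - c_b    # B's expected war payoff
    # A accepts any x >= p - c_a; B accepts any x <= p + c_b,
    # since B keeps 1 - x and needs 1 - x >= (1 - p) - c_b.
    return a_war_value, 1 - b_war_value

low, high = bargaining_range(p=0.6, c_a=0.1, c_b=0.1)
print(f"Any split giving A between {low:.2f} and {high:.2f} beats war.")
```

As long as war is costly (c_a + c_b > 0), the range is nonempty, which is why grievances alone cannot explain fighting.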

So what does this have to say about the standard explanations for the Syrian Civil War?

Ethnic Fractionalization
Syria’s population is 60% Sunni and 12% Alawite. The Alawites (i.e., Bashar al-Assad) are in power. War allegedly started because of this massive disparity.

This is a bad explanation for two reasons. First, ethnic fractionalization in Syria has existed all along. So if it caused the war in 2011, why did it not cause the war in 2010, 2009, 2008, or 2007? You can’t explain variation (peace/war) with a constant (fractionalization), yet this is exactly what this argument attempts to do.

Second, fractionalization is only a problem because of political repression. The United States is 63% White and 13% African American with an African American in power but is nowhere near war because of the lack of oppression. (Technically Obama is half-half, but he identifies as African American.) So if ethnic fractionalization leading to oppression caused the war, you are still left trying to explain why Assad didn’t simply relax the extent of oppression. The majority Sunni population would be pacified, and Assad wouldn’t be risking his life fighting a war. Both sides would appear better off.

Economic Inequality
Economic inequality in Syria is bad. In the latest data I could find, Syria’s Gini coefficient is .358 (2004, World Bank). War allegedly started because the impoverished had grievances.

This is a bad explanation for the same two reasons as above. First, this inequality has persisted for a long time. It’s hard to explain why war did not start in 2010, 2009, 2008, or 2007 but did in 2011. Second, if inequality was such a big deal, why didn’t Assad simply throw money at the impoverished groups? After all, those suffering are fighting (in theory) for better economic opportunities. Assad could just give them those opportunities, avoid the bloody mess, and not be risking his life. Again, all sides would appear better off.

Also, it’s worth noting that the United States’ Gini coefficient is .45 (2007, World Bank), making the U.S. more unequal than Syria.

The Arab Spring
The Arab Spring provides a better explanation than the first two because it did not exist in 2007, 2008, 2009, or the first eleven months of 2010 but did exist after that point. Consequently, variation in the presence of the Arab Spring can explain variation in the peace/war outcome.

On the other hand, for the same reasons as above, there is still a question of why the Assad regime couldn’t peacefully appease the protesters’ demands. In fact, Qatar did something to that effect, giving raises to key groups (including 120% increases to military officers) to preempt the need to protest.

The Simplest Explanation
The simple explanation of the Syrian Civil War is as follows. The Arab Spring acted as a coordination mechanism and/or allowed disenfranchised groups to overcome their collective action problem. This gave the protesters a sudden spike in military power. For Assad to resolve the tensions, he would have to credibly commit to providing concessions in the long term. However, once the protesters all went home and the Arab Spring coordination effect died, he would no longer have reason to continue giving those concessions. So the protesters became rebels, knowing that war and regime change were the only way to secure concessions.
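A stylized two-period version of this commitment problem, with made-up numbers, shows why promises of future concessions cannot buy the rebels off:

```python
# Stylized two-period commitment problem behind the preventive war logic.
# Rebels are strong today (p1) but weak tomorrow (p2) once the Arab Spring
# coordination effect fades. All numbers are made up for illustration.

p1, p2 = 0.9, 0.1   # rebels' probability of winning a war today vs. tomorrow
c = 0.05            # rebels' per-period cost of fighting
delta = 0.9         # weight on the second period

# Fighting today locks in the rebels' strong-period expected share in
# both periods (winning means regime change).
war_payoff = (1 + delta) * (p1 - c)

# The best peaceful deal: the whole pie today, but tomorrow the regime
# reneges down to the rebels' weak-period war value, since nothing forces
# it to keep paying once the protesters go home.
best_credible_peace = 1 + delta * (p2 - c)

print(f"War: {war_payoff:.3f} vs. best credible peace: {best_credible_peace:.3f}")
if war_payoff > best_credible_peace:
    print("No credible offer satisfies the rebels: preventive war.")
```

When the power shift (p1 versus p2) is large relative to the costs of fighting, no offer the regime can credibly sustain matches what the rebels can lock in by fighting now, so war occurs despite its inefficiency.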

The Syrian Civil War is, in effect, a preventive war.

This post is based on a lecture I produced for my Civil Wars MOOC, seen below:

Bluffing, Arms Treaties, and Preventive War

I have a new chapter from my book project available. Here’s the abstract:

With complete information, rising states internalize declining states’ threats to launch preventive war. If that threat is credible, they do not pursue arms programs to avoid conflict. If that threat is incredible, declining states preemptively engage in bargaining to override the need to build those weapons, extracting the surplus in the process. Either way, no arms construction occurs. This paper investigates how negotiations work when that threat to intervene is uncertain. When rising states believe their rivals are strong, weak declining states can convince rising states not to build without offering any concessions by mimicking the strong type. When rising states are skeptical, inefficiency prevails. To keep the weak types honest, rising states sometimes attempt to build arms after receiving unencouraging signals. Weak types allow the power shift to transpire. Strong types respond with preventive war. The results indicate that many “preventive” wars are the result of information problems, not commitment problems.

You can read the full chapter here. It has a number of important policy implications about Israeli behavior toward Iran’s nuclear program. I think I will write a post on those next week.

Robustness Checks in Formal Models

One thing I really like about large-n empirical papers is their ability to run robustness checks. Statistical models only produce the ideal results if the author captures the correct data generating process. For example, suppose a researcher theorizes that states with larger economies have higher chances of winning wars. If the size of economies and regime type are the only things that matter for winning a war, then you would want your model of war winning to just include economic size and regime type.

However, someone might object under the belief that industrial capacity matters as well. It might be difficult to know who is correct, but fortunately there is an easy solution: just run both models. If the size of the economy is positively correlated with winning wars in both cases, then the objection is irrelevant, the scholars can agree to disagree, and we can go back to focusing on the economy.

So despite only focusing on one model, empirical researchers usually include a few robustness checks within a paper and often include a much larger online appendix with yet more robustness checks. This should be applauded, as it gives us more confidence that the result is correct.

Yet formal theorists often fail to make such robustness checks–even though the problem is the same one empiricists face. Indeed, formal theorists tend to give us the result of one model. But that model is essentially a knife-edge case of a greater family of models, one in which the order of moves is reversed, players have additional moves, and information operates differently. And just like the empirical problem, it is very difficult for formal theorists to “know” that their version of the model is the correct game that real world actors play. Why, then, should we privilege the single version that the author presents? This is an especially important question given that authors have incentive to present the model with the most interesting results even if those results completely disappear if the author tweaks the assumptions slightly. (Note, again, that empiricists face the same incentives.)

The answer, of course, is that we should not. We should expect formal theorists to think long and hard about the models they create and whether they actually represent a broad, robust finding.

For example, consider my work on nuclear nonproliferation agreements. I show that potential nuclear powers are always willing to accept nonproliferation settlements. That is an extremely broad and strong claim. And, dangerously, it comes from a very simple bargaining game. Why should you trust my results?

If that is all I presented to you, you probably should not. Observers of American/Iranian negotiations know that there are a lot of complexities to this type of bargaining. Consequently, I have received a large number of questions about how the results would change if I tweaked certain assumptions. I collected most of these in the back of my brain in case someone were to ask me those questions again.

But then my presentation at the annual Peace Science Society conference approached. For those unaware, Peace Science tends to slant very empirical. So as I was thinking about how to present the paper, I began considering how a strict empiricist would present the paper. She would probably start with a bit of theory, show the results of the main model, demonstrate robustness (or at least say “this is robust to…”), and then perhaps talk about a couple of cases if time permitted.

Theorists and empiricists may be different in many ways, but I think it is telling that the previous sentence could apply equally to both a theoretical and an empirical presentation. Yet theorists almost always completely skip the robustness step. This time, I did not. So instead of saying “bargaining works in the model I constructed,” I said “bargaining works in the model I constructed and in models with the following different assumptions.” I then showed a slide with the following:

  • Prior investment in nuclear research
  • Prestige
  • Punishment for reneging
  • Negative externalities
  • Non-binary power shifts
  • Nondeterministic proliferation
  • Sanctions
  • Bargaining over objects that influence future bargaining power
  • Non-common discount factors
  • Imperfect monitoring

This is much, much, much stronger than just saying “hey, bargaining works.” I think it won over a few people in the crowd, and I received many comments after the presentation about how it was a nice touch.

All of this is to say that we really should be making robustness checks of our formal models both in our papers and in our presentations. Why isn’t this commonplace already? There are two limiting factors. First, solving alternative model specifications is a time-consuming task. An empiricist can just add a few robustness check variables, press a button, and be done. (This assumes that such data already exist. If not, they are in trouble.) Theorists often have to re-solve the entire model, which can take days.

Second, it is space-consuming. I mentioned ten robustness checks above. The paper takes ten and a half pages addressing all of them. I can get away with this because I am writing a book; I would be in deep trouble if this were a journal article and I only had 10,000 words to work with.

Still, I don’t think either of these are particularly good excuses. Regarding time: yes, spending time doing these things is annoying. But it is an investment in getting your result right, and you should be willing to pay it. Moreover, if the central result you are finding is decent, then the logic should intuitively carry over in many of the cases. For example, Maya Sen and I have a working paper on judicial nominations that shows under certain conditions Senators randomly reject nominees despite not having any good reason to do so. We use a very simple model, yet (much to our surprise) the results immediately carry over to much more complicated setups, and we can explain why without having to do any more math.

Regarding space: this is a poor excuse. Empiricists solve the space problem by creating online appendices. When was the last time you ever saw an online appendix for a formal article that wasn’t just a proof of the model in the paper? There is no reason theorists can’t copy this solution.

Bottom line: empiricists and theorists face similar robustness challenges. We need our models to be robust, and the only way we can effectively communicate that in scholarly work is to conduct robustness checks. Empiricists do a good job here; theorists have a lot of room for improvement.