I am a PhD candidate at the University of Rochester specializing in international relations and formal theory. I am currently on the academic job market. If you want to know more, feel free to look around, download my CV, email me at wspaniel@ur.rochester.edu, or use the links below as a cheat sheet:



Marshawn Lynch Was Optimal, But So Was a Quick Slant

It seems that social media has lashed out at Pete Carroll for not giving the ball to Marshawn Lynch on second and goal with less than a minute to go. The idea is that Marshawn Lynch is #beastmode, an unstoppable force that would have assuredly scored and won Super Bowl XLIX.

The problem is, the argument makes absolutely no sense from a game theoretical standpoint. The ability to succeed on any given play is a function of the offense’s play call and the defense’s play call. Call a run against a pass blitz with deep coverage, and the offense is in great shape. Run deep routes versus that same defense, though, and you are in trouble. Thus, once you strip everything down, play calling is nothing more than a very complex guessing game. The Seahawks want to guess the Patriots’ play call and pick the correct counter. Vice versa for the Patriots.

Game theory has killed countless trees exploring the strategic properties of such games. Fortunately, there is a simple game that encapsulates the most important finding. It is called matching pennies:

The premise is that we each have a penny and simultaneously choose whether to reveal heads or tails. I win $1 from you if the coin faces match, while you win $1 from me if the coin faces mismatch.

You should quickly work out that there is a single best way to play the game: both of us should reveal heads 50% of the time and tails 50% of the time. If any player chooses one side even slightly more often, the other could select the proper counter strategy and reap a profit. Randomizing at the 50/50 clip guarantees that your opponent cannot exploit you.
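To see why 50/50 is the only safe mix, here is a quick sketch in Python. The payoff numbers come straight from the game above; the function names are mine.

```python
# Matching pennies: the matcher wins $1 on a match, loses $1 on a mismatch.
# A quick check that 50/50 is the only unexploitable mix.

def matcher_payoff(p_heads_matcher, p_heads_mismatcher):
    """Expected payoff to the matching player, given each side's heads rate."""
    match_prob = (p_heads_matcher * p_heads_mismatcher
                  + (1 - p_heads_matcher) * (1 - p_heads_mismatcher))
    return 1 * match_prob - 1 * (1 - match_prob)

# If the mismatcher plays 50/50, every matcher strategy earns exactly 0:
for p in [0.0, 0.25, 0.5, 1.0]:
    assert abs(matcher_payoff(p, 0.5)) < 1e-9

# But lean toward heads (60/40) and the matcher profits by always playing heads:
print(matcher_payoff(1.0, 0.6))  # roughly 0.2 expected dollars per round
```

Deviate from the even mix by any amount and the best response against you earns a strictly positive expected profit.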

In terms of football, think of yourself as the offense and me as the defense. You want to mismatch (i.e., call a run play while I am defending the pass) and I want to match (i.e., defend the pass while you call a pass). What is interesting is that this randomization principle neatly extends to more complicated situations involving hundreds of strategies and counterstrategies. Unless a single strategy is always best for you regardless of what the other side picks, optimal play requires you to randomize to prevent your opponent from exploiting you.

What does this tell us about the Marshawn Lynch situation? Well, suppose it is so plainly obvious that Pete Carroll must call for a run. Bill Belichick, whom many see as the god of the football strategy universe, would anticipate this. He would then call a play specifically designed to stop the run. By that I mean an all-out run blitz, with linebackers completely selling out and cornerbacks ignoring the receivers and going straight for the backfield. After all, they have nothing to lose—the receivers aren’t getting the ball because Lynch is assuredly running it.

Of course, it doesn’t take much to see that this is also a ridiculous outcome. If the Patriots were to certainly sell out because the Seahawks were certainly handing the ball to Lynch, Pete Carroll would switch his strategy. Rather than run the ball, he would call for a pass and an easy touchdown. After all, a pass to a wide-open receiver is a much easier touchdown than hoping Marshawn Lynch can conquer 11 defenders.

Then again, Belichick would realize that the Seahawks were going to pass and would not sell out on his run defense. But then Carroll would want to run again. So Belichick goes back to defending the run. But then Carroll would pass. And Belichick would call for pass coverage. And so forth.

There is exactly one way to properly defend in this situation: randomize between covering a run and covering a pass. There is also exactly one way to properly attack in this situation: sometimes run the ball and sometimes pass it. This is the only way to keep your team from being exploited, regardless of whether you are on offense or defense.
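The unexploitable mixes are easy to compute from the indifference conditions of any 2x2 zero-sum game. Here is a sketch using made-up touchdown probabilities (my numbers, not anyone's scouting report):

```python
from fractions import Fraction as F

# Hypothetical touchdown probabilities for each offense/defense pairing:
# rows = offensive call, columns = defensive call. Invented for illustration.
P = {('run',  'run_d'): F(2, 10), ('run',  'pass_d'): F(9, 10),
     ('pass', 'run_d'): F(7, 10), ('pass', 'pass_d'): F(3, 10)}

# Offense runs with probability p chosen so the defense is indifferent:
#   P[run,run_d]*p + P[pass,run_d]*(1-p) = P[run,pass_d]*p + P[pass,pass_d]*(1-p)
p = (P['pass', 'pass_d'] - P['pass', 'run_d']) / (
    P['run', 'run_d'] - P['pass', 'run_d'] - P['run', 'pass_d'] + P['pass', 'pass_d'])

# Defense calls run coverage with probability q so the offense is indifferent:
q = (P['pass', 'pass_d'] - P['run', 'pass_d']) / (
    P['run', 'run_d'] - P['run', 'pass_d'] - P['pass', 'run_d'] + P['pass', 'pass_d'])

print(p, q)  # offense runs 4/11 of the time; defense covers the run 6/11

# Against this defensive mix, every offensive call scores at the same rate,
# so neither play call is a "mistake" in expectation:
value_run  = P['run', 'run_d'] * q + P['run', 'pass_d'] * (1 - q)
value_pass = P['pass', 'run_d'] * q + P['pass', 'pass_d'] * (1 - q)
assert value_run == value_pass
```

Note that the equilibrium run rate depends on the payoffs, not on gut feelings about #beastmode: make the run more effective and the offense runs more often, but it still must mix.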

Okay, so we have established that the teams should be randomizing. What does that say about the outcome of Super Bowl XLIX? Well, clearly the play didn’t work out for the Seahawks. But to judge the play call, we cannot reason from what actually happened. We can only reason about what would happen in expectation. And in expectation, passing was optimal in this situation.

If you aren’t convinced, imagine we all hopped into a time machine to second and goal with the knowledge of what happened. Would Pete Carroll call a run? Maybe. Would Bill Belichick sell out on the run? Maybe. But maybe not—Carroll might call a pass precisely because Belichick is anticipating him running the ball. We are back in the guessing game from before. And as before, the only way to solve it is to randomize.

That’s the magic of mixed strategy Nash equilibrium. Even if your opponent knows exactly how you are randomizing, there is nothing he or she can do to exploit you.

How to Get a Ball at a Game, AKA the Best Thing I Will Ever Write

Right before I left San Diego for Rochester, I wrote a post in one of the Los Angeles Angels’ fan message boards. On the surface, it explains how to catch baseballs at baseball games. In practice, it was a recap of the first 22 years of my life. It apparently struck a chord and popped up on the site’s front page later that night.

(Ironically, I wasn’t home when it was featured—I was at a Padres game.)

I run into it every year or so, and I end up drawing the same conclusion every time: even though it predates all the Game Theory 101 stuff by more than a year, it is the best thing I have ever written and probably the best thing I will ever write. As such, I am preserving it here so I will never lose it.



I have been an Angels fan since the tragedy known as the 1995 season. I grew up in the northern part of Los Angels (sic) County, so I don’t have a very good reason why I wear red instead of blue. It just is what it is. The downside was that I virtually never went to Angels games as a kid because my parents did not like sports and we lived pretty far away.

But the rare times I did go, I always dreamed of catching a ball—a foul ball, a batting practice ball, a home run ball, a ball flipped up to the stands by a groundskeeper, any ball. Of course, we always had cheap seats too far away to get anything during a game. And a batting practice ball? That would have required getting to the game early—and the bottom of the first inning does not qualify.

So I went through childhood with zero, zilch, nada. Undeterred, I went to college. Armed with my own car and my own money, I could go to a lot of games as early as I wanted to. Now I was bigger, faster, and stronger. And, dammit, I wanted a ball.

I kept striking out.

Junior year rolled around, and my then-girlfriend bought us tickets to a game. I took her to batting practice. Maybe my luck would change. Maybe I could get a ball. Maybe I could impress her.

And with one flip from a groundskeeper by the bullpen, it did.

Unfortunately, one isn’t satisfying. I thought it would be, but it’s definitely not. You get a rush from getting your first, and you immediately want to get another. So I kept going to batting practice in search of a second high.

It never came.

In college, I studied political science. I was introduced to a tool known as game theory midway through my junior year. Rather than trying to craft a more clever argument than the next guy, you can use game theory to construct models of the political interactions you are trying to describe. The neat part is that, once you have solved the game, your conclusions are mathematically true. If your assumptions are true, then the results must follow as a consequence.

The other cool part is that game theory is applicable to more than just political science. Life is a game. Game theory is just trying to solve it. The trick is figuring out how to properly model situations and what assumptions to make. Take care of those things, and you can find an answer to whatever question you want.

Baseball is a game, but so is hunting down baseballs as a fan. We all want to get them. The question is how to optimally grab one when everyone else is trying to do the same thing.

Fast forward to Opening Day of my senior year. I was standing there, hoping like hell a ball would find its way into my glove. If I had stayed there long enough, I am sure one eventually would have come right to me. But batting practice is short, and I would hate to get only one ball every 100 games I go to.

Then I noticed something a little revealing. It seemed like there would always be a couple of people who would get three or four balls every time I went to the ballpark. I would always hear people say “lucky” with a hint of disdain the second, third, and fourth times they caught a baseball. But let’s be honest—it would take a tremendous amount of luck to get four baseballs in a single game unless you were doing something everyone else wasn’t. You are lucky just to get one. But four? Skill.

That’s when the game theorist slapped the naïve young boy inside me. The people who were getting all of the balls weren’t game theorists, but they sure did understand the game being played better than everyone else there, myself included. I figured out that batting practice isn’t some sort of mystical game of luck—it’s a spatial optimization game. Spatial optimization games can be solved. I did some work, derived an equilibrium (game-theoretic jargon for “solved the game”), and came up with a plan. In sum:

Since then, I have never left a session of batting practice with fewer than three balls.

Why am I telling you this? After all, the more people who know the secret, the harder it will be for me to catch a ball.

Well, here is the sad part. It turns out that I am a half-decent game theorist, so the University of Rochester accepted me into their PhD program. I leave on Monday. Yesterday was my last game. But it was a successful day:

That’s Barbara, my favorite usher at Angel Stadium. I can’t count how many times I have heard her tell parents to stop dangling their five year olds over the railing trying to siphon a ball off a fielder. (It baffles me why parents take such a risk in the first place. I’m pretty sure it is because the parents want the ball for themselves more than they want it for their kids.) I couldn’t leave California without getting a picture with her.

What do I do with my collection? I don’t have one. During my initial college years of ball-catching failure, I read an article about the (presumed) record holder for most balls grabbed ever. He keeps all of them. I think he is a jerk. As a kid, it was my dream to get a ball. As an adult, getting a ball is a novelty—a story to relay to your friends, take pictures of, and write silly little posts about on baseball forums. After reading the article, I swore I would give the first ball I caught to a kid trying to live the dream.

That moment had to wait for my junior year. The groundskeeper flipped the ball into my glove. I showed it to my girlfriend and found a mother with her five year old son sitting a few rows behind us. I asked if she would take a picture of us with the ball. She obliged. Although he was clueless, her poor son had no hope of getting a ball. So I thanked her for snapping the photo and tossed the ball over to her son. If that wasn’t the best day of his life so far, it has to rank pretty high.

I have kept that tradition alive all the way to today. As I pack my car this weekend, there won’t be any baseballs in it. I have no batting practice ball collection. I haven’t kept a single ball. I will never be able to make my dream as a kid come true—it’s too late for that—but I can get close every time I toss a ball to someone who reminds me of me as a kid. Perhaps that will be my son one day.

And if you thought my days of getting baseballs were over, think again. The Angels play the Rangers in Arlington on Thursday. I will be driving through Texas that day. Rangers fans won’t stand a chance.

Do Elite Academic Institutions Make Fewer Football Mistakes?

Not to bury the lede: by Betteridge’s law of headlines, it appears the answer is no.

This question was the result of some inane football commentary during Saturday’s Stanford/UCLA game. UCLA was down by a lot and attempted a fake field goal from Stanford’s 30 yard line. Stanford intercepted the pass in the end zone, resulting in a touchback. According to the color commentator (paraphrasing), “You just can’t run fake field goals against Stanford. Stanford is an elite academic institution—they don’t make mental mistakes.”

Now, there are two inherently boneheaded parts of this claim. First, immediately after the commentator gave us this knowledge smackdown, he then discussed how the Stanford defensive back should have batted down the pass rather than intercepted it. With the original line of scrimmage being the 30 yard line and a touchback putting the ball on the 20, the interception yielded a 10 yard loss. In other words, on the same play where Stanford was incapable of making a mental mistake because they are Stanford, Stanford made a mental mistake. Hmm.

Second, I would dare say that UCLA is an elite academic institution and should therefore be immune from making mental mistakes. That being the case, they should have internalized the fact that Stanford is an elite academic institution incapable of making mistakes and therefore not tried the fake field goal. Or did UCLA, an elite academic institution incapable of making mistakes, make a mistake?

But I digress. As a friend pointed out to me, this type of inane commentary is nothing new. I know this all too well from living in Buffalo’s TV region and being repeatedly subjected to lectures on how Ryan Fitzpatrick is infallible because he went to Harvard.

Or something.

In any case, the commentators are giving us an answerable question: do elite academic school football teams make fewer mistakes than their…umm…more average brethren? I decided to test this proposition. Of course, there are all sorts of challenges to teasing out this relationship. I cannot hope to give a comprehensive analysis in a single blog post. I can, however, give a first-cut answer using readily available data.

There are two obstacles standing in our way. First, what is an elite academic institution? Fortunately, university rankings must be a million-dollar industry. There simply is no shortage of them. I chose to look at US News and World Report’s rankings this time. Yes, these rankings are horribly flawed for a number of reasons. Yes, you may prefer Washington Monthly’s system.[1] But USNWR’s rankings are the standard bearer against which all other rankings are compared. I can thus measure academic quality by using USNWR’s cardinal score. Note the score is different from the ranking and captures how much better one school (allegedly) is than another.

There are a couple of important caveats here. First, USNWR features a wide variety of lists. To keep from comparing apples to oranges, I am only looking at schools on the national university surveys. Regional schools, liberal arts schools, and military schools that nevertheless have FBS programs are thus excluded. (I’m only looking at FBS schools, again to compare apples to apples.) In addition, USNWR does not give specific scores to any school earning a value below 25 on their 100 point scale. I have also excluded them from the analysis below.

Second, and perhaps more difficult, is that we need a way to measure mental mistake propensity. I can conceptualize “mistakes” in a variety of ways, but the trouble is hammering out something that is (1) somewhat objective and (2) quantitatively available. Penalties seem like an appropriate choice because they are (usually) mistakes and often leave everyone shaking their heads.[2] If these elite schools really have their academic prowess rub off on football players, we would expect them to incur fewer penalties. It’s not the same type of mental mistake that occurred during the UCLA/Stanford game, but it’s a decent substitute.

(If you have any alternative measures in mind, please post them in the comments!)

So, to recap, we can plot the academic quality scores against penalties. If there is a negative correlation, we would have confirmation that elite academic schools make fewer mistakes on the gridiron. The problem is, no matter whether you look at penalties or penalty yards, there is no relationship whatsoever. Here is the plot for yards:


And penalties:


Both of those red trendlines are as flat as they get. There simply is no relationship.[3] And why would there be? Football is specialized knowledge. Players are recruited for that knowledge, not their ability to master biochemistry. And biochemistry is not going to teach you whether grabbing an opponent’s face mask is appropriate behavior.

Also worth noting: the two most penalized teams in the country are UC Berkeley and UCLA, two of the best public schools in the country. (I am happy to report that UCSD, the third highest ranked school in the UC system, has not committed a single penalty all season.[4])

In sum, unless penalties are a poor way to measure mistakes, academic prowess has nothing to do with football IQ. What we have here is a case of commentators filling airtime with silly platitudes.

[1] Which is obviously superior based on the #1 school on that list.

[2] The real concern here—as any game theorist would love to point out—is that penalties aren’t really against the rules so much as an alternative way of playing the game. For example, if an offensive lineman who is beaten off the snap has the choice between giving up a 10 yard holding penalty and letting the quarterback be sacked for a loss of 12 yards, the “penalty” really isn’t much of a penalty at all. While I don’t discount that intentional penalties occur and are the result of smart play, these seem to be the exception to the rule. Indeed, we rarely hear of penalties being a good thing, and when we do, it’s precisely because it is so rare.

[3] For the statistically inclined, here is the regression output for penalty yards per game using OLS.


I didn’t control for anything else, and I’m not sure there is anything to control for here anyway. Let me know if I’m wrong in the comments.

The observations deleted due to missingness are the schools that lack a USNWR score on the national universities list.

[4] Still undefeated!

A Wii Bit of an Error? Price Matching as Price Fixing

Yesterday, Sears made a wiiiiii bit of an error, selling a new Wii U for the bargain price of $60, a sharp markdown from the standard $300 price tag. People caught on and immediately bought as many as they could. That was smart. Sears eventually pulled it, though. So smarter people went one step further: they visited other retailers and bought $60 systems using price match guarantees, i.e., promises retailers make to sell like goods at the lowest price of their competitors. Particularly crafty individuals allegedly went from Wal-Mart to Wal-Mart clearing out the storerooms.

This raised a moral question: is it right for consumers to take advantage of Sears’ mistake, manipulate the system, and trick (“trick”?) other retailers into also selling their products at a loss? Some people on Reddit felt that way:

I think I’d feel guilty doing that. But dang that’s a good deal.

Am I the only one who thinks it’s kinda [bad] to make another store pay for Sears error? You know it’s a error so why make it someone else’s problem?

Is there a difference between a legitimate offer / deal and taking advantage of a mistake? I’ll see you all in hell, which is where I’ll be down-voted into, where I can play Mario Kart with you thieving [lovely individuals].

I can certainly see their point. However, I don’t think anyone should lose sleep over taking advantage of Target and friends via price matching. Why? Because one of the main reasons to create price match guarantees is to screw you over.

Wait, what? How can price matching possibly be bad for consumers? After all, it allows consumers to pay smaller prices. It could not possibly hurt consumers, could it?

Unfortunately, it can. Price matching is a form of price fixing, cleverly disguised as a nice gesture toward consumers. The key is how companies act in the bigger picture with price matching in place.

Imagine that you are a company with widgets to sell. You would like to charge consumers a lot of money for your widgets. However, there is a rival company that sells identical widgets. So if you charge a high price, all of the consumers will go to your rival, and you will make no money. Of course, your competitor has the exact same incentives. As such, you both end up charging very low prices. All of the potential profit to be made from widgets has gone up in smoke.

In game theory, we call this situation a prisoner’s dilemma. Broadly speaking, this is a situation where both actors must individually choose whether to act kindly to the other (raise prices) or act uncooperatively (lower prices). Regardless of what the other side does, you have incentive to take the uncooperative action—this is because you can take all of the profits if the other side raises prices and still maintain parity in case they also lower theirs. However, the other side has the exact same incentives. So both of you take the uncooperative action even though this leaves you collectively worse off than if you both took the cooperative action.
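For concreteness, here is the pricing dilemma as a small payoff table in code. The profit numbers are my own invention; only their ordering matters.

```python
# A pricing prisoner's dilemma with made-up profit numbers. Each firm picks
# a high or low price; values are (my_profit, rival_profit) in dollars.
payoff = {
    ('high', 'high'): (10, 10),   # collusion: split monopoly profits
    ('high', 'low'):  (0, 15),    # I get undercut and lose everything
    ('low',  'high'): (15, 0),    # I undercut the rival and take the market
    ('low',  'low'):  (2, 2),     # competition: prices driven near cost
}

# "Low" is a dominant strategy: it beats "high" no matter what the rival does.
for rival in ('high', 'low'):
    assert payoff[('low', rival)][0] > payoff[('high', rival)][0]

# Yet mutual "low" leaves each firm worse off than mutual "high":
assert payoff[('low', 'low')][0] < payoff[('high', 'high')][0]
```

Both assertions pass: each firm individually prefers the low price, and both end up with $2 instead of $10.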

If this is confusing, it might help to look at the problem visually:


Still with me? Okay. The point of the pricing prisoner’s dilemma is that it competes away all of the profit from widgets and leaves the surplus in the pockets of consumers. This obviously makes those consumers very happy. But the companies would bend over backwards to figure out a way to collude to raise prices to monopoly levels. Yet successful collusion requires preventing the other side from undercutting one’s own price. After all, I don’t want to charge $10 for widgets if you are just going to screw me over by charging $9.

See where this is going yet? Price matching serves as this precise enforcement mechanism. Imagine that I announce that I will match any price you offer. I then charge $10 for my widgets. What are your incentives? Obviously, charging more than $10 is a bad idea, as I will take all of your business. So what if you undercut me instead? Well, you can’t. If you sell your product for $9, discerning customers have no reason to flock to your business because they can also get the widget for $9 from me thanks to my price match guarantee.

What to do? Well, you could also charge $10 and institute your own price match guarantee. For the same reasons as before, I don’t have incentive to undercut you either. We can both sustain the price of $10, well above what we would be charging in a competitive environment.
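A toy model makes the enforcement mechanism explicit. The demand and cost numbers are invented: 100 units of demand split evenly at equal effective prices, zero unit cost.

```python
# How a price-match guarantee kills the incentive to undercut (illustrative
# numbers only): 100 units of demand, split evenly at equal effective prices.

def profits(p1, p2, match=True):
    """Profit to each firm when firm 1 posts p1 and firm 2 posts p2."""
    if match:
        # Each firm honors the lower posted price, so effective prices equalize.
        p1 = p2 = min(p1, p2)
    if p1 < p2:
        return 100 * p1, 0
    if p2 < p1:
        return 0, 100 * p2
    return 50 * p1, 50 * p2

# Without matching, shaving the price to $9 steals the whole market:
assert profits(10, 9, match=False) == (0, 900)
# With matching, the undercutter still splits the market -- just at $9:
assert profits(10, 9, match=True) == (450, 450)
# So the would-be undercutter prefers to leave the price at $10:
assert profits(10, 10)[1] > profits(10, 9)[1]
```

Undercutting no longer steals any customers; it only lowers the price everyone sells at, so the $10 collusive price holds.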

So, despite appearances and Federal Trade Commission approval, price matching is a form of price fixing. It is intentionally designed to reduce competition and increase prices.

This makes the $60 Wii U price matching incident all the better: consumers used a policy designed to screw over consumers to screw over those who instituted the anti-competitive price fixing.

TL;DR: Karma.

Bribery and Cartel Violence in Mexico

Mexico has a massive murder problem. 2012 alone saw more than 26,000 homicides in the country, the fourth most of any state in the world. Why?

Drug violence and the interaction between cartels is a major factor. In a new working paper, Paul Zachary (UCSD) and I argue that uncertainty about local leaders has a great impact on that cartel violence. Cartels benefit from using violence to eliminate rivals. Politicians, however, have a vested interest in limiting that violence. This causes tension between cartels and officials, which cartels often attempt to resolve by bribing the officials to look the other way.

Why does uncertainty matter here? Paul and I investigate a model involving two rival cartels—a status quo and a challenger—and a local politician. The cartels want to capture as much of the drug rents as possible; the local politician wants to minimize violence, but he is willing to look the other way if he receives a large enough bribe. The game begins with the status quo cartel offering a bribe to the politician to minimize enforcement. If the politician rejects, he chooses an amount of effort to exert to reduce the effectiveness of violence, which undermines the status quo cartel’s ability to maintain its drug rents. Afterward, both cartels choose an amount of costly violence, which determines what percentage of the drug rents each receives.

We find that successful bribes lead to higher levels of violence, for two reasons. First, and most obviously, enforcement intercepts a percentage of violence, so a bought-off politician lets more of it through. But there is an important second-order effect as well. Interception functionally increases the marginal cost of violence for the status quo cartel. Consequently, enforcement not only quashes violence in action but deters some of its production as well—and a successful bribe removes both checks.

The above logic leads us to investigate what might lead to bargaining failure during the bribery stage. We show that the status quo cartel’s knowledge of the politician’s level of corruption is key here. When the cartel knows the politician’s minimally sufficient bribe (which is a function of the level of corruption), it can very easily come to terms. But when the cartel can only guess from a wide range of possibilities, it might ultimately offer a bribe that isn’t big enough for the politician to bite. In expectation, this leads to higher levels of enforcement and less subsequent violence.
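The bargaining-failure logic can be illustrated with a back-of-the-envelope calculation. To be clear, this is not the paper's actual model—just a toy version of the risk-return tradeoff, with invented numbers: the politician's minimally sufficient bribe is uniform on [0, 100], and an accepted bribe is worth 150 to the cartel.

```python
# Toy illustration (not the paper's model): the cartel offers bribe b without
# knowing the politician's reservation value, uniform on [0, 100]. The bribe
# is paid only if accepted; acceptance is worth V = 150 to the cartel.

def expected_gain(b, V=150.0, c_max=100.0):
    """Cartel's expected gain from offering bribe b."""
    accept_prob = min(b / c_max, 1.0)
    return accept_prob * (V - b)

# Scan integer offers: the optimal bribe is 75, which the politician rejects
# 25% of the time. Bargaining fails with positive probability even though a
# mutually acceptable deal always exists.
best = max(range(0, 151), key=expected_gain)
print(best)  # 75
```

The cartel deliberately risks rejection to avoid overpaying; under uncertainty, those rejections translate into more enforcement and, in the model, less violence.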

Our theoretical argument has a noteworthy empirical prediction. Uncertainty leads to less violence, and some work in IR indicates that there is more uncertainty about newer leaders than older leaders. Thus, in Mexico, we would expect municipalities that the same political party has controlled for longer periods to be more violent than municipalities with greater turnover. (In the absence of our argument, this would be odd: the retrospective voting literature would suggest that less violence should correlate with greater tenure, as voters should be rewarding politicians who keep the streets safe.) The data support our theory. Indeed, we estimate that an increase of one year in tenure is associated with roughly one additional murder within a municipality. Although this might not seem like much, with so many municipalities in Mexico, a countrywide increase of one year in tenure matches up with about 2300 more murders. This number is on par with the 2011 murder totals in France, Germany, United Kingdom, Netherlands, and Belgium combined but still only a fraction of the overall number of murders in Mexico in any given year.

Again, you can download the paper here. This is the abstract:

What role do politicians have in bargaining with violent non-state actors to determine the level of violence in their districts? Although some studies address this question in the context of civil war, it is unclear whether their findings generalize to organizations that do not want to overthrow the state. Unlike political actors, criminal groups monopolize markets by using violence to eliminate rivals. In the context of the Mexican Drug War, we argue that increased time in office increases cartels’ knowledge about local political elites’ willingness to accept bribes. With bribes accepted and levels of police enforcement low, cartels endogenously ratchet up levels of violence because its marginal value is greater under these conditions. We formalize our claims with a model and then test its implications with a novel dataset on violent incidents and political tenure in Mexico. We find that each additional year after an official initially takes office is associated with an additional 2,300 violent deaths countrywide.

It’s a working paper, so we’d love your feedback!

Subtle Clues in Final Jeopardy

Tonight’s Final Jeopardy had another excellent example of how the wording of the clue sometimes gives a hint of the answer. The category was The Bible. And the clue:

The first conversation recounted in the Bible is in Genesis 3, between these 2; it leads to trouble.

Given that the conversation takes place early on in Genesis, you should be able to narrow it down to two possibilities. The players did; they were split between “Who are Adam and Eve?” and “Who are Eve and the serpent?” If you aren’t sure which is the correct response, go back and reread the clue.

Do you see it?

Pretend you are the writer for a moment and the correct response is “Who are Adam and Eve?” How would you write the clue? Probably like this:

The first conversation recounted in the Bible is in Genesis 3, between these 2 people; it leads to trouble.

But the clue doesn’t say people! It leaves it vague, and deliberately so. Jeopardy writers don’t like to be vague when they don’t have to be. Here, however, they most certainly need to be. The correct response is “Who are Eve and the serpent?” Imagine if they had phrased the clue as:

The first conversation recounted in the Bible is in Genesis 3, between this person and this creature; it leads to trouble.

They can’t do that, though—it would make the clue painfully obvious. So they make it vague. But the fact that it is vague when it wouldn’t have to be otherwise actually makes it perfectly clear!

I’ve actually opined on this once before. About a year and a half ago, the clue was:

The circulation of the Times of New York & London totals about half the “Times of” this place, the largest of any English daily.

Do you see it? My analysis was in this video:

Jeopardy’s Game Theory Irony

Tonight’s Jeopardy had a big high and a big low for game theorists.

The High
For most of the game, challenger Matthew LaMagna held a large lead. During Double Jeopardy, other challenger Angela Chuang hit a Daily Double in the “I Have a Theory” category. At only ~$4000 and facing Matthew at ~$18,000, Angela had only one option: make it a true Daily Double. She did. That part was sweet.

So was the clue (paraphrasing):

Beautiful Mind John Nash is credited with launching this field in economics.

Obviously, the correct response was “What is game theory?” Angela nailed it. Again, sweet. Maybe she knows game theory!

The Low
Now the sour part. Despite Angela’s best efforts, Matthew pulled away. The scores entering Final Jeopardy were $20,800 for Matthew, $8400 for Angela, and $1200 for the returning champion. Wagers are trivial at this point. Matthew has first place locked. Angela has second place locked as well because she has more than double the third-place player’s total. It does not take a game theorist to see this, but it helps.

(Critically, the end dollar figures are irrelevant for second and third place. Second receives a fixed $2000; third place, $1000.)

However, despite Angela’s familiarity with game theory, she wagers $8300. The returning champion wagers nothing. Final Jeopardy’s clue is a triple stumper. Angela drops to $100 and third place, when all she had to do was write $0 and guarantee herself $2000. Instead, she went home with a check for $1000.
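The wager logic is simple enough to check mechanically. A sketch with the actual scores (the helper function is mine):

```python
# Second place pays a fixed $2000 and third pays $1000, so Angela only needs
# to finish ahead of the $1200 returning champion, who can reach at most
# $2400 by wagering everything and being right.

angela, champ = 8400, 1200

def worst_case_finish(score, wager):
    """Angela's score if she misses Final Jeopardy while the champ doubles up."""
    return score - wager, champ * 2

low, rival = worst_case_finish(angela, 0)
assert low > rival   # $8400 beats $2400: a $0 wager locks up second place

low, rival = worst_case_finish(angela, 8300)
assert low < rival   # $100 loses to $2400: the $8300 wager risks third
```

Any wager up to $5999 keeps second place safe even in the worst case; $8300 is the one kind of bet that can only hurt.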

To be fair, there might be reason to not wager $0 here even though you can guarantee second place by doing so. Everyone’s favorite love-to-hate champion Arthur Chu famously wagered enough so that he would draw with second place if second place wagered everything. But Angela wasn’t even going for that. The $8300 wager could do nothing but harm her. That was sour.