Tag Archives: Sports

Marshawn Lynch Was Optimal, But So Was a Quick Slant

It seems that social media has lashed out at Pete Carroll for not giving the ball to Marshawn Lynch on second and goal with less than a minute to go. The idea is that Marshawn Lynch is #beastmode, an unstoppable force that would have assuredly scored and won Super Bowl XLIX.

The problem is, the argument makes absolutely no sense from a game theoretical standpoint. The ability to succeed on any given play is a function of the offense’s play call and the defense’s play call. Call a run against a pass blitz with deep coverage, and the offense is in great shape. Run deep routes versus that same defense, though, and you are in trouble. Thus, once you strip everything down, play calling is nothing more than a very complex guessing game. The Seahawks want to guess the Patriots’ play call and pick the correct counter. Vice versa for the Patriots.

Game theory has killed countless trees exploring the strategic properties of such games. Fortunately, there is a simple game that encapsulates the most important finding. It is called matching pennies:

The premise is that we each have a penny and simultaneously choose whether to reveal heads or tails. I win $1 from you if the coin faces match, while you win $1 from me if the coin faces mismatch.

You should quickly work out that there is a single best way to play the game: both of us should reveal heads 50% of the time and tails 50% of the time. If any player chooses one side even slightly more often, the other could select the proper counter strategy and reap a profit. Randomizing at the 50/50 clip guarantees that your opponent cannot exploit you.
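To see the exploitability point concretely, here is a quick Python sketch of matching pennies with the $1/-$1 stakes above. It computes how much the matcher can earn by best-responding to an opponent who reveals heads with probability q:

```python
def matcher_best_response_value(q):
    """Matcher's expected payoff against an opponent revealing heads with prob q."""
    # Always play heads: match with probability q (+$1), mismatch otherwise (-$1).
    value_heads = q * 1 + (1 - q) * (-1)
    # Always play tails: match with probability 1-q, mismatch with probability q.
    value_tails = (1 - q) * 1 + q * (-1)
    return max(value_heads, value_tails)

for q in (0.5, 0.6, 0.9):
    print(q, matcher_best_response_value(q))
# Only q = 0.5 holds the matcher to zero; any bias hands the matcher a profit.
```

That zero at q = 0.5 is exactly the guarantee randomizing buys you.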

In terms of football, you might think of you as the offense and me as the defense. You want to mismatch (i.e., call a run play while I am defending the pass) and I want to match (i.e., defend the pass while you call a pass). What is interesting is that this randomization principle neatly extends to more complicated situations involving hundreds of strategies and counterstrategies. Unless a single strategy is always best for you regardless of what the other side picks, optimal strategy selection requires you to randomize to prevent your opponent from exploiting you.

What does this tell us about the Marshawn Lynch situation? Well, suppose it is so plainly obvious that Pete Carroll must call for a run. Bill Belichick, who many see as the god of the football strategy universe, would anticipate this. He would then call a play specifically designed to stop the run. By that I mean an all-out run blitz, with linebackers completely selling out and cornerbacks ignoring the receivers and going straight for the backfield. After all, they have nothing to lose—the receivers aren’t getting the ball because Lynch is assuredly running it.

Of course, it doesn’t take much to see that this is also a ridiculous outcome. If the Patriots were to certainly sell out because the Seahawks were certainly handing the ball to Lynch, Pete Carroll would switch his strategy. Rather than run the ball, he would call for a pass and an easy touchdown. After all, a pass to a wide-open receiver is a much easier touchdown than hoping Marshawn Lynch can conquer 11 defenders.

Then again, Belichick would realize that the Seahawks were going to pass and would not sell out on his run defense. But then Carroll would want to run again. So Belichick goes back to defending the run. But then Carroll would pass. And Belichick would call for pass coverage. And so forth.

There is exactly one way to properly defend in this situation: randomize between covering a run and covering a pass. There is also exactly one way to properly attack in this situation: sometimes run the ball and sometimes pass it. This is the only way to keep your team from being exploited, regardless of whether you are on offense or defense.

Okay, so we have established that the teams should be randomizing. What does that say about the outcome of Super Bowl XLIX? Well, clearly the play didn’t work out for the Seahawks. But to judge the play call, we can’t go by what happened. We can only go by what might happen in expectation. And in expectation, passing was optimal in this situation.

If you aren’t convinced, imagine we all hopped into a time machine to second and goal with the knowledge of what happened. Would Pete Carroll call a run? Maybe. Would Bill Belichick sell out on the run? Maybe. But maybe not—Carroll might call a pass precisely because Belichick is anticipating him running the ball. We are back in the guessing game before. And as before, the only way to solve it is to randomize.

That’s the magic of mixed strategy Nash equilibrium. Even if your opponent knows the probabilities you are mixing over, there is nothing he or she can do to exploit you.

Do Elite Academic Institutions Make Fewer Football Mistakes?

To avoid burying the lede, and in keeping with Betteridge’s law of headlines, it appears the answer is no.

This question was the result of some inane football commentary during Saturday’s Stanford/UCLA game. UCLA was down by a lot and attempted a fake field goal from Stanford’s 30 yard line. Stanford intercepted the pass in the end zone, resulting in a touchback. According to the color commentator (paraphrasing), “You just can’t run fake field goals against Stanford. Stanford is an elite academic institution—they don’t make mental mistakes.”

Now, there are two inherently boneheaded parts of this claim. First, immediately after the commentator gave us this knowledge smackdown, he then discussed how the Stanford defensive back should have batted down the pass rather than intercepted it. With the original line of scrimmage being the 30 yard line and a touchback putting the ball on the 20, the interception yielded a 10 yard loss. In other words, on the same play where Stanford was incapable of making a mental mistake because they are Stanford, Stanford made a mental mistake. Hmm.

Second, I would dare say that UCLA is an elite academic institution, so it should be equally immune from making mental mistakes. That being the case, UCLA should have internalized the fact that Stanford is an elite academic institution incapable of making mistakes and therefore not tried the fake field goal. Or did UCLA, an elite academic institution incapable of making mistakes, make a mistake?

But I digress. As a friend pointed out to me, this type of inane commentary is nothing new. I know this all too well from living in Buffalo’s TV region and being repeatedly subjected to lectures on how Ryan Fitzpatrick is infallible because he went to Harvard.

Or something.

In any case, the commentators are giving us an answerable question: do football teams from elite academic schools make fewer mistakes than their…umm…more average brethren? I decided to test this proposition. Of course, there are all sorts of challenges to teasing out the relationship. I cannot hope to give a comprehensive analysis in a single blog post. I can, however, give a first-cut answer using readily available data.

There are two obstacles standing in our way. First, what counts as an elite academic institution? Fortunately, university rankings must be a million-dollar industry; there is simply no shortage of them. I chose to look at US News and World Report’s rankings this time. Yes, these rankings are horribly flawed for a number of reasons. Yes, you may prefer Washington Monthly’s system.[1] But USNWR’s rankings are the standard-bearer against which all other rankings are compared. I can thus measure academic quality using USNWR’s cardinal score. Note the score is different from the ranking: it captures how much better one school (allegedly) is than another.

There are a couple of important caveats here. First, USNWR features a wide variety of lists. To keep from comparing apples to oranges, I am only looking at schools on the national university surveys. Regional schools, liberal arts schools, and military schools that nevertheless have FBS programs are thus excluded. (I’m only looking at FBS schools, again to compare apples to apples.) In addition, USNWR does not give specific scores to any school earning a value below 25 on their 100 point scale. I have also excluded them from the analysis below.

Second, and perhaps more difficult, is that we need a way to measure mental mistake propensity. I can conceptualize “mistakes” in a variety of ways, but the trouble is hammering out something that is (1) somewhat objective and (2) quantitatively available. Penalties seem like an appropriate choice because they are (usually) mistakes and often leave everyone shaking their heads.[2] If these elite schools really have their academic prowess rub off on football players, we would expect them to incur fewer penalties. It’s not the same type of mental mistake that occurred during the UCLA/Stanford game, but it’s a decent substitute.

(If you have any alternative measures in mind, please post them in the comments!)

So, to recap, we can plot the academic quality scores against penalties. If there is a negative correlation, we would have evidence that elite academic schools make fewer mistakes on the gridiron. The problem is, no matter whether you look at penalties or penalty yards, there is no relationship whatsoever. Here is the plot for yards:


And penalties:


Both of those red trendlines are as flat as they get. There simply is no relationship.[3] And why would there be? Football is specialized knowledge. Players are recruited for that knowledge, not their ability to master biochemistry. And biochemistry is not going to teach you whether grabbing an opponent’s face mask is appropriate behavior.
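For the statistically curious, the first-cut check is easy to reproduce. Here is a minimal Python sketch; the numbers below are randomly generated stand-ins for the USNWR scores and penalty figures (I am not republishing the data), so only the method, not the exact correlation, carries over:

```python
import random

random.seed(0)  # reproducible fake data

# Hypothetical (USNWR score, penalty yards per game) pairs for 60 FBS schools,
# drawn independently to mimic the "no relationship" result. NOT the real data.
scores = [random.uniform(25, 100) for _ in range(60)]
penalty_yards = [random.uniform(35, 70) for _ in range(60)]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(round(pearson_r(scores, penalty_yards), 3))  # close to zero: a flat trendline
```

A correlation near zero is what a flat trendline looks like in numbers.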

Also worth noting: the two most penalized teams in the country are UC Berkeley and UCLA, two of the best public schools in the country. (I am happy to report that UCSD, the third highest ranked school in the UC system, has not committed a single penalty all season.[4])

In sum, unless penalties are a poor way to measure mistakes, academic prowess has nothing to do with football IQ. What we have here is a case of commentators filling airtime with silly platitudes.

[1] Which is obviously superior based on the #1 school on that list.

[2] The real concern here—as any game theorist would love to point out—is that penalties aren’t really against the rules so much as an alternative way of playing the game. For example, if an offensive lineman who is beaten off the snap has the choice between giving up a 10 yard holding penalty and letting the quarterback be sacked for a loss of 12 yards, the “penalty” really isn’t much of a penalty at all. While I don’t discount that intentional penalties occur and are the result of smart play, these seem to be the exception to the rule. Indeed, we rarely hear of penalties being a good thing, and when we do, it’s precisely because it is so rare.

[3] For the statistically inclined, here is the regression output for penalty yards per game using OLS.


I didn’t control for anything else, and I’m not sure there is anything to control for here anyway. Let me know if I’m wrong in the comments.

The observations deleted due to missingness are the schools that lack a USNWR score on the national universities list.

[4] Still undefeated!

The Game Theory of Soccer Penalty Kicks

With the World Cup starting today, now is a great time to discuss the game theory behind soccer penalty kicks. This blog post will do three things: (1) show that the penalty kick is a very common type of game, and one that game theory can solve very easily; (2) show that players behave more or less as game theory would predict; and (3) explain why a striker becoming more accurate to one side makes him less likely to kick to that side. Why? Read on.

The Basics: Matching Pennies
Penalty kicks are straightforward. A striker lines up with the ball in front of him. He runs forwards and kicks the ball toward the net. The goalie tries to stop it.

Despite the ordering I just listed, the players essentially move simultaneously. Although the goalie dives after the striker has kicked the ball, he cannot actually wait until the ball comes off the foot to decide which way to dive—because the ball moves so fast, it will already be behind him by the time he finishes his dive. So the goalie must pick his strategy before observing any relevant information from the striker.

This type of game is actually very common. Both players pick a side. One player wants to match sides (the goalie), while the other wants to mismatch (the striker). That is, from the striker’s perspective, the goalie wants to dive left when the striker kicks left and dive right when the striker kicks right; the striker wants to kick left when the goalie dives right and kick right when the goalie dives left. This is like a baseball batter trying to guess what pitch the pitcher will throw while the pitcher tries to confuse the batter. Similarly, a basketball shooter wants a defender to break the wrong way to give him an open lane to the basket, while the defender wants to stay lined up with the ball handler.

Because the game is so common, it should not be surprising that game theorists have studied this type of game at length. (Game theory, after all, is the mathematical study of strategy.) The common name for the game is matching pennies. When the sides are equally powerful, the solution is very simple:

If you skipped the video, the solution is for both players to pick each side with equal probability. For penalty kicks, that means the striker kicks left half the time and right half the time; the goalie dives left half the time and dives right half the time.

Why are these optimal strategies? The answer is simple: neither party can be exploited under these circumstances. This might be easier to see by looking at why all other strategies are not optimal. If the striker kicked left 100% of the time, it would be very easy for the goalie to stop the shot—he would simply dive left 100% of the time. In essence, the striker’s predictability allows the goalie to exploit him. This is also true if the striker is aiming left 99% of the time, or 98% of the time, and so forth—the goalie would still want to always dive left, and the striker would not perform as well as he could by randomizing in a less predictable manner.

In contrast, if the striker is kicking left half the time and kicking right half the time, it does not matter which direction the goalie dives—he is equally likely to stop the ball at that point. Likewise, if the goalie is diving left half the time and diving right half the time, it does not matter which direction the striker kicks—he is equally likely to score at that point.

The key takeaways here are twofold: (1) you have to randomize to not be exploited and (2) you need to think of your opponent’s strategic constraints when choosing your move.

Real Life Penalty Kicks
So that’s the basic theory of penalty kicks. How does it play out in reality?

Fortunately, we have a decent idea. A group of economists (including Freakonomics’ Steve Levitt) once studied the strategies and results of penalty kicks from the French and Italian leagues. They found that players strategize roughly how they ought to.

How did they figure this out? To begin, they used a more sophisticated model than the one I introduced above. Real life penalty kicks differ in two key ways. First, kicking to the left is not the same thing as kicking to the right. A right-footed striker naturally hits the ball harder and more accurately to the left than the right. This means that a ball aimed to the right is more likely to miss the goal completely and more likely to be stopped if the goalie also dives that way. And second, a third strategy for both players is also reasonable: aim to the middle/defend the middle.

Regardless of the additional complications, there are a couple of key generalizations that hold from the logic of the first section. First, a striker’s probability of scoring should be equal regardless of whether he kicks left, straight, or right. Why? Suppose this were not true. Then someone is being unnecessarily exploited in this situation. For example, imagine that strikers are kicking very frequently to the left. Realizing this, goalies are also diving very frequently to the left. This leaves the striker with a small scoring percentage to the left and a much higher scoring percentage when he aims to the undefended right. Thus, the striker should be correcting his strategy by aiming right more frequently. So if everyone is playing optimally, his scoring percentage needs to be equal across all his strategies, otherwise some sort of exploitation is available.

Second, a goalie’s probability of not being scored against must be equal across all of his defending strategies. This follows from the same reason as above: if diving toward one side is less likely to result in a goal, then someone is being exploited who should not be.
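Both indifference conditions are easy to verify numerically. The sketch below uses a simplified two-sided version with made-up scoring probabilities (the real ones are estimated in the paper), assuming the striker is stronger to his left:

```python
# Hypothetical scoring probabilities. Keys: (striker's kick, goalie's dive).
# The striker is assumed stronger to the left, his natural side.
P = {("L", "L"): 0.60, ("L", "R"): 0.95,
     ("R", "L"): 0.90, ("R", "R"): 0.50}

# Goalie dives left with probability g chosen to equalize the striker's payoffs:
#   g*P[L,L] + (1-g)*P[L,R] = g*P[R,L] + (1-g)*P[R,R]
g = (P[("L", "R")] - P[("R", "R")]) / (
    P[("L", "R")] - P[("R", "R")] + P[("R", "L")] - P[("L", "L")])

# Striker kicks left with probability s chosen to equalize the goalie's payoffs:
#   s*P[L,L] + (1-s)*P[R,L] = s*P[L,R] + (1-s)*P[R,R]
s = (P[("R", "L")] - P[("R", "R")]) / (
    P[("R", "L")] - P[("R", "R")] + P[("L", "R")] - P[("L", "L")])

# In equilibrium, every strategy earns the same expected scoring probability.
score_if_kick_L = g * P[("L", "L")] + (1 - g) * P[("L", "R")]
score_if_kick_R = g * P[("R", "L")] + (1 - g) * P[("R", "R")]
concede_if_dive_L = s * P[("L", "L")] + (1 - s) * P[("R", "L")]
concede_if_dive_R = s * P[("L", "R")] + (1 - s) * P[("R", "R")]
print(score_if_kick_L, score_if_kick_R, concede_if_dive_L, concede_if_dive_R)
```

With these made-up numbers every strategy yields a 74% scoring rate—close to the roughly 75% in the actual data, though that closeness is an artifact of the probabilities I picked.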

All told, this means that we should observe equal probabilities among all strategies. And, sure enough, this is more or less what goes on. Here’s Figure 4 from the article, which gives the percentage of shots that go in for any combination of strategies:


The key places to look are the “total” column and row. The total column for the goalie on the right shows that he is very close to giving up a goal 75% of the time regardless of his strategy. The total row for the striker at the bottom shows more variance—in the data, he scores 81% of the time aiming toward the middle but only 70.1% of the time aiming to the right—but those differences are not statistically significant. In other words, we would expect that sort of variation to occur purely due to chance.

Thus, as far as we can tell, the players are playing optimal strategies as we would suspect. (Take that, you damn dirty apes!)

Relying on Your Weakness
One thing I glossed over in the second part is specifically how a striker’s strategy should change due to the weakness of the right side versus the left. Let’s take care of that now.

Imagine you are a striker with an amazingly accurate left side but a very inaccurate right side. More concretely, you will always hit the target if you shoot left, but you will miss some percentage of the time on the right side. Realizing your weakness, you spend months practicing your right shot and double its accuracy. Now that you have a stronger right side, how will this affect your penalty kick strategy?

The intuitive answer is that it should make you shoot more frequently toward the right—after all, your shot has improved on that side. However, this intuition is not always correct—you may end up shooting less often to the right. Equivalently, this means the more inaccurate you are to one side, the more you end up aiming in that direction.

Why is this the case? If you want the full explanation, watch the following two videos:

The shorter explanation is as follows. As mentioned at the end of the first section of this blog post, players must consider their opponent’s capabilities as they develop their strategies. When you improve your accuracy to the right side, your opponent reacts by defending the right side more—he can no longer so strongly rely on your inaccuracy as a phantom defense. So if you start aiming more frequently to the right side, you end up with an over-correction—you are kicking too frequently toward a better defended side. Thus, you end up kicking more frequently to the left to account for the goalie wanting to dive right more frequently.
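The result can be seen in a stylized model (my own simplification, not the paper’s empirical specification): suppose a left shot is always on target, a right shot is on target with probability a, and an on-target shot scores with probability c when the goalie dives the same way and with probability 1 otherwise.

```python
def kick_right_probability(a):
    """Equilibrium probability the striker aims right, given right-side accuracy a.

    Stylized scoring chances: (kick L, dive L) = c, (L, R) = 1,
    (R, L) = a, (R, R) = a*c, for some 0 < c < 1.
    The goalie's indifference condition,
        s*c + (1-s)*a = s*1 + (1-s)*a*c,
    pins down the striker's mix at s_left = a/(1+a); the save rate c cancels
    out. So the striker aims right with probability 1/(1+a), which FALLS as
    right-side accuracy a rises.
    """
    return 1 / (1 + a)

before = kick_right_probability(0.4)  # weak right side: aims right ~71% of the time
after = kick_right_probability(0.8)   # accuracy doubled: aims right only ~56%
print(before, after)
```

Note the striker aims at his weak side surprisingly often before the improvement: his inaccuracy is doing defensive work for the goalie, so the goalie guards the strong side instead.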

And that’s the game theory of penalty kicks.

Optimal Flopping: The Game Theory of Foul Fakery

I was watching the NBA Finals last night. While the series has been good, watching professional basketball requires a certain tolerance for flopping–i.e., players pretending like they got hit by a freight train when in reality the defender barely made incidental contact. Observe LeBron James in action:

And that’s just from this postseason!

No one likes flopping, but it is not going away anytime soon. This post explains the rationality of flopping. The logic is as you might think–players flop to dupe officials into mistakenly calling fouls. There is a surprising result, however. When flopping optimally, “good” officiating becomes impossible–referees are completely helpless in deciding whether to call a foul. Worse for the integrity of the game, a flopper’s actions force referees to occasionally ignore legitimate fouls.

The Model
This being a blog post, let’s construct a simple model of flopping. (See figure below.) The game begins with an opponent barreling into a defender. Nature sends a noisy signal to the official whether contact was foul worthy or not. If it is truly a foul, the defender falls to the ground without a strategic decision. If it is not a foul, the player must decide whether to flop or not.

The referee makes two observations. First, he receives the noisy signal. With probability p, he believes it was a hard foul; with probability 1-p, it was not. He also observes whether the defender fell to the ground. Since the defender cannot keep standing if the offensive player commits a hard foul, the referee knows with certainty that the play was clean if the defender remains standing. However, if the player falls, the referee must make an inference whether the play was a foul.

Payoffs are as follows. The referee only cares about making the right call; he receives 1 if he is correct and -1 if he is incorrect. The player receives 1 if the referee calls a foul, 0 if he does not flop and the referee does not call a foul, and -1 if he flops and the referee does not call a foul. Put differently, the defender’s best outcome is what minimizes the offense’s chance at scoring while his worst outcome is what maximizes the offense’s chance.



Since legitimately fouled defenders have no strategic choices, we only have to solve for the non-fouled defender’s action. Therefore, throughout this proof, “defender” means a defender who was not fouled. (Rare exceptions to this will be obvious.)

We break down the parameter space into three cases:

For p = 0
Flopping does not work, since the referee knows no foul took place. This is why players don’t randomly fall to the ground when the nearest opponent is ten feet away from them.

For p > 1/2
Note that the referee will call a foul if he believes that the probability the play was a foul is greater than 1/2. Thus, if the defender flops, he knows the referee will call a foul. As such, the defender always flops, and the referee calls a foul. This is intuitive: on plays that look a lot like a foul, defenders will embellish the contact regardless of how hard they are hit.

For 0 < p < 1/2
Because mixing probabilities are messy, I will appeal to Nash’s theorem to prove that both the defender and referee mix in equilibrium. Recall that Nash’s theorem says that an equilibrium exists for all finite games. Therefore, we can show both players mix by proving that neither can play a pure strategy in equilibrium. (In other words, we expect defenders to sometimes flop and sometimes not, and referees to sometimes call a foul and sometimes not when they aren’t sure.)

First, can the defender flop as a pure strategy? If he does, the referee’s best response would be to not call a foul, as the referee believes the probability a foul occurred is less than 1/2. But given that the referee is not calling a foul, the defender should deviate to not flopping, since he will not get the call anyway.

Second, can the defender not flop as a pure strategy? If he does, the referee’s best response is to call a foul if he observes the defender falling, as he knows that the play was a legitimate foul. But this means the defender would want to deviate to flopping, since he knows he will get the foul called. This exhausts the defender’s pure strategies, so the defender must be mixing in equilibrium.

Third, can the referee call a foul as a pure strategy? If he does, the defender’s best response is to flop. But then the referee would not want to call a foul, since his belief that the play was actually a foul is less than 1/2.

Fourth, can the referee not call a foul as a pure strategy? If he does, the defender’s best response is to not flop. But this means the referee should call a foul upon observing the defender fall, as he believes the only way this could occur is if the foul was legitimate. This exhausts the referee’s pure strategies, so the referee must mix in equilibrium.
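Although the proof above sidesteps the mixing probabilities, the payoffs in this model are simple enough to pin them down. Here is a sketch (my own back-of-the-envelope, with the derivation in comments):

```python
# Equilibrium mixing for 0 < p < 1/2, using the payoffs from the model above:
# the referee earns +1/-1 for correct/incorrect calls; the defender earns +1 if
# a foul is called, 0 for standing with no call, -1 for flopping with no call.

def equilibrium(p):
    """Return (flop probability, foul-call probability) for 0 < p < 1/2."""
    assert 0 < p < 0.5
    # Defender indifference: flopping pays r*(+1) + (1-r)*(-1), standing pays 0,
    # so the referee must call a foul on a fall with probability r = 1/2.
    r = 0.5
    # Referee indifference: his posterior that a fall is a real foul equals 1/2:
    #   p / (p + (1-p)*f) = 1/2  =>  f = p / (1 - p).
    f = p / (1 - p)
    return f, r

f, r = equilibrium(0.3)
# Sanity checks on both indifference conditions.
assert abs(r * 1 + (1 - r) * (-1)) < 1e-12       # flopping pays the same as standing
assert abs(0.3 / (0.3 + 0.7 * f) - 0.5) < 1e-12  # referee's posterior is exactly 1/2
print(f, r)  # the defender flops 3/7 of the time; the call comes half the time
```

A side observation: the flop probability p/(1-p) rises with p, so the more foul-like the contact looks, the more flopping you should expect.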

Strategically, these parameters are the most interesting. In equilibrium, the defender sometimes bluffs (by flopping) and sometimes does not. Upon observing a fall, the referee sometimes ignores what he perceives might be a flop and sometimes makes the call.

The real loser is the legitimately fouled defender. He can’t do anything to keep himself from falling over, yet sometimes the referee does not make the call. Why? The referee can’t know for sure whether the foul was legitimate or not and must protect against floppers.

While this seems unfortunate, be glad the referees act strategically in this way–the alternative would be that defenders would always flop regardless of how incidental the contact is and the referees would always give them the benefit of the doubt.

One of game theory’s strengths is drawing connections between two different situations. Although this post centered on flopping in the NBA, note that the model was not specific to basketball. The interaction could have very well described other sports–particularly soccer. As long as fouls provide defenders with benefits, there will always be floppers waiting to exploit the referee’s information discrepancy.

If I ever expand my game theory textbook to cover Bayesian games, I think I will include this one. This also makes decent fodder when random people ask “what can game theory do for us?”


Noise about Noise: The Good Coach/Bad Coach Fallacy

It is 4th and inches from the 50 yard line. The defense lines up with nine in the box, with a cornerback covering the lone wide receiver and the safety playing a bit closer than usual. The quarterback snaps the ball. The safety breaks in to blitz. The running back executes a play fake. The quarterback bombs it to his wide receiver, who has the safety beat. Touchdown.

“What a great call!” exclaims the color commentator.

Your first reaction might be to agree. After all, the play worked. The safety blitzed, leaving the wide receiver with one-on-one coverage. The quarterback came through, delivering a well-placed ball for a quick score. Credit the offensive coach for the play, and discredit the opposing coach for choosing to blitz.

Well, maybe not.

Let’s investigate how perfect coaches would play this situation. To simplify the situation greatly, suppose the offense can choose whether to call a run or a pass. The defense can choose whether to defend the run or the pass. The defense wants to match, while the offense wants to mismatch. To further simplify things, suppose the defensive benefits for matching are the same whether it is pass/pass or run/run. Likewise, the offensive advantages for mis-matching are the same whether it is pass/run or run/pass.

(These are strong assumptions, but the claims I will make hold for environments with richer play calling and differing benefits for guessing correctly/incorrectly.)

If all that holds, then the game is identical to matching pennies:

In equilibrium, both players flip their coins. Note that as long as the opponent is flipping his coin, the other player earns a fixed amount (zero in this case) regardless of which strategy he selects.

This is a necessary condition to reach equilibrium. If one strategy were even slightly better in expectation given the opposing strategy, then the player would always want to play the superior strategy. For example, if running were even slightly better than passing given the offense’s expectations about the defense, then the offense must choose to run. But then the defensive coach’s strategy is exploitable: he could switch to defending the run and expect to do better. But the defensive coach is supposed to be superhuman, so he would never do something so foolish.

As it turns out, the only strategies that don’t leave open the possibility of exploitation are the equilibrium strategies. Thus, the superhuman coaches should play according to equilibrium expectations.

Now consider how this situation looks to the observer. We only see the outcome of one play. But note that all outcomes occur with positive probability in equilibrium! Sometimes the offense does well. Sometimes the defense does well. But any given outcome is essentially chosen at random.
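The claim that every outcome occurs with positive probability can be checked directly. A minimal sketch, assuming both coaches play the 50/50 equilibrium of the simplified game above:

```python
from itertools import product

# Both superhuman coaches flip fair coins in equilibrium.
p_offense = {"run": 0.5, "pass": 0.5}
p_defense = {"run": 0.5, "pass": 0.5}

outcomes = {(o, d): p_offense[o] * p_defense[d]
            for o, d in product(p_offense, p_defense)}
print(outcomes)  # every (offense call, defense call) pair occurs with probability 0.25
```

A play fake beating a safety blitz and a run stuffed by a run blitz are, in equilibrium, equally likely draws from the same lottery.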

This makes it impossible to pass judgment in favor or against any coach. Certainly all real world coaches are not perfect. But on any given play, one superhuman coach looks foolish while the other superhuman coach looks great. Consequently, on any given real world play, we cannot tell whether the result was a consequence of terrific coaching on one side (and bad coaching on the other) or just pure randomness.

Thus, we have the good coach/bad coach fallacy. Commentators are quick to praise the genius of the fortunate and lambast the idiocy of the unfortunate, but there simply is no way of knowing what has truly gone on given the information. On-air silence might be awkward, but it beats noise about…noise.

Gambling and Corruption with Replacement NFL Referees

If you have watched an NFL game over the last six weeks, you doubtlessly know that NFL referees are in a labor dispute, and the NFL is using replacement referees for the time being. USA Today has an interesting story about the incentives these replacement refs face. Specifically, they are more vulnerable to being bought off by illicit gambling manipulation.

Among gamblers, there is obvious demand for referees willing to take bribes to alter the outcome of the game. For example, suppose the Chargers and Falcons are an even line. (All you have to do is pick the winner to win the bet.) A gambling crew could place a large sum of money on the Chargers, say $1,000,000. They could then pay $100,000 to the referee to ensure the calls go the Chargers’ way such that San Diego wins. The gamblers stand to make hundreds of thousands of dollars.

Besides the threat of criminal punishment, referees have an incentive to refuse these bribes due to the future benefits of continued officiating. Making terrible calls or getting caught will get you fired, thus denying you the benefits of continued employment. All other things being equal, if you expect the NFL to continue employing you, you are less likely to take the bribe. Regular NFL officials have this type of long time horizon. They may not be completely unbribe-able, but they are darn resistant.

The replacement refs? Not so much. Their time horizon is extremely short. Once the NFL and the referees resolve their labor dispute, the replacement refs will be gone for good. Rather than years, this time horizon is probably better measured in weeks or months. Taking a $100,000 bribe doesn’t sound so bad when you are very likely to be unemployed by Halloween, especially when you are making at most $3,500 a game.
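The time-horizon logic can be made concrete with a back-of-the-envelope calculation. Only the $3,500-per-game and $150,000-per-season figures come from the discussion above; the horizons, the discount factor, and the assumption that taking the bribe ends your career are my own illustrative choices:

```python
def value_of_keeping_job(per_period_pay, periods_remaining, discount=0.95):
    """Present value of the remaining stream of officiating pay."""
    return sum(per_period_pay * discount ** t for t in range(periods_remaining))

bribe = 100_000

# Replacement ref: ~$3,500 per game, with maybe 8 games left before the
# lockout ends (periods are games here, seasons below).
replacement_stake = value_of_keeping_job(3_500, 8)
# Regular ref: ~$150,000 per season over a hypothetical 20-year career.
regular_stake = value_of_keeping_job(150_000, 20)

print(bribe > replacement_stake)  # the bribe dwarfs the replacement's stake
print(bribe > regular_stake)      # the regular official's career is worth far more
```

Under these assumptions, the replacement ref forfeits only about $24,000 by taking the bribe, while the regular official forfeits nearly $2 million—which is the sense in which high continuing salaries police behavior.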

I find this argument intuitive and compelling. Moreover, it made me rethink the reasonableness of the referees’ previous contract, which paid about $150,000 for roughly fifty days’ work last year. Such a salary seems ridiculously high given the large supply of potential referee labor. However, the NFL needs to keep the actions of the referees in line with the NFL’s wishes. We can’t just ask potential referees how much they need to be paid to not accept bribes and then employ the cheapest labor. One way to resolve this issue is to promise continued high pay to all referees. Put differently, the high salaries solve the principal-agent problem.

Unintended Consequences, Pt. 2: College Football Edition

Back during the Olympics, I wrote about badminton players intentionally playing to lose. Despite the absurdity of the situation, the Olympians were merely following one of political science’s most important laws:

Law: People will strategize according to the institutional features put in front of them.

We can now add college football players to the list of people who follow the rule. Over the off-season, the NCAA created a rule that forces a player whose helmet comes off during a play (incidentally or otherwise) to sit out the following play. To the surprise of no one, defenders are now taking advantage of it. Here is the new game plan, in three simple steps:

  1. Get the opposing quarterback into a large pile.
  2. Take off his helmet.
  3. Profit.

The rule seems inherently bizarre. It’s understandable to force a player to sit out a play if his helmet flies off his head on a major hit; concussions are a major issue in football. But if the helmet just slides off (maliciously or otherwise) away from the action, there doesn’t seem to be much reason to force the player out of the game temporarily.

Do More Accurate Tests Lead to More Frequent Drug Testing?

This Olympics has been special due to bizarre cases of “cheating” and cunningly strange strategic behavior. But regardless of the year, allegations of doping are always around. So far, four athletes have been disqualified, and a fifth was booted for failing a retest from 2004. (The Olympic statute of limitations is eight years.) More will probably get caught, as half of all competitors will be sending samples to a laboratory.

Doping has some interesting strategic dimensions. The interaction is a guessing game. Dopers only want to take drugs if they aren’t going to be tested. Athletic organizations only want to test dopers; each test costs money, so every clean test is like flushing cash down a toilet. From “matching pennies,” we know that these kinds of guessing games require the players to mix. Sometimes the dopers dope, sometimes they don’t. Sometimes they are tested, sometimes they aren’t.

But tests aren’t perfect. Sometimes a doper will shoot himself up, yet the test will come back negative. Even if we ignore false positives for this post, adding this dynamic makes each actor’s optimal strategy more difficult to find. Do more accurate drug tests lead to more frequent testing or less frequent testing? There are decent arguments both ways:

Pro-Testing: More accurate drug tests will lead to increased testing, since the organization does not have to worry about paying for bad tests, i.e., tests that come back negative but should have come back positive.

Anti-Testing: More accurate drug tests will lead to decreased testing, because athletes will be more scared of them. That leads to less incentive to dope, which in turn makes the tests less necessary.

Arguments for both sides could go on forever. Fortunately, game theory can accurately sort out the actors’ incentives and counter-strategies. As it turns out, the anti-testing side is right. The proof is in the video:

Basically, the pro-testers are wrong because they fail to account for the strategic aspect of the game. The athletic organization has to adapt its strategies based on the player’s incentives. Increasing the accuracy of the test only changes the welfare of the player when he dopes and the organization tests. So if the organization kept testing at the same rate as the quality of the tests improved, the player would never want to dope. As such, the organization cuts back on its testing as the quality of the test increases.
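A minimal mixed-strategy sketch makes this logic concrete. The model and payoffs below are my own illustrative assumptions, not the post’s: the athlete gains g from undetected doping and pays penalty p if caught, the organization pays c per test and suffers harm h from undetected doping, and a test catches a doper with probability a (the accuracy).

```python
# Mixed-strategy equilibrium of a simple doping "inspection game".
# Illustrative model (assumed parameters, chosen so probabilities stay in [0, 1]):
#   g = athlete's gain from undetected doping
#   p = athlete's penalty if caught
#   c = organization's cost per test
#   h = organization's harm from undetected doping
#   a = test accuracy: probability a tested doper is caught
def equilibrium(g, p, c, h, a):
    # Athlete must be indifferent between doping and staying clean:
    #   g - t*a*p = 0   =>   testing probability t = g / (a*p)
    t = g / (a * p)
    # Organization must be indifferent between testing and not testing:
    #   -c - d*(1-a)*h = -d*h   =>   doping probability d = c / (a*h)
    d = c / (a * h)
    return t, d

# As accuracy a rises, both equilibrium rates fall:
for a in (0.5, 0.75, 1.0):
    t, d = equilibrium(g=1, p=4, c=1, h=10, a=a)
    print(f"accuracy={a:.2f}: test with prob {t:.2f}, dope with prob {d:.2f}")
```

Notice that accuracy a appears in the denominator of the testing probability: a sharper test deters doping more per inspection, so the organization can get away with fewer inspections. That is the anti-testing result in one line of algebra.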

Olympic Rules Shenanigans: Dolphin Kick Edition

Fresh off the silliness of the badminton play-to-lose scandal comes this lovely piece on dolphin kicks. Last weekend, South African Cameron van der Burgh won gold in the 100m breaststroke.

However, Australia’s Olympic committee is putting up a fuss, as video footage of van der Burgh clearly shows him executing three dolphin kicks after diving into the water. (An Australian swimmer finished in second.) Breaststroke competitions allow only one.

And van der Burgh does not give a damn. From the link:

If you’re not doing it, you’re falling behind. It’s not obviously–shall we say–the moral thing to do, but I’m not willing to sacrifice my personal performance and four years of hard work for someone that is willing to do it and get away with it.

You see, FINA (the governing body of swimming) does not use cameras underwater to check for illegal dolphin kicks. Moreover, Australia cannot formally appeal van der Burgh’s finish, as there is no formal appeal process.

Of course, an appeal probably wouldn’t do much good, considering the Australian swimmer did the exact same thing.

As with the badminton scandal, the real moral of the story is about institutional design. If you build a bad institution, it will encourage bad behavior. Here, you should not create rules that you do not plan to enforce. The players who wish to abide by those rules face a stark choice: play “fair” or let the “unfair” win. So even those wishing to play fair break the rules, and we end up in a situation where it is as though the rule did not exist.
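The swimmers’ predicament can be sketched as a simple two-player game. The payoffs below are illustrative assumptions: an extra kick is worth an edge b, detection happens with probability q, and a detected cheater pays penalty f (think disqualification).

```python
# The dolphin-kick decision as a 2x2 game (assumed, illustrative payoffs).
# Each swimmer picks "cheat" (extra dolphin kicks) or "fair".
#   b = performance edge from cheating
#   q = probability an illegal kick is detected
#   f = penalty if detected
def payoff(my_choice, their_choice, b=1.0, q=0.0, f=10.0):
    edge = (b if my_choice == "cheat" else 0) - (b if their_choice == "cheat" else 0)
    penalty = q * f if my_choice == "cheat" else 0
    return edge - penalty

def best_response(their_choice, **params):
    return max(("cheat", "fair"), key=lambda c: payoff(c, their_choice, **params))

# No underwater cameras (q = 0): cheating is a dominant strategy.
print([best_response(x, q=0.0) for x in ("cheat", "fair")])  # prints ['cheat', 'cheat']
# Credible enforcement (q*f > b): fair play is dominant instead.
print([best_response(x, q=0.5) for x in ("cheat", "fair")])  # prints ['fair', 'fair']
```

With zero detection probability, cheating is the best reply no matter what the other swimmer does, which is exactly van der Burgh’s point: an unenforced rule leaves honest swimmers strictly worse off.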

Strangely, the dolphin kick rule could be enforced. FINA used underwater technology at the swimming World Cup in 2010. Everyone knew that dolphin kicks were prohibited and breaking the rules would not go unnoticed, so no one broke them.

Derp! Badminton Could Learn from Political Science (Or, Winning By Losing)

Political science doesn’t have many “laws” the way physics does. But here’s one of them:

Law: People will strategize according to the institutional features put in front of them.

Here’s a corollary that I think should follow from that:

Corollary: If one creates stupid institutional rules, one loses the right to object to people taking advantage of them.

Apparently the body that organizes Olympic badminton could learn from this law and its corollary. Yesterday, you see, eight players intentionally played to lose. Full story here.

The gist of it is this: Early in the day, the #2 team in the world lost their last group game, sending them to the bottom of the teams qualified for the quarterfinals. Later on, teams that had already qualified for the quarterfinals played to lose, concerned that a win would propel them to a high seed that would force them to play the #2 team sooner in the elimination bracket. Oops.

Badminton officials were shocked–shocked!–that the players would resort to such a cunningly intelligent strategy. Furthermore, the officials complained that the players had violated a rule against “not using one’s best efforts to win a match”–as though one could reasonably discern what qualifies as “best effort” versus “a little bit less than best effort, but still enough effort to convince everyone that we actually care even though we don’t.”

Here are a couple of solutions for the Olympic badminton committee. First, you could schedule all of the final group-play games simultaneously, making it harder for teams to know from the start that throwing a match would pay off. (Soccer pulls a similar trick in the Euro and World Cup, albeit for slightly different purposes.) Or you could run a single-elimination tournament from the start.

Just don’t be surprised when players try to win…by losing.

Update: The players have been disqualified. Next time, I suggest feigning an injury.

The USA Today story also reports that the Japanese women’s soccer team intentionally sought to draw yesterday, so as to avoid playing the United States in the quarterfinals.