Tag Archives: Game Theory

Book Review: Bargaining Theory with Applications

Book: Bargaining Theory with Applications by Abhinay Muthoo.
Four stars out of five.

Let me start by saying that this book is actually four stars or zero stars, depending on the audience. This is true for most books, though, so I err on the positive side.

Let’s start with the reasons you should not read this book. You might read the title and think “Gee, I have always wanted to learn about bargaining theory” and therefore decide to read the book. Bad move. This book is completely inaccessible. In the introduction, the author says that the reader only needs a decent understanding of subgame perfect equilibrium to get a lot out of it. This is a gross underestimate–you need at least a full year of game theory to get anything substantial out of the book and two years if you really want to understand it. Even then you will probably scratch your head from time to time. (In the conclusion, the author also says the book “has centered on some basic, elementary, models.” I found that quite humorous.)

The phrase “death by notation” comes to mind as you read this. The author says he intends the book for graduate-level economists, and it shows. Variables are often defined once and then never interpreted a second time during a proof. You will often find yourself going back to try to figure out what all of the notation means. (This is a problem for just about all game theory texts, though, which is why I stick to mostly English in my textbook.)

The book also lacks adequate illustrations and figures. Game trees and strategic form matrices help readers understand the flow of the interaction. Figures here are rare and are often baffling. Without them, you will be left to look back at the notation, which has its own problems. (Like before, lack of sufficient illustration is a problem for just about all game theory texts.)

On a personal note, the author spends way too much time discussing the Nash bargaining solution, which I find uninteresting except at the very basic level. Your mileage may vary. And, if you are like me, you can just skip those sections like I did, so I can’t really fault the book for this.

Despite all that, you should read this if you are interested in bargaining and have a good understanding of game theory. I don’t know of any book on the subject that is more thorough. I originally picked it up for some background on my risk aversion and sports contracts paper, and it was extremely useful. The author covers just about every type of bargaining game you will find in the literature, with many variations of each model. So if you want to learn about bargaining, you should spend a few hours reading through it.

For practical purposes, chapter four (bargaining with risk of breakdown), chapter seven (bargaining over bargaining), and chapter nine (incomplete information) are the most useful. Four and seven probably have the most interesting application possibilities. I might reread the seventh chapter at some point and think about how to relate it to international relations. We seem to have a lot of bargaining models in IR without much discussion of why bargaining protocols should take one particular form and not another. Perhaps this will lead to some publishable research.

I leave you with the following takeaway point: if you follow my work, you would probably enjoy reading this book, and it may qualify as required reading for you; if you found this page by randomly searching the internet for reviews of the book, you should think twice.

Park Place Is Worthless: The Game Theory of McDonald’s Monopoly

McDonald’s Monopoly is back. As always, if you collect Park Place and Boardwalk, you win a million dollars. I just got a Park Place. That’s worth about $500,000, right?

Actually, as I show in my book on bargaining, it is worth nothing. Not close to nothing, but absolutely, positively nothing.

It helps to know how McDonald’s structures the game. Despite the apparent value of Park Place, McDonald’s floods the market with Park Place pieces, probably to trick naive players into thinking they are close to riches. I do not have an exact number, but I would imagine there are easily tens of thousands of Park Places floating around. However, they make only one or two Boardwalks available. (Again, I do not know the exact number, but it is equal to the number of million-dollar prizes McDonald’s wants to give out.)

Even with that disparity, you might think Park Place maintains some value. Yet it is easy to show that this intuition is wrong. Imagine you have a Boardwalk piece and you corral two Park Place holders into a room. (This works with thousands of them as well, but two is all you need.) You tell them that you are looking to buy a Park Place piece. Each of them must write their sell price on a piece of paper. You will complete the transaction at the lowest price. For example, if one person wrote $500,000 and the other wrote $400,000, you would buy it from the second at $400,000.

Assume that sell prices are continuous and weakly positive, and that ties are broken by coin flip. How much should you expect to pay?

The answer is $0.

The proof is extremely simple. It is clear that both bidding $0 is a Nash equilibrium. (Check out my textbook or watch my YouTube videos if you do not know what a Nash equilibrium is.) If either Park Place owner deviates to a positive price, he loses the sale to the other owner, who is still asking $0, and he earns nothing either way. So neither player can profitably deviate. Thus, both bidding $0 is a Nash equilibrium.

What if one bid $x greater than or equal to 0 and the other bid $y > x? Then the person bidding x could profitably deviate to any amount strictly between x and y. He still undercuts his rival and sells the piece, but he gets paid more for it. Thus, this is a profitable deviation, and bids x and y are not an equilibrium.

The final case is when both players bid the same amount z > 0. With the coin-flip tiebreaker, each expects z/2; more generally, regardless of the tiebreaking mechanism, at least one player sells no more than half the time and therefore expects at most z/2. That player can profitably deviate to 3z/4: he undercuts his rival, sells for certain, and receives 3z/4, which is greater than z/2. So this is not an equilibrium either.

This exhausts all possibilities. So both bidding $0 is the unique Nash equilibrium. Despite requiring another piece to cash in, your Boardwalk is worth the full million dollars; Park Place contributes nothing.
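
If you prefer to see that result computationally, here is a minimal sketch of mine (not part of the original argument) that restricts sell prices to whole cents and brute-forces the pure-strategy Nash equilibria of the two-seller game, using the coin-flip tiebreaker from the setup.

```python
import itertools

# Sellers simultaneously name a sell price; the Boardwalk holder buys from the
# lowest bidder at that bidder's price. Ties are broken by a coin flip, so each
# tied seller expects half the sale price. Prices are restricted to whole cents
# here purely to make the search finite.
BIDS = range(0, 101)  # 0 to 100 cents

def expected_payoff(own_bid, other_bid):
    """Expected payoff (in cents) to a seller, given both sell prices."""
    if own_bid < other_bid:
        return own_bid        # undercuts the rival: sells at his own price
    if own_bid == other_bid:
        return own_bid / 2    # coin flip: sells half the time
    return 0                  # priced out: no sale

def is_nash(b1, b2):
    """Neither seller can strictly gain by changing only his own price."""
    return (expected_payoff(b1, b2) >= max(expected_payoff(b, b2) for b in BIDS)
            and expected_payoff(b2, b1) >= max(expected_payoff(b, b1) for b in BIDS))

print([(b1, b2) for b1, b2 in itertools.product(BIDS, repeat=2) if is_nash(b1, b2)])
# -> [(0, 0), (1, 1), (2, 2)]  (in cents)
```

The penny grid also picks up the near-zero equilibria discussed in Note 1 below; with truly continuous prices, only both bidding $0 survives, exactly as the proof shows.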

What is going wrong for the Park Place holders? Supply simply outstrips demand. Any person with a Park Place but no Boardwalk walks away with nothing, which ultimately drives the price of Park Place down to nothing as well.

Moral of the story: Don’t get excited if you get a Park Place piece.

Note 1: If money is discrete down to the cent, then the winning bid could be $0 or $0.01. (With the right tie breaker, it could also be $0.02.) Either way, this is not good for owners of Park Place.

Note 2: In practice, we might see Park Place sell for some marginally higher value. That is because it is (slightly) costly for a Boardwalk owner to seek out and solicit bids from more Park Place holders. However, Park Place itself is not creating any value here—it’s purely the transaction cost.

Note 3: An enterprising Park Place owner could purchase all other Park Place pieces and destroy them. This would force the Boardwalk controller to split the million dollars. While that is feasible when there are only two Park Place holders, as in the example, good luck buying up every Park Place in reality. (Transaction costs strike again!)

Dear Iran, Your Threat Is Incredible. Love, America

Apparently “Iran threatens attack” is the top trending search on Yahoo right now. Here’s a news story on what is going on: a general in the Iranian air force (Amir Ali Hajizadeh) said that if Israel strikes Iran, Iran will retaliate by attacking American bases in the region.

Umm. Okay.

Iran will do no such thing. The American public does not have the will to engage Iran at the moment. If anyone launches a preventive strike on the Iranian nuclear program, it will be Israel, not the United States. (And, as Israeli officials are finally conceding, this is an unlikely outcome.) But do you know what would give the American public the will to fight? I don’t know, how about an attack on American bases? If Iran initiates a conflict with the United States, it undoubtedly ends badly for the Iranians. In turn, anyone who has spent two minutes learning backward induction (see video below) knows how preposterous Iran’s original threat is.
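
If you want the backward induction spelled out, here is a minimal sketch. The payoff numbers are purely illustrative assumptions of mine, ordered to match the argument above: war with the United States is Iran's worst outcome, and absorbing an attack on its bases is worse for the United States than responding in force.

```python
# Payoffs are (Iran, United States). The numbers are illustrative assumptions;
# only their ordering matters for the conclusion.
US_RESPONSES = {
    "respond in force": (-10, -1),   # war: disastrous for Iran, costly for the U.S.
    "absorb the attack": (1, -5),    # politically untenable for the U.S.
}
STAND_DOWN = (0, 0)                  # Iran never retaliates; status quo persists

def backward_induct():
    # Last mover first: after an Iranian attack, the U.S. picks its best response.
    us_choice = max(US_RESPONSES, key=lambda action: US_RESPONSES[action][1])
    attack_outcome = US_RESPONSES[us_choice]
    # Anticipating that response, Iran compares attacking with standing down.
    iran_choice = ("attack U.S. bases"
                   if attack_outcome[0] > STAND_DOWN[0] else "stand down")
    return iran_choice, us_choice

print(backward_induct())  # ('stand down', 'respond in force')
```

Plug in any numbers with that ordering and the conclusion is the same: Iran prefers to stand down, so the threat is cheap talk.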

This news story reflects a curious and disturbing trend in American news media. Whenever some crazy person from another country says something inflammatory, it gets reported as though it is serious business, even if it is in no way the actual policy of the regime in charge. Then rhetoric explodes for no particular reason.

The only thing Americans should take away from this news story is that Amir Ali Hajizadeh is a complete idiot.

(Of course, we have some silly people in our country who say silly things, and I am sure that the Iranian media also reports them as though they are serious. This goes both ways.)

Gambling and Corruption with Replacement NFL Referees

If you have watched an NFL game over the last six weeks, you doubtlessly know that NFL referees are in a labor dispute, and the NFL is using replacement referees for the time being. USA Today has an interesting story about the incentives these replacement refs face. Specifically, they are more vulnerable to being bought off as part of illicit gambling schemes.

Among gamblers, there is obvious demand for referees willing to take bribes to alter the outcome of the game. For example, suppose the Chargers and Falcons are an even line. (All you have to do is pick the winner to win the bet.) A gambling crew could place a large sum of money on the Chargers, say $1,000,000. They could then pay $100,000 to the referee to ensure the calls go the Chargers’ way so that San Diego wins. Since an even-money bet roughly doubles your stake when it hits, the crew clears somewhere around $900,000 after paying the bribe. The gamblers stand to make hundreds of thousands of dollars.

Besides the threat of criminal punishment, referees have an incentive to refuse these bribes due to the future benefits of continued officiating. Making terrible calls or getting caught will get you fired, thus denying you the benefits of continued employment. All other things being equal, if you expect the NFL to continue employing you, you are less likely to take the bribe. Regular NFL officials have this type of long time horizon. They may not be completely unbribable, but they are darn resistant.

The replacement refs? Not so much. Their time horizon is extremely short. Once the NFL and the referees resolve their labor dispute, the replacement refs will be gone for good. Rather than years, this time horizon is probably better measured in weeks or months. Taking a $100,000 bribe doesn’t sound so bad when you are very likely to be unemployed by Halloween, especially when you are making at most $3500 a game.

I find this argument intuitive and compelling. Moreover, it made me rethink the reasonableness of the referees’ previous contract, which paid about $150,000 for roughly fifty days’ work last year. Such a salary seems ridiculously high given the large supply of potential referee labor. However, the NFL needs to keep the actions of the referees in line with the NFL’s wishes. We can’t just ask potential referees how much they need to be paid to not accept bribes and then employ the cheapest labor. One way to resolve this issue is to promise continued high pay to all referees. Put differently, the high salaries bridge the principal-agent problem.
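
To put rough numbers on the time-horizon point, here is a minimal sketch. The $100,000 bribe, the roughly $150,000 season, and the $3,500 per game all come from the discussion above; the career lengths, the 50% detection probability, and the discount factor are assumptions of mine.

```python
def present_value(per_period_pay, periods, discount=0.95):
    """Discounted value of keeping the job for the given number of periods."""
    return sum(per_period_pay * discount**t for t in range(periods))

def takes_bribe(bribe, per_period_pay, periods, p_caught=0.5, discount=0.95):
    """A bribe is tempting when it exceeds the expected future pay it puts at risk."""
    return bribe > p_caught * present_value(per_period_pay, periods, discount)

# Regular official: ten more $150,000 seasons on the line (assumed horizon).
print(takes_bribe(100_000, per_period_pay=150_000, periods=10))  # False: refuses
# Replacement official: maybe five more games at $3,500 each (assumed horizon).
print(takes_bribe(100_000, per_period_pay=3_500, periods=5))     # True: tempted
```

On these numbers the regular official has hundreds of thousands of dollars of future income at risk, dwarfing the bribe, while the replacement has almost nothing to lose. That gap is the principal-agent logic behind the high salaries.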

Avatar: Full of Commitment Problems

At the insistence of many of my friends, I started watching Avatar: The Last Airbender (the TV series, not the dreadful film). The show appears to take place on a post-apocalyptic Earth, where humans have been divided into four tribes (fire, water, earth, and air), each of which can “bend” its particular element as a weapon.

The world is constantly at war. The show’s narration blames this on the disappearance of the Avatar, the traditional peacekeeper and only person capable of wielding all four elements.

However, the lack of the Avatar fails to explain the underlying incentive for war. Today’s pre-apocalyptic world does not have an avatar, and yet most countries most of the time are not at war with most other countries. Moreover, the Avatar theory does not address war’s inefficiency puzzle, i.e. how the costs of fighting imply the existence of negotiated settlements that are mutually preferable to war. Why not reach such an agreement and end the war that has completely devastated the world economy? The Avatar might be sufficient for peace but is by no means necessary.
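
To see how the costs of fighting create room for a deal, here is the standard crisis bargaining arithmetic in a minimal sketch. The setup and the numbers are mine (a generic illustration of the inefficiency puzzle, not anything from the show): a pie worth 1, a probability p that side A wins a war, and a cost of fighting for each side.

```python
def bargaining_range(p, cost_a, cost_b):
    """War gives A an expected p - cost_a and B an expected (1 - p) - cost_b,
    so any division giving A a share in [p - cost_a, p + cost_b] leaves both
    sides better off than fighting."""
    return (max(p - cost_a, 0.0), min(p + cost_b, 1.0))

# Illustrative numbers: an even war (p = 0.5) in which each side burns 10% of
# the pie leaves every split from 40/60 to 60/40 mutually preferable to war.
print(bargaining_range(p=0.5, cost_a=0.1, cost_b=0.1))  # (0.4, 0.6)
```

Fighting destroys cost_a + cost_b of the pie, and that destroyed slice is exactly what a settlement could divide between the tribes instead. The puzzle is why they fail to grab it.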

In contrast, I propose that the underlying cause of war is the presence of rapid, exogenous power shifts. As described in the episode The Library, the fire tribe’s ability to bend fire disappears during a solar eclipse. Likewise, the water tribe’s ability to bend water disappears during a lunar eclipse. These rare events leave their respective tribes temporarily powerless. In turn, that tribe faces a commitment problem. For example, on the eve of a solar eclipse, the fire tribe would very much like to reach a peaceful settlement. In fact, they would be willing to promise virtually everything to achieve a resolution, since they will certainly be destroyed if a war is fought during the solar eclipse.

But such an agreement is inherently incredible. Suppose the other tribes accepted the fire tribe’s surrender. The solar eclipse passes uneventfully. Suddenly, the fire tribe has no incentive to abide by the terms of the peace treaty. After all, their power is fully restored, and they no longer face the threat of a solar eclipse. They will therefore demand an equitable share of the world’s bargaining pie.

Now consider the incentives the other tribes face. If they fail to destroy the fire tribe during the solar eclipse, the fire tribe will demand that equitable stake. But the other tribes could destroy the fire tribe during the eclipse and steal their share. That is a tempting proposition. Indeed, the other tribes likely cannot credibly commit to not taking advantage of the fire tribe’s temporary weakness.

Finally, think one further step back, once again from the perspective of the fire tribe. If the fire tribe does not successfully destroy the other tribes before the solar eclipse, they run the risk of being destroyed on that day. From that perspective, it is perfectly understandable why the fire tribe fights.

Thus, commitment problems abound in the world of Avatar. The fire tribe cannot credibly commit to remaining enfeebled after the solar eclipse. The other tribes cannot credibly commit to not attacking the fire tribe during the eclipse. War seems perfectly rational.
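
Here is the eclipse commitment problem with rough numbers attached. Everything in this sketch is a deliberately crude illustration of mine: two periods, a pie worth 1 in each, and a fire tribe that will reclaim its usual share of the pie once its power returns.

```python
def coalition_prefers_war(fire_future_share, war_cost):
    """During the eclipse the coalition can win for certain at war_cost and keep
    both pies. The best the fire tribe can credibly offer is the entire
    eclipse-period pie, because once the eclipse passes it will retake its
    usual share of the second pie no matter what it promised."""
    best_credible_peace = 1 + (1 - fire_future_share)  # all of pie 1, remainder of pie 2
    war = 2 - war_cost                                 # both pies, minus the cost of fighting
    return war > best_credible_peace

# Illustrative numbers: if the fire tribe will insist on half the pie once its
# power returns, the coalition attacks whenever fighting during the eclipse
# costs less than that future half.
print(coalition_prefers_war(fire_future_share=0.5, war_cost=0.1))  # True
```

The comparison reduces to war_cost versus fire_future_share, and the eclipse makes war_cost tiny. No promise the fire tribe makes today changes that arithmetic, which is the commitment problem in a nutshell.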

Interestingly, one way out of the problem is for the fire and water tribes to agree to protect one another during their eclipses. Given that, neither side has an incentive to attack during an eclipse; if one tribe did join the others in an attack, it would be left without any protection during its own next eclipse. (This resembles a truel: a duel among three people.) Yet, in the series, the fire and water tribes appear to be the most bitter enemies.

One wonders if the library contained a copy of Fearon 1995 or In the Shadow of Power. In any case, you can read more about preventive war in the third chapter of The Rationality of War or watch the video below:

New Working Paper: The Invisible Fist

Download the paper here.

Let’s start with a quote from President Obama, circa September 2009:

Iran must comply with U.N. Security Council resolutions…we have offered Iran a clear path toward greater international integration if it lives up to its obligations…but the Iranian government must now demonstrate…its peaceful intentions or be held accountable to…international law.

We’ve been dealing with the Iranian nuclear “crisis” for a while now. As the quote indicates, President Obama’s method of diplomacy is to offer Iran concessions and hope these carrots convince Iran not to build. His opponents have called such a plan naive; after all, why wouldn’t Iran take those concessions, say thanks, and then build a nuclear weapon anyway? (Of course, his opponents have also suggested that we threaten to invade Iran to convince Iran not to build, even though such a threat is not credible in the least.)

When I first heard this quote, I fell into the opposing group. We don’t have any models that explain this type of bargaining behavior. In crises, fully realized power drives concessions. Yet, here, unrealized power is causing concessions, and Obama hopes that those concessions in turn mean that the power remains unrealized. I set out to develop a model to show that this type of agreement can never withstand the test of time.

I was wrong. The Invisible Fist shows that such agreements can hold up, even if a rising state can freely renege on the offers. Specifically, declining states offer most of what rising states would receive if they ever built the weapons. This is sufficient to buy off the rising states; while the rising states could build and receive more concessions, those additional concessions do not cover the cost of building. Meanwhile, the declining states are happy to engage in such agreements, because they can extract this building cost out of the rising states.
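
Here is the core trade-off in a stripped-down numerical sketch. The function and the numbers are mine, purely for illustration, and compress the paper's dynamic model into a single comparison: the rising state weighs the concessions on the table against the share it would command by building, net of the cost of building.

```python
def buyoff_offer(share_if_built, build_cost):
    """The declining state concedes just enough that proliferating is not worth
    the price tag: building would let the rising state claim share_if_built of
    the pie, but it costs build_cost, so an offer of share_if_built - build_cost
    leaves the riser weakly better off staying non-nuclear."""
    offer = max(share_if_built - build_cost, 0.0)
    rising_builds = share_if_built - build_cost > offer   # False at this offer
    declining_payoff = 1 - offer                          # versus 1 - share_if_built if the riser builds
    return offer, rising_builds, declining_payoff

# Illustrative numbers: a weapon that would let the rising state claim 60% of
# the pie but costs the equivalent of 15% of the pie to build.
print(buyoff_offer(share_if_built=0.6, build_cost=0.15))
# -> (0.45, False, 0.55)
```

Both sides prefer the deal: the riser gets everything the weapon would have bought it except the construction bill, and the decliner pockets that bill rather than watching it get spent on a bomb.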

In any case, I think both the model and the paper’s substantive applications are interesting, so it is worth a look. Let me know what you think.

P.S. Slides here. Paper presentation below:

Excerpt from Game Theory 101: The Complete Textbook

With school starting once again, I thought it was time to do some updating to the greater Game Theory 101 enterprise. Here’s the updated version of lesson 1.1 of Game Theory 101: The Complete Textbook. Enjoy.

Chapter 2 of The Rationality of War

The Rationality of War is now out! (Buy it on Amazon or Barnes & Noble.) You can download chapter two of the book as a free PDF by clicking here. This chapter explains the fundamental puzzle of war: if fighting is costly, why can’t two states agree to a peaceful settlement? With that puzzle in mind, the rest of the book shows why states sometimes end up in war.

Do More Accurate Tests Lead to More Frequent Drug Testing?

This Olympics has been special due to bizarre cases of “cheating” and cunningly strange strategic behavior. But regardless of the year, allegations of doping are always around. So far, four athletes have been disqualified, and a fifth was booted for failing a retest from 2004. (The Olympic statute of limitations is eight years.) More will probably get caught, as half of all competitors will be sending samples to a laboratory.

Doping has some interesting strategic dimensions. The interaction is a guessing game. Athletes only want to dope if they aren’t going to be tested. Athletic organizations only want to test dopers; each test costs money, so every clean test is like flushing cash down a toilet. From “matching pennies,” we know that these kinds of guessing games require the players to mix. Sometimes the athletes dope, sometimes they don’t. Sometimes they are tested, sometimes they aren’t.

But tests aren’t perfect. Sometimes a doper will shoot himself up, yet the test will come back negative. Even if we ignore false positives for this post, adding this dynamic makes each actor’s optimal strategy more difficult to find. Do more accurate drug tests lead to more frequent testing or less frequent testing? There are decent arguments both ways:

Pro-Testing: More accurate drug tests will lead to increased testing, since the organization does not have to worry about paying for bad tests, i.e. tests that come back negative but should have come up positive.

Anti-Testing: More accurate drug tests will lead to decreased testing, because athletes will be more scared of them. That leads to less incentive to dope, which in turn makes the tests less necessary.

Arguments for both sides could go on forever. Fortunately, game theory can accurately sort out the actors’ incentives and counter-strategies. As it turns out, the anti-testing side is right. The proof is in the video:

Basically, the pro-testers are wrong because they fail to account for the strategic aspect of the game. The athletic organization has to adapt its strategy to the player’s incentives. Increasing the accuracy of the test only changes the welfare of the player when he dopes and the organization tests. So if the organization kept testing at the same rate as the quality of the tests improved, the player would never want to dope. As such, the organization cuts back on its testing as the quality of the test increases.
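
For the curious, here is a minimal sketch of that comparative static. The payoff numbers and the functional form are my assumptions rather than the exact model in the video, but the logic is the same inspection game: each side mixes so as to keep the other indifferent.

```python
def equilibrium(accuracy, gain=1.0, fine=2.0, harm=3.0, test_cost=0.5):
    """Mixed-strategy equilibrium of a simple inspection game.

    The athlete gains `gain` from doping unless he is tested and caught, in
    which case he pays `fine`; the test catches a doper with probability
    `accuracy` (no false positives). The organization pays `test_cost` per test
    and suffers `harm` whenever doping goes uncaught. All numbers are
    illustrative assumptions.
    """
    # Athlete indifference: gain - test_prob * accuracy * (gain + fine) = 0
    test_prob = min(gain / (accuracy * (gain + fine)), 1.0)
    # Organization indifference: dope_prob * accuracy * harm = test_cost
    dope_prob = min(test_cost / (accuracy * harm), 1.0)
    return test_prob, dope_prob

for a in (0.6, 0.8, 1.0):
    t, d = equilibrium(accuracy=a)
    print(f"accuracy {a:.1f}: test with prob {t:.2f}, dope with prob {d:.2f}")
# Both probabilities fall as the test becomes more accurate.
```

The athlete’s indifference condition pins the testing rate at gain / (accuracy × (gain + fine)), so the more accurate the test, the less often the organization has to run it.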

Romney Should Release His Returns

If you have been following the 2012 campaign, you know that Mitt Romney has not released his tax returns from before 2010. This has caused speculation that Romney did something wrong during 2008 or 2009–anything from finding interesting (but legal) tax shelters for his Bain Capital income to participating in the IRS’s 2009 amnesty program. Romney has held firm, claiming that rivals are attempting to divert attention from important campaign issues to his tax life. Of course, this has backfired, inadvertently causing more speculation.

I did not fully appreciate the strategic aspects of the situation until reading this blog post from Daniel Shaviro. In it, he offers an important insight:

Romney’s reluctance to release any pre-2010 tax return might be that what it would show is worse than all the heat he is taking for non-disclosure.

I thought that was an interesting intuition. However, I have realized that it is wrong, at least in the long run. No matter how bad Romney’s pre-2010 tax returns are, they cannot be worse than the speculation he will eventually face.

To see why, imagine that one of three things is true about Romney: (1) he did nothing wrong, (2) he did something politically embarrassing (like find ridiculous tax loopholes), or (3) he did something blatantly illegal (like something that would have made him participate in the 2009 IRS program). Option 1 is a non-issue. Option 2 is not politically expedient but not altogether damning; Romney could even take some of the heat off by claiming to be the best candidate to shut down these loopholes given his first-hand knowledge of the tax system. Option 3 is game over.

The public intuitively understands that if (1) were true, Romney would have come forward already. (Let’s define the “public” as independents who haven’t already decided who to vote for. We know that ideologues from both sides are lost causes here.) Maybe he is reluctant to let everyone know he made gazillions of dollars those years, but that is a hell of a lot better than the beating he is receiving right now. So we can eliminate option 1 from the list. This is inference #1.

That leaves us with option 2 and option 3. But if both are possible, then our rational expectation of Romney’s sleaziness falls somewhere in between 2 and 3. In other words, we believe on average that Romney is worse than the guy who found tax loopholes but better than the guy who did something illegal. However, that implies that Romney should come forward if he is type 2; releasing his returns will prove that he is not as bad as the average between 2 and 3 and thus leave Romney’s reputation in better shape.

As such, the only type who does not come forward is the worst type. Put differently, if Romney does not release his returns, the public should rationally infer that the worst thing is true. This is inference #2.
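
The unraveling logic here is mechanical enough to script. Below is a minimal sketch with illustrative “reputation” scores of my choosing: disclosure reveals a type exactly, while silence gets judged at the average of whichever types the public still believes might be staying quiet.

```python
# Higher scores are better for Romney; the numbers are illustrative assumptions.
TYPE_SCORES = {"nothing wrong": 1.0, "embarrassing loopholes": 0.5, "something illegal": 0.0}

def silent_types(scores):
    """Iterate the public's inference until no silent type wants to disclose."""
    silent = set(scores)
    while True:
        silence_value = sum(scores[t] for t in silent) / len(silent)
        disclosers = {t for t in silent if scores[t] > silence_value}
        if not disclosers:
            return silent
        silent -= disclosers  # anyone above the silent average comes forward

print(silent_types(TYPE_SCORES))  # {'something illegal'}
```

Each round, every silent type that would look better by disclosing comes forward, and the process bottoms out with only the worst type keeping quiet. That is inference #2.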

For now, the public seems to understand inference #1 but not inference #2. However, it is only a matter of time before they draw that conclusion; after all, it is logically valid. So, for now, Romney can get away without releasing the documents. But over time, things will only get worse as the public slowly reaches inference #2.

If I were Romney’s advisor, I would have him release the documents immediately. Whatever is on them cannot be as bad as where this public speculation is headed. The decision is whether to handle the blow-back now–three months before the election–or wait until October. Clearly the former is the better option.

Video explanation below: