# Theory, Assumptions, and a God-Awful Final Jeopardy

In case you missed it, last night’s Final Jeopardy was flat terrible. This was the semifinal game in the teen tournament; only the top (strictly positive) scorer advanced to the next round, and no one keeps any money. The scores were \$16,400, \$12,000, and \$1,200. The Final Jeopardy category was capital cities. Pretend you are the leader and place your wager.

> It’s criss-crossed by dozens of “peace walls” that separate its Catholic & Protestant neighborhoods

Was your response Dublin? Mine was, as were all of the contestants’. Dublin is also wrong. The correct response was Belfast.

Nothing wrong with a triple stumper, though. The wagering strategies, on the other hand, were horrible. Every contestant wagered everything. Since no one came up with the correct response, everyone finished with \$0, and thus no one qualified for the finals.

This made me go insane. The leader had no reason to wager more than \$7,601. The second-place player can reach at most \$24,000 by doubling up, so a \$7,601 wager puts the leader at \$24,001 with a correct response, a guaranteed win. It also gives him a win against a wider variety of opposing outcomes when he misses, including the set of wagers from last night’s game. In game theory terms, wagering \$7,601 weakly dominates wagering everything.
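The dominance claim is easy to check by brute force. Here is a minimal sketch, assuming the scores from the game (leader \$16,400, second place \$12,000; third place is too far back to matter) and the standard rule that the winner needs a strictly positive, strictly highest score:

```python
# Verify that a $7,601 wager weakly dominates an all-in $16,400 wager
# for the leader, given the scores from the game.

LEADER, SECOND = 16_400, 12_000

def leader_wins(leader_wager, leader_right, second_final):
    """Does the leader win outright with this wager in this state?"""
    final = LEADER + leader_wager if leader_right else LEADER - leader_wager
    return final > second_final and final > 0

# Every final score the second-place player can reach, over all
# legal wagers 0..$12,000 and both response outcomes.
second_finals = sorted({SECOND + w for w in range(SECOND + 1)} |
                       {SECOND - w for w in range(SECOND + 1)})

def outcomes(wager):
    # One win/lose entry per (second's final score, leader correct?) state.
    return [leader_wins(wager, right, s)
            for s in second_finals for right in (True, False)]

safe, all_in = outcomes(7_601), outcomes(16_400)

# Weak dominance: $7,601 wins in every state where $16,400 wins,
# and strictly more states overall.
assert all(a >= b for a, b in zip(safe, all_in))
assert sum(safe) > sum(all_in)
```

When the leader misses, the all-in wager leaves him at \$0 and he can never win, while the \$7,601 wager leaves him at \$8,799, which still beats every second-place outcome below that figure, including last night’s, where everyone missed.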

I then vented in YouTube form:

Here’s a comment from the YouTube view page:

> This is why the idea that people are intelligent self interested agents makes me laugh. People do this kind of thing ALL THE TIME, and it’s why economic theories that don’t account for this can’t predict [stuff].

Only he didn’t say stuff.

There are two big problems with this logic. First, rational self-interest is an assumption. We use assumptions to build theories not for their accuracy but for their usefulness. The better metric for modeling is a simple question: is this model more useful than the alternative? If yes, the model is satisfactory. If not, then use the alternative. We could discard the blanket rationality assumption and instead model some probability distribution over rational agents and automaton agents. While this would certainly be a more realistic model, it would come at the expense of being substantially more computationally intensive without much obvious reward. We should find no inherent shame in simplicity.

Second, a good theory explains and predicts behavior. Theories are not laws; we should not require a theory to hold 100% of the time for us to find it useful. Contrary to what the commenter wrote, we can use “intelligent, self-interested agents” as an assumption and predict quite a lot. In fact, the reason Final Jeopardy last night caused such a stir is that it egregiously violated what intelligent individuals should do. Intelligent individuals make up about 99.9% of Jeopardy players, which is what made last night so extraordinary.

If models are useless because of the 0.1%, then all of academia, hard and soft science alike, needs to close up shop immediately.