(At least the evidence doesn’t match the claim.)
“Chimps Outsmart Humans When It Comes To Game Theory” has been making the social media rounds today. Unfortunately, this seems to be a case of social media run amok–the paper has some interesting results, but that interpretation is horribly off base.
Below, I will give five reasons why we shouldn’t conclude that chimps are better at game theory than humans. But first, let’s quickly review what happened. A bunch of chimps, Japanese students, and Guinean villagers played some basic, zero-sum, simultaneous move games like matching pennies. Mixed strategy Nash equilibrium predicts what each player should do in these scenarios. As it turns out, the chimps played strategies closer to the equilibrium predictions. Therefore, the proposed conclusion is that chimps are better game theorists than humans.
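To make that baseline concrete, here is a minimal sketch of how the equilibrium prediction is derived for a 2x2 zero-sum game. The payoff matrices below are illustrative, not the paper’s exact stakes: each player mixes so as to leave the opponent indifferent between her two actions.

```python
def mixed_equilibrium(A):
    """Mixed-strategy Nash equilibrium of a 2x2 zero-sum game.

    A[i][j] is the row player's payoff; the column player gets -A[i][j].
    Returns (p, q): the probability each player puts on their first action.
    Assumes an interior solution (no pure-strategy equilibrium).
    """
    # Row's mix p makes the column player indifferent between her columns.
    B = [[-A[0][0], -A[0][1]], [-A[1][0], -A[1][1]]]  # column player's payoffs
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # Column's mix q makes the row player indifferent between his rows.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Standard matching pennies: the matcher wins 1 on a match, loses 1 otherwise.
print(mixed_equilibrium([[1, -1], [-1, 1]]))  # (0.5, 0.5)

# An asymmetric variant (made-up payoffs): matching on the first side is
# worth 2, matching on the second is worth 1. Equilibrium is no longer 50/50.
print(mixed_equilibrium([[2, 0], [0, 1]]))    # roughly (0.333, 0.333)
```

Observed play in the experiment was compared against mixtures like these; “closer to equilibrium” means closer to these probabilities, nothing more.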
So what’s wrong here? Well…
The Sample Size Is Lacking
Who participated in the study? Six chimps, thirteen female Japanese students, and twelve males from Guinea. We can’t generalize differences between these groups in a meaningful way without a larger sample size.
The Chimps Aren’t a Random Sample
From the study:
Six chimpanzees (Pan troglodytes) at the Kyoto University Primate Research Institute voluntarily participated in the experiment. Each chimpanzee pair was a mother and her offspring…all six had previously participated in cognitive studies, including social tasks involving food and token sharing.
There are a couple of problems here. First, the pairs of chimps that played are related. It stands to reason that a mother who is good at these games would have offspring who are also good at them. So we really aren’t looking at six chimps so much as three. Ouch.
(It should be noted that the Japanese students aren’t really random either since they all come from the same university. However, this is true for many studies of this sort, so I’m going to overlook it.)
Second, these aren’t even your regular chimps. They have played plenty of games before!
Combined, this is like taking a group of University of Rochester Department of Political Science (URDPS) students and comparing their results to a group of random Californians. The URDPS group is “related” (they all go to the same school), and they all have plenty of experience playing games (at least three semesters’ worth). They would undoubtedly play more rationally than the random group from California. But you can’t use this to claim that New Yorkers play more rationally than Californians. Yet that is the analogous claim being made.
They Aren’t Playing the Game the Researchers Are Testing Against
Only the researchers knew the full game being played. The players, in contrast, knew only their own payoffs, not their opponents’. Mixed strategy Nash equilibrium only makes predictions about how players should play when all facets of the game are common knowledge. That’s clearly not the case here.
Instead, the real game here is spending a number of iterations trying to decipher what your opponent’s payoffs are and then figuring out how to strategize accordingly. It’s not clear how to interpret the results in this light, though it is interesting that the (small, biased sample of) chimps figured this out more quickly.
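One simple model of that “real game” is fictitious play: track the opponent’s observed moves and best-respond to their empirical frequencies, which requires knowing only your own payoffs. This is an illustrative sketch of that idea, not the paper’s analysis:

```python
from collections import Counter

def best_response(A, opp_counts):
    """Best-respond to an opponent's empirical action frequencies.

    A is my own 2x2 payoff matrix (the only thing I know); opp_counts
    tallies the opponent's observed moves (0 or 1). No knowledge of the
    opponent's payoffs is needed, only their history of play.
    """
    total = sum(opp_counts.values()) or 1
    q = opp_counts[0] / total  # empirical probability the opponent plays 0
    # Expected payoff of each of my actions against that empirical mix.
    values = [q * A[i][0] + (1 - q) * A[i][1] for i in range(2)]
    return max(range(2), key=lambda i: values[i])

# A matcher who has seen the opponent play 0 three times and 1 once
# learns to play 0 as well.
print(best_response([[1, -1], [-1, 1]], Counter({0: 3, 1: 1})))  # 0
```

A player following a rule like this can converge toward equilibrium-like frequencies without ever “knowing” the game the researchers wrote down.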
The Game Was Not Inter-species
If you want to say that chimps are better at these games than humans, you need to have chimps playing humans. You would then have them play some number of iterations and see who received more apples/yen by the end of the game. Instead, it was chimps against chimps and humans against humans. With that data, you cannot claim one party is better than the other.
Nash Equilibrium Isn’t a Good Baseline
“Fine,” you might say in response to the last point, “but the chimps still played closer to the Nash equilibrium strategies than humans. Therefore, chimps are better game theorists than humans are.” That’s still not what we want to know, though. Who cares if the players were playing Nash? If I played this game tomorrow, would I play Nash? Yes–if I thought the other player was clever enough to do the same. If not, I would try to beat them.
This is a nuanced problem, so let’s look at an example. If you ask people about soccer penalty kicks, they will likely tell you that you should kick more frequently to your stronger side as it becomes more and more accurate. This is wrong: in equilibrium, the goalie compensates by diving toward your stronger side more often, so you should actually increase your reliance on your weaker side. Knowing this, if I played the role of the goalie against a kicker following the naive advice, I would start diving to the kicker’s stronger side more frequently. The kicker would do poorly and I would do very well.
How would the study interpret this? It would say that we are both bad at game theory! But that’s not what’s going on here. We have one bad strategist and one sophisticated one. The interpretation of the study would get half of it right but completely blow the other half. Worse, a sophisticated goalie taking advantage of the kicker’s incompetence would outperform a goalie who played Nash instead.
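A toy calculation makes the point. Under a deliberately crude model where a save happens exactly when the goalie dives to the side the kicker shoots (illustrative numbers, not real penalty-kick data), a goalie who best-responds to a biased kicker outperforms a goalie who sticks to the 50/50 equilibrium mix:

```python
def expected_save_rate(goalie_mix, kicker_mix):
    """Probability of a save, assuming a save occurs iff the goalie
    dives to the same side the kicker shoots (a crude toy model)."""
    return sum(g * k for g, k in zip(goalie_mix, kicker_mix))

biased_kicker = (0.8, 0.2)      # over-relies on the stronger side
nash_goalie = (0.5, 0.5)        # the equilibrium mix in this toy game
exploiting_goalie = (1.0, 0.0)  # best response to the observed bias

print(expected_save_rate(nash_goalie, biased_kicker))       # 0.5
print(expected_save_rate(exploiting_goalie, biased_kicker)) # 0.8
```

The exploiting goalie is the better strategist precisely because he deviates from Nash, yet a distance-from-equilibrium measure would score him as the worse one.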
Nash equilibrium is useful for many reasons; testing whether one species is better than another with it as a baseline is not one of them.
I’d have to go through the paper more closely than I have so far to give an overall impression of it. However, even without that, it is clear that the way social media is describing the results is very questionable.