Are Humans Smarter Than Chimps? Think Again
Author Frans de Waal deconstructs the notion that a chimpanzee’s intelligence is inferior to a human’s.
The following is an excerpt from Are We Smart Enough To Know How Smart Animals Are? by Frans de Waal.
Ayumu had no time for me while he was working on his computer. He lives with other chimps in an outdoor area at the Primate Research Institute (PRI) of Kyoto University. At any moment, an ape can run into one of several cubicles—like little phone booths—equipped with a computer. The chimp can also leave the cubicle whenever he wants. This way, playing computer games is entirely up to the apes, which guarantees sound motivation. Since the cubicles are transparent and low, I could lean on one to look over Ayumu’s shoulder. I watched his incredibly rapid decision making the way I admire my students typing ten times faster than me.
Ayumu is a young male who, in 2007, put human memory to shame. Trained on a touchscreen, he can recall a series of numbers from 1 through 9 and tap them in the right order, even though the numbers appear randomly on the screen and are replaced by white squares as soon as he starts tapping. Having memorized the numbers, Ayumu touches the squares in the correct order. Reducing the amount of time the numbers flash on the screen doesn’t seem to matter to Ayumu, even though humans become less accurate the shorter the time interval. Trying the task myself, I was unable to keep track of more than five numbers after staring at the screen for many seconds, while Ayumu can do the same after seeing the numbers for just 210 milliseconds. This is about one-fifth of a second, literally the bat of an eye. One follow-up study managed to train humans up to Ayumu’s level with five numbers, but the ape remembers up to nine with 80 percent accuracy, something no human has managed so far. Taking on a British memory champion known for his ability to memorize an entire stack of cards, Ayumu emerged the “chimpion.”
The distress Ayumu’s photographic memory caused in the scientific community was of the same order as when, half a century ago, DNA studies revealed that humans barely differ enough from bonobos and chimpanzees to deserve their own genus. It is only for historical reasons that taxonomists have let us keep the Homo genus all to ourselves. The DNA comparison caused hand-wringing in anthropology departments, where until then skulls and bones had ruled supreme as the gauge of relatedness. To determine what is important in a skeleton takes judgment, though, which allows the subjective coloring of traits that we deem crucial. We make a big deal of our bipedal locomotion, for example, while ignoring the many animals, from chickens to hopping kangaroos, that move the same way. At some savanna sites, bonobos walk entire distances upright through tall grass, making confident strides like humans. Bipedalism is really not as special as it has been made out to be. The good thing about DNA is that it is immune to prejudice, making it a more objective measure.
With regard to Ayumu, however, it was the turn of psychology departments to be upset. Since Ayumu is now training on a much larger set of numbers, and his photographic memory is being tried on ever shorter time intervals, the limits of what he can do are as yet unknown. But this ape has already violated the dictum that, without exception, tests of intelligence ought to confirm human superiority. As expressed by David Premack, “Humans command all cognitive abilities, and all of them are domain general, whereas animals, by contrast, command very few abilities, and all of them are adaptations restricted to a single goal or activity.” Humans, in other words, are a singular bright light in the dark intellectual firmament that is the rest of nature. Other species are conveniently swept together as “animals” or “the animal”—not to mention “the brute” or “the nonhuman”—as if there were no point differentiating among them. It is an us-versus-them world. As the American primatologist Marc Hauser, inventor of the term humaniqueness, once said: “My guess is that we will eventually come to see that the gap between human and animal cognition, even a chimpanzee, is greater than the gap between a chimp and a beetle.”
You read it right: an insect with a brain too small for the naked eye is put on a par with a primate with a central nervous system that, albeit smaller than ours, is identical in every detail. Our brain is almost exactly like an ape’s, from its various regions, nerves, and neurotransmitters to its ventricles and blood supply. From an evolutionary perspective, Hauser’s statement is mind-boggling. There can be only one outlier in this particular trio of species: the beetle.
Given that the discontinuity stance is essentially pre-evolutionary, let me call a spade a spade, and dub it Neo-Creationism. Neo-Creationism is not to be confused with Intelligent Design, which is merely old creationism in a new bottle. Neo-Creationism is subtler in that it accepts evolution but only half of it. Its central tenet is that we descend from the apes in body but not in mind. Without saying so explicitly, it assumes that evolution stopped at the human head. This idea remains prevalent in much of the social sciences, philosophy, and the humanities. It views our mind as so original that there is no point comparing it to other minds except to confirm its exceptional status. Why care about what other species can do if there is literally no comparison with what we do? This saltatory view (from saltus, or “leap”) rests on the conviction that something major must have happened after we split off from the apes: an abrupt change in the last few million years or perhaps even more recently. While this miraculous event remains shrouded in mystery, it is honored with an exclusive term—hominization—mentioned in one breath with words such as spark, gap, and chasm. Obviously, no modern scholar would dare mention a divine spark, let alone special creation, but the religious background of this position is hard to deny.
In biology, the evolution-stops-at-the-head notion is known as Wallace’s Problem. Alfred Russel Wallace was a great English naturalist who lived at the same time as Charles Darwin and is considered the co-conceiver of evolution by means of natural selection. In fact, this idea is also known as the Darwin-Wallace Theory. Whereas Wallace definitely had no trouble with the notion of evolution, he drew a line at the human mind. He was so impressed by what he called human dignity that he couldn’t stomach comparisons with apes. Darwin believed that all traits were utilitarian, being only as good as strictly necessary for survival, but Wallace felt there must be one exception to this rule: the human mind. Why would people who live simple lives need a brain capable of composing symphonies or doing math? “Natural selection,” he wrote, “could only have endowed the savage with a brain a little superior to that of an ape, whereas he actually possesses one but very little inferior to that of the average member of our learned societies.” During his travels in Southeast Asia, Wallace had gained great respect for nonliterate people, so for him to call them only “very little inferior” was a big step up over the prevailing racist views of his time, according to which their intellect was halfway between that of an ape and Western man. Although he was nonreligious, Wallace attributed humanity’s surplus brain power to the “unseen universe of Spirit.” Nothing less could account for the human soul. Unsurprisingly, Darwin was deeply disturbed to see his respected colleague invoke the hand of God, in however camouflaged a way. There was absolutely no need for supernatural explanations, he felt. Nevertheless, Wallace’s Problem still looms large in academic circles eager to keep the human mind out of the clutches of biology.
I recently attended a lecture by a prominent philosopher who enthralled us with his take on consciousness, until he added, almost like an afterthought, that “obviously” humans possess infinitely more of it than any other species. I scratched my head—a sign of internal conflict in primates—because until then the philosopher had given the impression that he was looking for an evolutionary account. He had mentioned massive interconnectivity in the brain, saying that consciousness arises from the number and complexity of neural connections. I have heard similar accounts from robot experts, who feel that if enough microchips connect within a computer, consciousness is bound to emerge. I am willing to believe it, even though no one seems to know how interconnectivity produces consciousness nor even what consciousness exactly is.
The emphasis on neural connections, however, made me wonder what to do with animals with brains larger than our 1.35-kilogram brain. What about the dolphin’s 1.5-kilogram brain, the elephant’s 4-kilogram brain, and the sperm whale’s 8-kilogram brain? Are these animals perhaps more conscious than we are? Or does it depend on the number of neurons? In this regard, the picture is less clear. It was long thought that our brain contained more neurons than any other on the planet, regardless of its size, but we now know that the elephant brain has three times as many neurons—257 billion, to be exact. These neurons are differently distributed, though, with most of the elephant’s in its cerebellum. It has also been speculated that the pachyderm brain, being so huge, has many connections between far-flung areas, almost like an extra highway system, which adds complexity. In our own brain, we tend to emphasize the frontal lobes—hailed as the seat of rationality—but according to the latest anatomical reports, they are not truly exceptional. The human brain has been called a “linearly scaled-up primate brain,” meaning that no areas are disproportionally large. All in all, the neural differences seem insufficient for human uniqueness to be a foregone conclusion. If we ever find a way of measuring it, consciousness could well turn out to be widespread. But until then some of Darwin’s ideas will remain just a tad too dangerous.
This is not to deny that humans are special—in some ways we evidently are—but if this becomes the a priori assumption for every cognitive capacity under the sun, we are leaving the realm of science and entering that of belief. Being a biologist who teaches in a psychology department, I am used to the different ways disciplines approach this issue. In biology, neuroscience, and the medical sciences, continuity is the default assumption. It couldn’t be otherwise, because why would anyone study fear in the rat amygdala in order to treat human phobias if not for the premise that all mammalian brains are similar? Continuity across life-forms is taken for granted in these disciplines, and however important humans may be, they are a mere speck of dust in the larger picture of nature.
Increasingly, psychology is moving in the same direction, but in other social sciences and the humanities discontinuity remains the typical assumption. I am reminded of this every time I address these audiences. After a lecture that inevitably (even if I don’t always mention humans) reveals similarities between us and the other Hominoids, the question invariably arises: “But what then does it mean to be human?” The but opening is telling as it sweeps all the similarities aside in order to get to the all-important question of what sets us apart. I usually answer with the iceberg metaphor, according to which there is a vast mass of cognitive, emotional, and behavioral similarities between us and our primate kin. But there is also a tip containing a few dozen differences. The natural sciences try to come to grips with the whole iceberg, whereas the rest of academia is happy to stare at the tip.
In the West, fascination with this tip is old and unending. Our unique traits are invariably judged to be positive, noble even, although it wouldn’t be hard to come up with a few unflattering ones as well. We are always looking for the one big difference, whether it is opposable thumbs, cooperation, humor, pure altruism, sexual orgasm, language, or the anatomy of the larynx. It started perhaps with the debate between Plato and Diogenes about the most succinct definition of the human species. Plato proposed that humans were the only creatures at once naked and walking on two legs. This definition proved flawed, however, when Diogenes brought a plucked fowl to the lecture room, setting it loose with the words “Here is Plato’s man.” From then on the definition added “having broad nails.”
In 1784 Johann Wolfgang von Goethe triumphantly announced that he had discovered the biological roots of humanity: a tiny piece of bone in the human upper jaw known as the os intermaxillare. Though present in other mammals, including apes, the bone had never before been detected in our species and had therefore been labeled “primitive” by anatomists. Its absence in humans had been taken as something we should be proud of. Apart from being a poet, Goethe was a natural scientist, which is why he was delighted to link our species to the rest of nature by showing that we shared this ancient bone. That he did so a century before Darwin reveals how long the idea of evolution had been around.
The same tension between continuity and exceptionalism persists today, with claim after claim about how we differ, followed by the subsequent erosion of these claims. Like the os intermaxillare, uniqueness claims typically cycle through four stages: they are repeated over and over, they are challenged by new findings, they hobble toward retirement, and then they are dumped into an ignominious grave. I am always struck by their arbitrary nature. Coming out of nowhere, uniqueness claims draw lots of attention while everyone seems to forget that there was no issue before. For example, in the English language (and quite a few others), behavioral copying is denoted by a verb that refers to our closest relatives, hinting at a time when imitation was no big deal and was considered something we shared with the apes. But when imitation was redefined as cognitively complex, dubbed “true imitation,” all of a sudden we became the only ones capable of it. It made for the peculiar consensus that we are the only aping apes. Another example is theory of mind, a concept that in fact derives from primate research. At some point, however, it was redefined in such a manner that it seemed, at least for a while, absent in apes. All these definitions and redefinitions take me back to a character played by Jon Lovitz on Saturday Night Live, who conjured unlikely justifications of his own behavior. He kept digging and searching until he believed his own fabricated reasons, exclaiming with a self-satisfied smirk, “Yeah! That’s the ticket!”
With regard to technical skills, the same thing happened despite the fact that ancient gravures and paintings commonly depicted apes with a walking cane or some other instrument, most memorably in Carl Linnaeus’s Systema Naturae in 1735. Ape tool use was well known and not the least bit controversial at the time. The artists probably put tools in the apes’ hands to make them look more humanlike, hence for exactly the opposite reason anthropologists in the twentieth century elevated tools to a sign of brainpower. From then on, the technology of apes was subjected to scrutiny and doubt, ridicule even, while ours was held up as proof of mental preeminence. It is against this backdrop that the discovery (or rediscovery) of ape tool use in the wild was so shocking. In their attempts to downplay its importance, I have heard anthropologists suggest that perhaps chimpanzees learned how to use tools from humans, as if this would be any more likely than having them develop tools on their own. This proposal obviously goes back to a time when imitation had not yet been declared uniquely human. It is hard to keep all those claims consistent. When Louis Leakey suggested that we must either call chimpanzees human, redefine what it is to be human, or redefine tools, scientists predictably embraced the second option. Redefining man will never go out of fashion, and every new characterization will be greeted with “Yeah! That’s the ticket!”
Even more egregious than human chest beating—another primate pattern—is the tendency to disparage other species. Well, not just other species, because there is a long history of the Caucasian male declaring himself genetically superior to everyone else. Ethnic triumphalism is extended outside our species when we make fun of Neanderthals as brutes devoid of sophistication. We now know, however, that Neanderthal brains were slightly larger than ours, that some of their genes were absorbed into our own genome, and that they knew fire, burials, hand-axes, musical instruments, and so on. Perhaps our brothers will finally get some respect. When it comes to the apes, however, contempt persists. When in 2013 the BBC website asked “Are You as Stupid as a Chimpanzee?” I was curious to learn how they had pinpointed the level of chimpanzee intelligence. But the website (since removed) merely offered a test of human knowledge about world affairs, which had nothing to do with apes. The apes merely served to draw a contrast with our species. But why focus on apes in this regard rather than, say, grasshoppers or goldfish? The reason is, of course, that everyone is ready to believe that we are smarter than these animals, yet we are not entirely sure about species closer to us. It is out of insecurity that we love the contrast with other Hominoids, as is also reflected in angry book titles such as Not a Chimp or Just Another Ape?
The same insecurity marked the reaction to Ayumu. People watching his videotaped performance on the Internet either did not believe it, saying it must be a hoax, or had comments such as “I can’t believe I am dumber than a chimp!” The whole experiment was taken as so offensive that American scientists felt they had to go into special training to beat the chimp. When Tetsuro Matsuzawa, the Japanese scientist who led the Ayumu project, first heard of this reaction, he put his head in his hands. In her charming behind-the-scenes look at the field of evolutionary cognition, Virginia Morell recounts Matsuzawa’s reaction:
Really, I cannot believe this. With Ayumu, as you saw, we discovered that chimpanzees are better than humans at one type of memory test. It is something a chimpanzee can do immediately, and it is one thing—one thing—that they are better at than humans. I know this has upset people. And now there are researchers who have practiced to become as good as a chimpanzee. I really don’t understand this need for us to always be superior in all domains.
Even though the iceberg’s tip has been melting for decades, attitudes barely seem to budge. Instead of discussing them any further here or going over the latest uniqueness claims, I will explore a few claims that are now close to retirement. They illustrate the methodology behind intelligence testing, which is crucial to what we find. How do you give a chimp—or an elephant or an octopus or a horse—an IQ test? It may sound like the setup to a joke, but it is actually one of the thorniest questions facing science. Human IQ may be controversial, especially when we are comparing cultural or ethnic groups, but when it comes to distinct species, the problems are an order of magnitude greater.
I am willing to believe a recent study that found cat lovers to be more intelligent than dog lovers, but this comparison is a piece of cake relative to one drawing a contrast between actual cats and dogs. The two species are so different from each other that it would be hard to design an intelligence test that both of them perceive and approach similarly. At issue, however, is not just how two animal species compare but—the big gorilla in the room—how they compare to us. And in this regard, we often abandon all scrutiny. Just as science is critical of any new finding in animal cognition, it is often equally uncritical with regard to claims about our own intelligence. It swallows them hook, line, and sinker, especially if they—unlike Ayumu’s feat—are in the expected direction. In the meantime, the general public gets confused, because inevitably any such claims provoke studies that challenge them. Variation in outcome is often a matter of methodology, which may sound boring but goes to the heart of the question of whether we are smart enough to know how smart animals are.
Methodology is all we have as scientists, so we pay close attention to it. When our capuchin monkeys underperformed on a face-recognition task on a touchscreen, we kept staring at the data until we discovered that it was always on a particular day of the week that the monkeys fared so poorly. It turned out that one of our student volunteers, who carefully followed the script during testing, had a distracting presence. This student was fidgety and nervous, always changing her body postures or adjusting her hair, which apparently made the monkeys nervous, too. Performance improved dramatically once we removed this young woman from the project. Or take the recent finding that male but not female experimenters induce so much stress in mice that it affects their responses. Placing a T-shirt worn by a man in the room has the same effect, suggesting that olfaction is key. This means, of course, that mouse studies conducted by men may have different outcomes than those conducted by women. Methodological details matter much more than we tend to admit, which is particularly relevant when we compare species.
Excerpted from Are We Smart Enough to Know How Smart Animals Are? by Frans de Waal. Copyright © 2016 by Frans de Waal. With permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.