Science Is All Anthropology Now
We're using the powers of reason to try to understand the last great mystery
America is a science-loving culture, and proud of it. We’re the nation that once plunked astronauts down on the moon and vaccinated our way to the eradication of smallpox. In our spare time, we invented the internet. Our self-image as the nation of Progress™ depends on wrangling research and technology to dispatch ever-greater challenges and master more and more of the recalcitrant natural world. But as with most things, our interest in science is also bashed around by ephemeral trends. Just as grunge replaced ’80s synth-pop in the 1990s, then somehow gave way to boy bands and ultimately Taylor Swift in the 2000s, different scientific disciplines take turns in the pop-science limelight. One comes, another goes.
For a long time, physics was king. In the first half of the 20th century, discoveries in fundamental physics came so thick and fast — relativity, quantum mechanics, nuclear fission — that even experts could barely keep up. Physicists were celebrities, cover models of Time. Then other sciences began to horn in. With the discovery of DNA’s double-helix structure, popular interest in genetics and biology exploded, making writers like Richard Dawkins famous by the 1970s. Not long after that, neuroscience experienced a revolution. Neurologist Oliver Sacks became the new pop-science superstar in the 1980s and ‘90s.
The 2000s and early 2010s were, in turn, the era of pop psychology and economics — Nudge and Freakonomics and Daniel Kahneman1 taught us how to hack our brains, correct our biases, and maximize productivity. Business schools and corporate C-suites loved it.
Now, in the heady 2020s, there’s a new king on the throne: artificial intelligence. Practically every science journalist is writing about it. Every YouTube channel is YouTubing about it. This month’s issue of Scientific American, assuming anyone still reads that august periodical, is all about it. AI is sucking up all the oxygen in the room, and with hundreds of billions of dollars now being poured into developing new AI technologies, it’s not likely to slow down in the near future.
But this essay isn’t about AI. Not exactly.2
It’s about the fact that all of science is essentially transforming into anthropology.
I don’t mean cultural anthropology, the kind of research where PhD students camp out in a rainforest or on a tropical island with locals for a year or two, learn their language, and come home to publish Marxist dissertations and try to overthrow capitalism.
I mean anthropology in its truer, etymological sense: anthropos (man) + logos (knowledge, understanding). The study of humans, or human nature.
What I’m getting at is that, over the decades, the energy and momentum in the sciences have increasingly focused on questions of human nature, not just raw physical forces and particles. It’s been decades since physics was the coolest science on the block. Genetics, neuroscience, then pop psychology, then AI — in countless ways, it’s as if we’re more and more turning inward, trying to figure ourselves out instead of plumbing the distant reaches of earth and space. After all, artificial intelligence is really applied cognitive science, a way of working out and testing theories of how the human mind works. It’s almost as if AI is the culmination of a great shift in science, away from out there and towards in here.
From Out There to In Here
It certainly wasn’t always like this. From Galileo’s heliocentric solar system to Huygens’ clocks and Newton’s inverse-square law of gravity, the early advancements in modern science mostly helped us to understand the physical world as a neutral system of causes and effects out there, in the objective world. Even early medical breakthroughs, like William Harvey’s 17th-century work on the circulation of blood, depicted the human body as an essentially mechanical-functional system, a series of pumps, levers, and valves.
During this mechanistic era, the human mind was seen as either a byproduct of physical interactions between parts of the body and brain or — as for philosophers such as René Descartes and Immanuel Kant — something altogether foreign to physical reality, a kind of ghost in the machine that simply wasn’t amenable to scientific inquiry. What was amenable to science was the physical world. Darwin’s publication of On the Origin of Species in 1859, which brought biology into the modern era, only seemed to increase the dominance of mechanistic materialism. Darwin’s theory portrayed the biological world as essentially an enormous, impersonal mechanism, mindlessly sifting biological designs that succeeded from those that didn’t. It was a theory about life, but it wasn’t centered on life.
Physics Gives Up the Ghost
Yet in the background, here and there, some scientists began to take up the study of the supposed ghost in the machine. Wilhelm Wundt and William James founded the first dedicated psychology labs in Germany and America, measuring reflex responses and investigating consciousness scientifically. Hermann von Helmholtz, best known in physics for establishing the principle of the conservation of energy, formulated important (and still very relevant) hypotheses about how perception works in the brain. Quietly, serious researchers were getting interested in Mind.
Over the course of the 20th century, this scientific curiosity about Mind continued to grow. At first, it produced a lot of theory and comparatively little rigorous science. Freud’s and Jung’s extraordinarily influential theories of the unconscious revolutionized human self-understanding, despite being critiqued for being untestable. For a long time, the only game in town for rigorous lab psychology was behaviorism, which ignored mental states and treated behavior as an input-output system, not the product of a mind. It was only with the cognitive revolution of the 1950s that scientists started taking mental processes seriously.
Interestingly enough, this was about the time that physics began losing steam. After their extraordinary heyday in the early twentieth century, physicists just couldn’t keep producing discoveries at the same clip as they used to. Even today, working physicists overwhelmingly see the period between 1910 and 1940 as the most fruitful time in recent physics history. When surveyed, they rate every single decade since then as comparatively less impressive in terms of the importance and quality of Nobel Prize–winning discoveries. After World War II, “(t)he very best discoveries in physics, as judged by physicists themselves, became less important.”
By contrast, advances in the human sciences, including cognitive science and neuroscience, started coming thick and fast after the cognitive revolution. After the invention of reliable neuroimaging techniques in the 1990s, neuroscience took only a couple of decades to arrive at in-depth models of how the brain works at the neurochemical, cellular, neuroanatomical, and systemic levels, shedding light we’d never had before on both pathologies and normal brain functioning. In social cognitive neuroscience, discoveries have yielded key insights into what makes humans unique, including how we understand symbolic communication and cooperate with each other.3 Many of these fields, especially neuroscience and cognitive science, contributed heavily to the development of AI programs such as ChatGPT. Today, physics, while still lavishly funded and making incremental progress in some areas, is nowhere near as vibrant as AI and the other sciences of Mind.
Why This Is Happening
Maybe progress in physics slowed down simply because we’ve made most of the crucial discoveries, and there’s just not much low-hanging fruit left. You can only formulate an inverse-square law of gravity once. You can only subsume that law under general relativity once, too. Maybe someday physicists will come up with an overarching theory that unifies relativity and quantum mechanics (the Holy Grail of late 20th-century physics), but that mission has stalled out for nearly an entire human lifetime. You might be forgiven for losing a bit of confidence in it. It could be that there are just fundamental limits to what we can learn about the physical and material world, and we’re inching closer to that boundary, hitting the point of diminishing returns.
But the human sciences are a different story. Even as we conquered more and more of the external, physical world from the 17th century onward, countless hidden mysteries about human nature still awaited discovery. There was always going to come a point when it was time to turn our attention inward, toward the enigma of ourselves. It seems that time is now.
For one thing, our current civilizational challenges seem to call for such a shift. A lot of the problems that face us today aren’t merely physical puzzles, like building a moon rocket or engineering the interstate highway system. They’re social puzzles: how do governments regain legitimacy after blowing their credibility with common people? How do we coordinate internationally to minimize the risks of climate change or nuclear war? How do we reverse the trend of low birthrates and upside-down population pyramids? These aren’t problems that physics can solve. They call for insight and wisdom into how humans work, what makes us tick. The recent popularity of fields like game theory and cultural evolution, which study how humans cooperate and solve collective action problems, is a sign of the times.
But at an even deeper level, I think we’re being driven by a sometimes unconscious but always insatiable curiosity about what human minds are, how creatures with our range of capacities and intelligence could possibly arise in a world of matter in motion. To put it crudely, we’ve figured pretty much everything else out. The only truly great mystery that remains is ourselves.
AI Is Anthropology
The recent advances in AI, particularly large language models (LLMs) such as ChatGPT, seem to suggest that we’re making some progress. LLMs learn by predicting the next word in a sequence, then updating their predictive models with each hit or miss, gradually improving those models as they ingest (or “train” on) more data. As I’ve written before, this is not far off from the most cutting-edge models of how biological brains and nervous systems work. According to so-called predictive processing models of cognition, we animals are flesh-and-blood prediction machines, constantly positing and revising hierarchical predictive models of the world that we experience through sight, sound, taste, touch, and smell. With every experience we have, every mistake we make, we slightly improve our models, all for the purpose of surviving and adapting to the uncertain world around us. LLMs do something similar, but for a world of text, not space and time.
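To make that predict-then-update loop concrete, here’s a minimal sketch. It is emphatically not how ChatGPT or any real LLM is built (those are neural networks trained by gradient descent on enormous corpora); this toy just counts word pairs in a made-up corpus, guesses the next word, scores the hit or miss, and then folds the observation back into its model. The corpus and all names are illustrative, not taken from any actual system.

```python
# Toy illustration of "predict the next word, then update on the error."
# Uses simple bigram counts instead of a neural network; purely hypothetical.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat slept on the mat".split()

counts = defaultdict(Counter)   # counts[prev][nxt] = times `nxt` has followed `prev`
hits, misses = 0, 0

for prev, nxt in zip(corpus, corpus[1:]):
    # 1. Predict: guess the word most often seen after `prev` so far.
    guess = counts[prev].most_common(1)[0][0] if counts[prev] else None
    if guess == nxt:
        hits += 1
    else:
        misses += 1
    # 2. Update: fold the observed word back into the model.
    counts[prev][nxt] += 1

print(f"hits={hits}, misses={misses}")
print("learned continuations of 'the':", dict(counts["the"]))
```

The point of the sketch is only the shape of the loop: a model that starts out guessing blindly, registers every miss, and ends up with a usable distribution over what comes next, which is the same basic logic predictive-processing theories attribute to brains.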
In a way, then, the success of LLMs is a profound vindication of prediction-focused theories of the mind. This means that predictive AI has the hallmarks of scientific gold: it’s extraordinarily elegant, needing relatively few assumptions to realistically reproduce features of a complex phenomenon we’re trying to explain — in this case, intelligence. By implementing predictive-processing theories of the mind in silico, AI research has become able to build things that look increasingly like models of minds.
Ultimately, AI is anthropology. We’re building toy models of ourselves. We wanted to understand ourselves from a scientific standpoint, and this is the method we’ve hit on. The best test of our theories was never going to be just prediction and control of human intelligence and behavior,4 but replication of it. We’d know we were penetrating the deep mystery of ourselves when we could reverse-engineer ourselves. In a real way, that’s what we’re doing with LLMs and other AI technologies.
So the recent successes in AI, while in themselves exciting and terrifying (depending on who you ask), are only the culmination of a long turn inward, toward the last — or maybe just the most mysterious — frontier: human nature itself. Maybe AI will eventually help kick-start a new era of outward-focused science, helping us design warp engines to travel to the stars. But for now it heralds the ascendancy of a scientific world almost single-mindedly focused in here.5
The Mind in the Dark
Everywhere you look, the energy and momentum in the sciences are now centered on life and Mind. After centuries of struggling — and usually succeeding, with enough effort — to master the mind-independent, physical world, we’re now training all our scientific brainpower and commitment on the very thing that makes that scientific work possible to begin with: ourselves. We’re testing our theories by building simplified versions of ourselves either in computers or, as our sophistication grows, in robots.6 We’re dead set on getting it right. Maybe we’ll even pin down the architecture of Mind to such an extent that we can engineer intelligences even better and more breathtaking than our own.
In fact, this might be an inevitability, because science since the time of Francis Bacon has never been a neutral exercise in finding things out. It was always supposed to give us power: the ability to manipulate and control, to game the universe in our favor, to expand our sphere of action. As science becomes basically anthropological in every direction, our agenda of Baconian prediction and control is expanding to the ghost in the machine. We’ll eventually want to improve on that original version, simply because we feel that we can.
C.S. Lewis warned long ago in The Abolition of Man that to reduce human beings to objects of scientific control is to turn us into mere Nature: that is, to transform us into objects of use, probably by other humans. There’s no reason to think Lewis was wrong about that. Exhibit A might be the transformation of the internet into an extraordinarily sophisticated system of surveillance and manipulation of users, whether for marketing purposes or to even more questionable ends, all enabled by data analytics and behavioral science.
But what’s happening now in the sciences goes beyond even what Lewis predicted. We’re not just trying to control populations or remake human beings in some fanciful utopian or technocratic image, as he feared would happen as the human sciences grew in sophistication (although, of course, lots of people are using the human sciences to try to do those things). We’re trying to reproduce ourselves, to prove that we’ve got human nature figured out by reverse-engineering it in a lab and spawning off countless varied replications of it across the digital and physical worlds. This isn’t just technocratic managerialism run amok, which was essentially what Lewis feared. It’s curiositas: a kind of insatiable, gnawing hunger to understand, to penetrate the deepest mysteries and transform them into known quantities. To scatter the shadows, so to speak, with the luminescence of Mind.
In this case, though, the shadows were (are?) in the Mind itself. They are the Mind. It’s hard not to envision a sort of great circle closing in history, in the story of humanity. We just spent five hundred years, give or take, peering outwards, ransacking the physical world for its secrets, learning all we could know about how its parts fitted together, what made it hum and sing. Now the Mind is turning inward, directing its great light on itself, frustrated with the one remaining spot of shadow in the bright world now illuminated by reason.
Once, rigorous use of Mind was Carl Sagan’s lone candle in the dark, a tiny wavering flame thrusting back the shadows of superstition and ignorance. But bright and dark are relative. With the apparent triumph of the sciences in the 20th century, the world now seemed bright, but the Mind was stubbornly dark. The light could not seem to illumine itself, because it was the source of the light. Now, with the rise of sophisticated AI and the growth of the human sciences, we feel that we’re finally squaring that circle (to mix metaphors). The light is turning backward on itself, seeing itself, finding ways to tinker with what it sees. The age of science as anthropology is upon us, and Mind is laying itself bare. Whether it will fare better than the natural world, which Baconian mastery has reduced to a strip mine, remains to be seen.
May his memory be for a blessing. (I just discovered he’d recently died when researching for this essay.)
Although I hope to follow up on last year’s post about LLMs and autopoiesis soon.
Some of this research shows that cooperation depends on creating and sustaining higher-level concepts of a shared goal and, as psychologist Michael Tomasello puts it, an idea of “we.” It’s hard to overstate how crucial this “we” is to what makes humans human. Even our close relatives, chimpanzees, which are very smart, seem to represent and select actions based solely on their own point of view: the “I.” They live in a world with “I” and “you” (mostly I) but no “we.” Something that’s unique about human brains, then, is our ability to share perspectives and represent not only the other’s point of view, but an objective point of view that subsumes and transcends the perspectives of everyone contributing to the shared project: the view of the group agent, or “we.” This process is supported by careful, recursive attention-tracking, neurocognitively grounded in the network of brain regions dedicated to complex social cognition, such as the dorsomedial prefrontal cortex. These are all things we didn’t know as recently as the 1990s. The point is that recent progress in research on humans as a social and symbolic species has been truly remarkable.
Although you’d better believe that prediction and control is an important goal for the agencies and interests that fund research in the human sciences. If you haven’t read C.S. Lewis’s The Abolition of Man and That Hideous Strength yet, you should drop whatever you’re reading and pick up those books now. They’re going to come in handy.
It’s not just AI, either. If you don’t live under a rock, you know that data science is hoovering up a tremendous amount of scientific talent (including physics PhDs). And what kind of data do data scientists crunch? Our behavior online, our smartphone location data, our spending habits, epidemiology and health care, criminal statistics. Human stuff. Very large datasets get used in non–human-science fields too, such as astronomy and particle physics. But most of the funding and focus is on softer fields like marketing and user analytics, and even tools originally designed by physicists to process ultra-large datasets are increasingly being used for applications in human-focused fields like finance.
Robotics, another of today’s hot fields, is so closely linked with AI that you could see the two fields as a single continuum — especially as machines become more biologically realistic and autonomous, and as robotics researchers incorporate more and more cutting-edge theories and techniques from AI and cognitive science into their work. But whether you see robotics as part of AI or not, it’s clear that roboticists are increasingly trying to solve problems that organisms with minds figured out eons ago: perception, movement, learning, adaptation.
"What I’m getting at is that, over the decades, the energy and momentum in the sciences has increasingly focused on questions of human nature, not just raw physical forces and particles. "Thanks for this -- I've thought this myself for a while (cognitive scientist here) but haven't seen this argument laid out clearly as you have done here.
Back in 2008 or, eminent Marine Biologist Les Kaufman, speaking at Boston University, noted that the most important crises facing humans, and the ones scientists are trying to solve, all have their solutions centered in human behavior. This woke me up since this was a biologist speaking, not a psychologist.
So, humans are no longer trying to figure out nature in order to control it and reap its benefits, we are trying to figure out human nature. Les Kaufman implies the goal is not simply plenty of low-hanging fruit exists. But, to minimize the human-caused damage, including damage to ourselves, given that human behavior is the main obstacle to human physical health, a new thing in human history.