Our Intelligence Springs From Our Vulnerability
To make technology smarter, we may have to make it more fragile.
When the moment of revelation arrived, Kingson Man was out walking his dog. Man, an assistant research professor of psychology at the University of Southern California, was strolling down the sidewalk, leash in hand, mulling over his work. The question on his mind that day grew out of his collaboration with a USC colleague, the eminent neuroscientist Antonio Damasio: Why, Man wondered, weren’t the products of artificial intelligence more . . . intelligent?
Lost in thought, he let his gaze fall on his goldendoodle, Mocha. “I started thinking about how soft and furry dogs are — how vulnerable,” says Man. It was his pet’s very susceptibility to harm, he mused, that motivated Mocha to behave in intelligently adaptive ways: to avoid a menacing fellow canine at the dog park or to lap up water on a hot day. As he and Mocha turned a corner toward home, his thoughts moved on to an appliance plugged into an outlet in his living room: his Roomba robot vacuum cleaner. “That thing sucks,” Man observes. “It gets stuck under chairs, it falls down the steps.” Its hard plastic case makes it impervious to harm, however, and so, once rescued, it goes blithely on its way. “It occurred to me then: Maybe vulnerability is exactly what AI is missing,” he recalls.
Man’s insight ultimately contributed to a startling new theory he developed along with Damasio, a theory about what makes organisms — both biological and engineered — smart. Their proposal could help usher in a generation of machines that are more intelligent and more useful. It could also help us gain a deeper understanding and appreciation of ourselves. For the characteristics that the two researchers seek to introduce into our technology are precisely those aspects of our humanity that we find most lamentable: our weakness, our frailty, our susceptibility to pain and injury, our mortality. Even as most engineers and computer scientists strive to “harden” their systems against failure and attack, Man and Damasio are exploring the potential benefits of building vulnerable robots.
Two types of intelligence
For Damasio, this work is a natural outgrowth of a long career focused on the centrality of sensation, feeling, and emotion to cognition. In a new book to be published this month, “Feeling & Knowing,” Damasio notes of human beings: “We are governed by two types of intelligence, relying on two kinds of cognition. The first is the one humans have long studied and cherished. It is based on reasoning and creativity and depends on the manipulation of explicit patterns of information.” This is also the kind of intelligence that AI developers have long sought to approximate or even surpass with their machines.
The second type of intelligence is less celebrated; it’s the kind we share with other living creatures, the kind that keeps us alive via a system of self-regulation. This system, known as homeostasis, provides us with a sense of our internal state, and it generates signals — hunger, thirst, pain, fatigue, desire — that prompt us to act in ways that keep this state balanced. In Damasio’s view, this survival-ensuring apparatus is the primordial fount of intelligence. “The very first examples of intelligent behavior emerged out of the governance of life,” he said in an interview. “Everything springs from that.”
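Stripped to its bare computational form, this kind of self-regulation is a negative-feedback loop: an internal variable drifts away from a setpoint, a signal grows in proportion to the deviation, and the signal prompts an action that restores balance. The sketch below is a loose analogy rather than anything from Damasio's book; the hydration and thirst variables, setpoint, and dynamics are all hypothetical illustrations.

```python
# A minimal sketch of homeostatic self-regulation as a negative-feedback loop.
# "Hydration" and "thirst" are hypothetical stand-ins; the setpoint, tolerance,
# and dynamics are illustrative assumptions, not drawn from Damasio's work.

SETPOINT = 0.7    # desired hydration level (arbitrary units)
TOLERANCE = 0.05  # deviations smaller than this produce no signal

def thirst(hydration: float) -> float:
    """Signal that grows with how far hydration has fallen below the setpoint."""
    deficit = SETPOINT - hydration
    return max(0.0, deficit - TOLERANCE)

def live_one_step(hydration: float) -> float:
    """One cycle of the loop: the body loses water, the thirst signal prompts drinking."""
    hydration -= 0.04                  # metabolic loss each step
    hydration += thirst(hydration)     # corrective action in proportion to the signal
    return hydration

level = 0.72
for step in range(8):
    level = live_one_step(level)
    print(f"step {step}: hydration={level:.3f}, thirst={thirst(level):.3f}")
```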
Yet it is just this kind of self-regulatory system, sensitive to pleasure and pain, that is missing from the marvels of AI. As impressive as these systems may seem, “their abilities are parasitic on their human creators,” notes Man. “The machines are responding to the metabolic and homeostatic requirements of graduate students working away in labs. It’s the students’ survival needs that drive the machines’ behavior: their need to secure grants, their need to publish papers, their need to graduate and get jobs.” In this scenario, the humans are the only parties that care whether the technology succeeds or fails, sticks around or ceases to exist. But what if robots and other products of artificial intelligence research could be made to care about their own survival?
The desire to stay alive
In a paper published in the journal Nature Machine Intelligence in 2019, Man and Damasio spelled out just such a bold vision of self-interested robots. “Attempts to create machines that behave intelligently often conceptualize intelligence as the ability to achieve goals, leaving unanswered a crucial question: whose goals?” the article began. “In a dynamic and unpredictable world, an intelligent agent should hold its own meta-goal of self-preservation.” The simple desire to stay alive, which so powerfully structures the behavior of every living creature, would lend its motivating force to the decisions made by the artificial agent. The agent would now have a reason to act intelligently, pursuing aims and solving problems in ways that promote its survival and perhaps even its flourishing.
The characteristics of the “feeling machines” imagined by Man and Damasio may sound curiously familiar to the human reader. These agents would have to possess physical bodies — bodies that need to be maintained “within a narrow range of viability states.” An internal monitoring system is required, and this system must generate signals, positive and negative, to which the agent is compelled to respond. The stakes for the agent must be high — existential, in fact — if they are to generate genuinely intelligent behavior. As the authors bluntly declare, “Rewards are not rewarding and losses do not hurt unless they are rooted in life and death.”
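Read in computational terms, that specification suggests an agent whose reward is grounded entirely in its own viability: internal variables must stay within a narrow range, deviations produce negative signals, and leaving the range ends the agent's existence. The sketch below is a hedged illustration of that idea, not the authors' implementation; every variable name, range, and rule in it is an assumption made for the example.

```python
# A hedged sketch, in the spirit of Man and Damasio's proposal, of an agent whose
# only "reward" is the state of its own viability variables and whose run ends when
# any of them leaves its viable range. All names, ranges, and dynamics here are
# illustrative assumptions.

import random
from dataclasses import dataclass

@dataclass
class ViabilityVariable:
    value: float
    low: float   # below this the agent is no longer viable
    high: float  # above this the agent is no longer viable

    def signal(self) -> float:
        """Negative 'feeling' that grows as the value drifts toward either boundary."""
        mid = (self.low + self.high) / 2
        half_range = (self.high - self.low) / 2
        return -abs(self.value - mid) / half_range

    def viable(self) -> bool:
        return self.low < self.value < self.high

class FeelingAgent:
    def __init__(self) -> None:
        self.body = {
            "energy": ViabilityVariable(0.6, 0.0, 1.0),
            "temperature": ViabilityVariable(0.5, 0.2, 0.8),
        }
        self.alive = True

    def feeling(self) -> float:
        """Overall internal signal: closer to zero means closer to balance."""
        return sum(v.signal() for v in self.body.values())

    def choose_action(self) -> str:
        """Attend to whichever variable currently generates the worst signal."""
        return min(self.body, key=lambda name: self.body[name].signal())

    def step(self) -> tuple[str, float]:
        # The world perturbs the body a little each step.
        for var in self.body.values():
            var.value += random.uniform(-0.06, 0.06)
        # The agent nudges its most endangered variable back toward balance.
        name = self.choose_action()
        var = self.body[name]
        var.value += 0.1 * ((var.low + var.high) / 2 - var.value)
        # Existential stakes: leaving the viable range ends the agent's run.
        self.alive = all(v.viable() for v in self.body.values())
        return name, self.feeling()

random.seed(0)
agent = FeelingAgent()
for t in range(30):
    if not agent.alive:
        print(f"step {t}: no longer viable")
        break
    action, feeling = agent.step()
    print(f"step {t}: attends to {action}, feeling={feeling:.3f}")
```

The point of such a toy, in the paper's framing, is that the agent's behavior is now organized around its own continued existence rather than around an objective imposed from outside.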
Though Man and Damasio explicitly take biological intelligence as their template, they don’t believe it’s the only kind in existence, or the only kind worth cultivating. “I don’t mean to say that robots need to imitate us in every particular in order to be intelligent,” says Damasio. “If we insisted that robots have to be exactly like us, we would be denying ourselves the possible development of a rich and effective intelligence that is not based on life.” Of course, he adds, much of what we want robots to do is to interact with us in human-friendly ways, helping us solve human problems. It may well be useful, therefore, to invent a class of robots that resemble us in some significant respects: less Roomba and more C-3PO.
Programming common sense
Adopting natural models for artificial intelligence might also address some of the shortcomings that have long plagued AI. While computers excel at tasks that require following rules, executing logic, and crunching numbers, they are notably deficient at carrying out those that require fluid social interaction, the navigation of physical landscapes, and the application of what humans call common sense. Though these activities may seem relatively mundane to us, they turn out to be extraordinarily difficult to program into a machine. “We lack conscious access to the complexity of our own thought processes, and that has led us to underestimate the richness of our own human intelligence,” says computer scientist Melanie Mitchell.
Mitchell, a professor at the Santa Fe Institute, in New Mexico, traces the cycles of excitement and disappointment that have characterized her field in a paper posted earlier this year on the digital platform arXiv. Titled “Why AI Is Harder Than We Think,” the article notes that the early pioneers in AI set out to create an elevated, platonic kind of intelligence, divorced from the grubby details of human existence. Yet “nothing in our knowledge of psychology or neuroscience supports the possibility that ‘pure rationality’ is separable from the emotions and cultural biases that shape our cognition and our objectives,” Mitchell writes. “Instead, what we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world.” Indeed, she adds, these attributes “are actually key to what makes intelligence possible.”
The embodied perspective — that is, the view that intelligence arises from interactions between an agent’s body and its world — has always been a minority voice within AI, albeit a persistent and increasingly outspoken one. Machines that more closely approximate biological bodies have also been made possible of late by the emerging field of soft robotics. Scientists are now able to clothe robots in a kind of engineered flesh — artificial tissues that are flexible and stretchable, in which an array of sensors can be implanted. Such soft materials provide a richer flow of feedback from the agent’s body and environment than the rigid materials from which robots have conventionally been built; they also allow the agent to sustain bodily harm. A robot can now be hurt.
A negative response
Man and Damasio’s proposal to build such “feeling machines,” such “vulnerable robots,” elicited a negative response from some members of the AI community. When the authors presented their ideas at scientific conferences, they heard from commentators troubled by the possibility of introducing more suffering into the world. Media reports on their article reflected a similar apprehension. “To make robots perform better, make them constantly fear death” was the takeaway offered by Dan Robitzski, a senior reporter at the technology news website Futurism, who added, “This is grim.”
Man was surprised by the reaction the paper generated, though he acknowledges that perhaps he shouldn’t have been. “It’s fair for people to focus first on the pain aspect, because pain is such a salient signal. Our minds seize upon even the possibility of pain,” he says. “But I do want to ask people to look also at the other side of the coin, which is pleasure and love of life. We’d be bringing more of that into the world as well.” Man warms to the possibilities: “Imagine how amazing it would be if we could engineer machines of loving grace, beings capable of empathy beyond anything humans can conceive of,” he says. “We could be building bodhisattvas.” In Buddhism, a bodhisattva is an enlightened being who offers kindness and compassion to others who are suffering.
It’s a beautiful vision. But might it represent another instance of trying to create an idealized human being out of silicon, to engineer out the parts of human nature that we find unattractive or unappealing? Perhaps our distinctive genius is also fed by our rage, our selfishness, our envy, and our greed. Just think of the many brilliant authors, artists, and scientists who were far from paragons of charity and virtue. To recreate humanlike intelligence, we may have to accept and emulate ourselves entirely as we are.
What do you think—does artificial intelligence need to be vulnerable like us in order to be smart like us?