Minds That May Matter

In a glass petri dish, a roundworm no bigger than a human hair squirms away from a droplet of acid. In a white coat above it, a researcher watches patiently, documenting the worm’s every move. And in a series of labs around the world, Dr. Stephen Larson and the OpenWorm team are trying to figure out how to automate the entire process.
“OpenWorm is an open science project dedicated to creating the world’s first digital organism: specifically, C. elegans,” Larson told the HPR, referring to the simple roundworm that is the workhorse of much biomedical research. From tip to tip, C. elegans measures about a millimeter, and its whole body is composed of a scant 959 cells. OpenWorm has set the ambitious goal of building a wriggling, virtual model of a worm down to these cells, an unprecedented feat in biology and computer science.
In the short term, Larson explained, OpenWorm will improve our understanding of how the brain’s cells work by providing a faithful replica of the roundworm’s 302 neurons. In the long term, he said, “the closer the model gets to reproducing what the real biology can do, the more the worm can be a virtual research animal.” The prospect of studying a virtual animal is enticing for scientists: it could broaden the scope of biological research by letting them investigate more complicated and diverse relationships between variables.
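To get a concrete sense of what such a model involves, here is a deliberately crude sketch, ours rather than OpenWorm’s, of how a simulator might update a handful of neurons at each tick of virtual time. The leaky integrate-and-fire rule and every parameter below are illustrative assumptions; the biophysics OpenWorm actually targets is far richer.

    # Illustrative sketch only, not OpenWorm code: a toy leaky integrate-and-fire
    # network. All parameter values are arbitrary placeholders.
    import numpy as np

    def step(v, weights, spiked, drive, dt=0.1, tau=10.0,
             v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
        """Advance membrane voltages one time step; return new voltages and spike flags."""
        current = weights @ spiked + drive        # recurrent input plus external drive
        v = v + (-(v - v_rest) + current) * (dt / tau)
        fired = v >= v_thresh                     # which neurons crossed threshold
        v[fired] = v_reset                        # neurons that spiked reset
        return v, fired.astype(float)

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 2.0, size=(3, 3))   # three neurons, random connections
    v, spiked = np.full(3, -65.0), np.zeros(3)
    for _ in range(1000):                         # 100 ms of model time at dt = 0.1 ms
        v, spiked = step(v, weights, spiked, drive=20.0)

Scaling a toy like this to 302 cells is easy; making each virtual cell behave like its biological counterpart is the hard part, and it is where OpenWorm’s effort goes.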
But in striving towards an accurate computer copy of a living creature—in particular, its brain—OpenWorm raises the thorny issue of whether to regulate research on simulated animals as on real ones. Coming to an answer will require settling the debates about the nature of life and the mind that have dogged philosophy and neuroscience for decades.
The Problem of Suffering
Contemporary discussion of what makes nonhuman animals deserving of protection centers on an intuitive principle. “I would say it’s sentience: subjective awareness of the world,” Justin Goodman, the director of the Laboratory Investigations Department at PETA, told the HPR. Kathleen Conlee, the Vice President of Animal Research Issues at the Humane Society of the United States, agreed, adding that “any being who is capable of experiencing pain, distress, anxiety, [or] suffering” deserves moral consideration.
PETA and the Humane Society both oppose the use of nonhuman animals in most scientific research on the grounds that they suffer and are too different from humans to meaningfully contribute to scientists’ understanding of human biology. While virtual research wouldn’t allay the second concern, it could address the first by replacing many aspects of live animal testing. Larson explained that, by using virtual experiments, scientists could exert pinpoint control over their tests, develop intricate new hypotheses, and study thousands of virtual animals instead of dozens of real ones.
Thus, on one hand, virtual research could avert the suffering of many lab animals and perhaps improve the quality of research. On the other, if virtual animals do suffer, the number of them suffering in a computer lab could easily eclipse the number of live animals researchers could experiment on, raising serious ethical concerns.

The OpenWorm project is striving to create a highly accurate simulation of the simple roundworm.

Minds That May Matter
The crucial task is determining whether virtual animals can suffer. Under one set of assumptions, the answer is a definitive no. In an interview with the HPR, Université du Québec à Montréal professor Stevan Harnad maintained that simulated animals could never think or feel, explaining that the difference between a virtual and a real organism is “the same as the difference between a simulated waterfall and a real waterfall: a real waterfall is wet. A simulated waterfall is just squiggles.”
Harnad’s explanation, that computer programs are nothing more than lines of code—squiggles—draws a firm distinction between computers that process information and nervous systems that feel. When computers execute programs, Harnad argued, they apply a set of rules that converts inputs to outputs. But pushing around numbers and evaluating equations the way computers do is, by definition, a mindless task. Whether those outputs are this week’s weather forecasts or a model of a roundworm, the program itself is just a set of rules for a computer to follow, not a mind capable of feeling.
In Harnad’s account, asking whether virtual animals can feel pain is like asking whether Google Translate can understand language—it’s the wrong kind of question. Even if the most convincing computer model of an animal behaved as if it felt pain, it would not truly be suffering; it would only look that way.
Though confident in his reasoning, Harnad acknowledged that there are other cognitive scientists who would reject his conclusion. “If you want somebody that will disagree with me strongly,” he said, “call Dan Dennett. We’re very good friends. Unfortunately, we completely disagree on this.”
“Stevan and I have gone around on this for several decades—no progress,” Tufts professor Daniel Dennett told the HPR, with a warm chuckle. Taking a more serious tone, he argued that a strong indication of whether organisms feel pain is precisely how they act. “Behavior should matter,” Dennett said. “One of the reasons we’re sure that some people are congenitally insensitive to pain is [that] they don’t just say they are, [but] they [also] don’t respond to painful stimuli the way they ought to. They leave their hands on burning hot stoves.” Simply put, pain hurts, and animals that can feel it try their hardest to avoid it, leading to clear and observable patterns of action.
Harvard professor Jeffrey Lichtman picked up this thread, pointing out that a focus on the feeling of pain may even be misguided, as pain is just a feature of the broader behavior of harm avoidance. Minds drum up the feeling of pain to keep the body safe, he continued, but the animal kingdom is full of self-preserving behaviors. Lichtman even suspected “that the first living animals that could affect their position in the world by moving very quickly developed ways to avoid putting themselves at risk, and that is equivalent to pain.” He and Dennett agreed that even microbes have incentives to sense their surroundings and avoid harm analogously to how humans avoid pain.
Lichtman then turned to the complex computer programs that Harnad had rejected. “Is that equation then the entity? Does it feel pain?” He mused briefly, before pausing and laughing, almost in disbelief at the answer he was about to give. “But I have no clear sense of why that is so fundamentally different. My gut tells me yes—even the equation in some sense feels pain because it’s behaving like it feels pain, and that’s all we’re doing: behaving like it really hurt.” Dennett concurred, concluding that a sufficiently complex model “would be not importantly different from the real thing.”
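To see why the behavioral view is so hard to dismiss, consider a toy program, our own illustration rather than anything drawn from these researchers, that does nothing but follow a rule to retreat from a noxious stimulus.

    # A toy "agent" that merely follows a rule to move away from a stimulus.
    # It produces harm-avoiding behavior, yet it is plainly just arithmetic:
    # exactly the kind of case the researchers above disagree about.

    def recoil(position: float, stimulus: float, step: float = 1.0) -> float:
        """Take one step away from the stimulus, like a worm recoiling from acid."""
        return position + step if position >= stimulus else position - step

    position = 0.0
    for _ in range(5):
        position = recoil(position, stimulus=0.0)
    print(position)  # 5.0: five steps of "recoiling" away from the stimulus

On Harnad’s view this is squiggles all the way down; on Dennett and Lichtman’s, the difference between such a rule and a nervous system is one of degree rather than kind.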
Mental Blocks
The lack of consensus between Harnad, Lichtman, and Dennett—all respected cognitive scientists—points to the immense difficulty, and perhaps impossibility, of determining whether simulated beings would have a meaningful sense of pain. That Harnad and Dennett’s debate has spanned decades without exhausting itself hints at the depth and breadth of arguments they have mustered. And Lichtman’s surprise at his own conclusion indicates that even thinking about artificial organisms can be highly counterintuitive.
Some reasons that the debate is hard to resolve are psychological, stemming from the difficulty of thinking about how other minds may work. But at least part of the confusion comes from the fact that Harnad, Lichtman, and Dennett all articulated clear arguments; they just made different starting assumptions. If you grant Harnad that computers only follow instructions, then it is hard to argue that simulations can feel anything at all. Yet if you start as Lichtman and Dennett did, from the principle that behavior is what matters, then it is just as hard to disregard simulations that are designed to behave like animals.
It seems that in thinking about virtual animals, it’s not so much the arguments we make, but where we start, that determines our conclusions. The disagreement between these thinkers, then, is on axioms, the foundations of belief. That means that debating the ethics of virtual research will require more than making strong arguments; it may require us to clearly identify our starting principles.
Harder than setting these principles is standing by them, even when they lead to difficult conclusions. Following Harnad’s reasoning, we are left with the puzzling question of what makes our brains different from computer programs. The assumption that computers do not think is intuitive since we know exactly what rules they follow. Yet in a real sense, our brains are machines as well: they get input from the senses, process it through a complex series of chemical reactions, and produce behavioral output. Where computers use binary, we use biology. To assert that computer programs don’t feel, we would have to find an aspect of our physical brains that somehow breaks from this current understanding of neurobiology.
If instead we follow Dennett and Lichtman and argue that behavior is what matters, then the world is full of things that look like they’re avoiding pain. To say that some of them deserve ethical treatment while others do not would require explaining why some harms matter and others don’t. But this is another axiomatic decision. Whether we say that human pain is somehow special or that any harm-avoiding creature deserves protection, we are pointing back to our core beliefs about what matters, not deriving the answer from argument alone.
Animal rights activists and policymakers may have a way out without taking either side by adopting a stance similar to PETA’s current one. “Our position is we always give animals the benefit of the doubt,” Goodman explained, cautioning, “When you draw a line, you always find you’re moving that line.” The extension of this do-no-harm ethic would be to treat simulations as at risk of being sentient, and therefore at risk of being harmed by experiments. However, doing so would complicate virtual research, and might simply encourage scientists to continue testing on live animals.
Neurobiologists are currently working towards creating a wiring diagram of the human brain. Developing and beginning to understand the complete diagram, as OpenWorm does for the roundworm, will take decades.

A Distant Horizon
It is easy to get caught up in the cycle of debate and miss the reality that virtual biology research is a budding field at best. Dennett, Lichtman, and Larson all stressed that modeling anything beyond the roundworm is impossible with current technologies. For example, OpenWorm has been fortunate to benefit from a complete wiring diagram of the roundworm brain. Lichtman estimated that, using current techniques, creating a similar diagram for the human brain would require millions of petabytes of data—roughly as much digital content as humans have ever produced. Managing that much data is currently an immense challenge.
Furthermore, accurately simulating a brain requires much more than just a wiring diagram. Lichtman explained that the brain’s function also depends on the types of connections cells make and which molecules they use to communicate, among other factors, all of which would have to enter the model. “We’re maybe in first grade when it comes to that kind of modeling, even on a single piece of membrane. To do that for a whole brain is still science fiction,” he stated.
Even the fruit fly, another workhorse of biological research that is far less complex than humans, has hundreds of thousands of cells in its brain, dwarfing the roundworm’s 302. Larson guessed that, barring a massive increase in funding or some spectacular breakthrough, it would take something like two decades to draw the wiring diagram, collect the biological data, and develop the kinds of modeling techniques necessary to build a virtual fly.
This is all to say there will be plenty of time to hash out the details and choose axioms, and that time will certainly be helpful. While philosophers have been discussing the ethics of simulated minds for decades, animal rights organizations have yet to take up the discussion. Conlee told the HPR that, to her knowledge, the Humane Society has not considered the case of simulated animals, and Goodman said the same of PETA. Both were receptive to the idea of exploring the topic, but for now they plan to focus on ongoing animal testing in the real world.
For all the questions it raises, OpenWorm will hopefully provide some answers as well. Said Larson, “The more that we can nail down the mapping between neural activity and mental behaviors as we know it, the easier it will be for us to have a really detailed discussion about this.” So as the biologist works toward answers while her worm wriggles through its petri dish, we too may have to wait as OpenWorm inches toward completion.
Image Sources: Wikimedia Commons / Genesis12, Flickr / Understanding Animal Research, OpenWorm, Flickr / Maia Valenzuela