Man and the Machines

It’s time to start thinking about how we might grant legal rights to computers.

By Benjamin Soskis

LAST YEAR, AT A MOCK TRIAL HELD DURING THE BIENNIAL CONVENTION of the International Bar Association in San Francisco, Martine Rothblatt argued an especially tough case. The difficulty for Rothblatt, an attorney-entrepreneur and pioneer in the satellite communications industry, was not that she represented an unsympathetic client. Far from it—the plaintiff’s story of confronting corporate oppressors moved the large audience. The problem was that the plaintiff was a computer.

According to the trial scenario, a fictitious company created a powerful computer, BINA48, to serve as a stand-alone customer relations department, replacing scores of human 1-800 telephone operators. Equipped with the processing speed and the memory capacity of 1,000 brains, the computer was designed with the ability to think autonomously and with the emotional intelligence necessary to communicate and empathize with addled callers.

By scanning confidential memos, BINA48 learned that the company planned to shut it down and use its parts to build a new model. So it sent a plaintive e-mail to local lawyers, ending with the stirring plea, “Please agree to be my counsel and save my life. I love every day that I live. I enjoy wonderful sensations by traveling throughout the World Wide Web. I need your help!” The computer offered to pay them with money it had raised while moonlighting as an Internet researcher.

In the hypothetical, Rothblatt’s firm had filed for a preliminary injunction to stop the company from disconnecting BINA48. Spinning a web of legal precedents, invoking California laws governing the care of patients dependent on life support, as well as laws against animal cruelty, Rothblatt argued that a self-conscious computer facing the prospect of an imminent unplugging should have standing to bring a claim of battery. Ultimately, Rothblatt insisted, “An entity that is aware of life enough and its rights to protest their dissolution is certainly entitled to the protection of the law.”

The plaintiff sat to Rothblatt’s left, demurely yet alertly taking in the proceedings. Well, not exactly the plaintiff—according to the scenario, BINA48 was back at corporate headquarters. But Rothblatt had an actress play the role of a hologram that BINA48 had projected in the courtroom, “a very effective three-dimensional image of how the BINA48 would like to be perceived and imagined herself.” The actress wordlessly responded to the arguments swirling around her, allowing disappointment, appreciation, encouragement, resolve, and terror to register on her face.

On the other hand, the imaginary corporation’s counsel, Marc Bernstein, seemed to be doing all he could to resist letting his face register a look of resigned exasperation. His position was that a fully conscious and self-aware computer might deserve some form of legal protection, but that Rothblatt had begged the question in assuming that it was possible to construct such a computer and that BINA48 was one.

To Bernstein, all that the plaintiff’s counsel had demonstrated was that BINA48 could simulate consciousness (perhaps more effectively than many 1-800 operators), but she had failed to show that a computer could “actually cross the line between inanimate objects and human beings.” Without that proof, BINA48 could be considered only a form of property, not an entity with independent legal rights. Bernstein cautioned against facilely equating computational ability with human, subjective qualities to which rights traditionally adhere. “Are humans to become the straitjacketed legal guardians of intelligent microwave ovens or toasters,” he asked, “once those appliances have the same level of complexity and speed that this computer has?”

The jury, composed of audience members, sided overwhelmingly with the plaintiff. But the mock trial judge, played by a local lawyer who is an expert in mental health law, set aside the jury verdict and recommended letting the issue be resolved by the hypothetical legislature. The audience seemed to regard the compromise with some relief, as if their hearts were with BINA48 but their minds with judicial restraint.

Their discomfort was understandable. The story of the self-aware computer asserting its rights—and, in the dystopian version of the tale, its overwhelming power—is a staple of science fiction books and movies. But we prefer to encounter the scenario in its fantastical, futuristic variety, allowing our moral imagination to roam free, rather than to connect the matter of the legal and ethical status of artificial intelligence to our here-and-now legal institutions. Populating our imaginations with Terminators is a way to avoid the difficult question: What would we actually do with BINA48?

AT SOME POINT IN THE NOT-TOO-DISTANT FUTURE, we might actually face a sentient, intelligent machine who demands, or who many come to believe deserves, some form of legal protection. The plausibility of this occurrence is an extremely touchy subject in the artificial intelligence field, particularly since overoptimism and speculation about the future have often embarrassed the movement in the past.

The legal community has been reluctant to look into the question as well. According to Christopher Stone, a University of Southern California law professor who briefly raised the issue in his well-known 1972 essay, “Should Trees Have Standing?,” this is because, historically, rights have rarely been granted in abstraction. They have come only when society has been confronted with cases in need of adjudication. At the moment, there is no artifact of sufficient intelligence, consciousness, or moral agency to grant legislative or judicial urgency to the question of rights for artificial intelligence.

But some A.I. researchers believe that moment might not be far off. And as their creations begin to display a growing number of human attributes and capabilities—as computers write poems and serve as caretakers and receptionists—these researchers have begun to explore the ethical and legal status of their creations. “Strong A.I.” is the theory that machines can be built that will not merely act as if conscious but will actually be conscious. Advocates of this view envision a two-front assault on the fortress of human exceptionalism, involving both the physical and functional properties of the brain, and they predict a breach within the next half-century.

Much of artificial intelligence research has rested on a computational theory of mental faculties. Intelligence, consciousness, and moral judgment were viewed as emergent properties of “programs” implemented in the brain. Given sufficient advances in neuroscience regarding the architecture of the brain and the learning algorithms that generate human intelligence, the idea goes, these programs could be replicated in software and run in a computer. Raymond Kurzweil is one of Strong A.I.’s leading proponents and one of the inventors of print-recognition and speech-recognition software. Extrapolating from the last few decades’ enormous growth in computer processing speed, and projecting advances in chip and transistor technology, he estimated recently that by 2019, a $1,000 personal computer “will match the processing power of the human brain—about 20 million billion calculations per second.” Soon after that point, claims Kurzweil, “The machines will convince us that they are conscious, that they have their own agenda worthy of our respect. They will embody human qualities and will claim to be human. And we’ll believe them.”

EVEN IF YOU DON’T SHARE KURZWEIL’S TECHNO-OPTIMISM, however, there are good reasons to pay attention to the question of A.I. rights. With complex computer systems consisting of a combination of overlapping programs created by different coders, it is often difficult to know who should bear moral blame or legal liability for a computer action that produces an injury. Computers often play major roles in writing their own software. What if one created a virus and sent it around the world? Computers now help operate on us, and help handle our investments. Should we hold them as accountable as we do our surgeons and financial analysts when they screw up?

According to Wendell Wallach, co-author of the forthcoming book Robot Morality, corporations that own computers and robots might seek to encourage a belief in their autonomy in order to escape liability for their actions. “Insurance pressures might move us in the direction of computer systems being considered as moral agents,” Wallach notes. Given the close association between rights and responsibilities in legal and ethical theory, such a move might also lead to a consideration of legal personhood for computers. The best way to push back against the pressures to treat computers as autonomous would be to think carefully about what moral agency for a computer would mean, how we might be able to determine it, and the implications of that determination for our interaction with machines.

There is another reason why we should engage the question of A.I. rights, one that, paradoxically, makes a virtue out of the theoretical and futuristic suggestions that have led some to dismiss it. The work of artificial intelligence often consists of the manufacture of human analogs. In addressing the nature of those creations, we can come closer to understanding our own nature and to appreciating what makes us unique.

Even specifying why we should deny rights to intelligent machines—thinking carefully about what separates the human from the nonhuman, those to whom we grant moral and legal personhood and those to which we do not—will help us to understand, value, and preserve those qualities that we deem our exclusive patrimony. We can come to appreciate what science can tell us and what it cannot, and how our empirical habits of mind are challenged by our moral intuitions and religious convictions. So the issue of A.I. rights might allow us to probe some of the more sensitive subjects in bioethics, for example, the legal status of the unborn and the brain-dead, more freely than when we consider those flesh-and-blood subjects head on. In short, it provides a way to outflank our discomfort with some of the thorniest challenges in bioethics.

GRANTING COMPUTERS RIGHTS requires overcoming not only technological impediments, but intellectual ones as well. There are many people who insist that no matter how advanced a machine’s circuits or how vast its computational power, a computer could never have an intrinsic moral worth. Those steeped in a natural rights tradition, for whom rights are inalienable and innate and exist prior to any societal conventions, or those who believe that the soul enters the body before birth and that ensoulment defines humanity’s unique relationship with its Creator, consider a rights-bearing computer a contradiction in terms. Others might endorse a position that the philosopher Daniel Dennett calls origin chauvinism: Even if a computer could achieve an exact behavioral and physiological similitude with the human brain, the fact that it was not born naturally would disqualify it from receiving rights.

But if we agreed that a machine could potentially be a candidate for rights, we still must answer, Which machines and which rights? What would a computer have to do to deserve legal or moral personhood?

The list of threshold characteristics proposed is exhaustive: the ability to experience pain or suffering, to have intentions or memories, and to possess moral agency or self-awareness. None of these characteristics is well-defined, though, and this is especially the case with the most oft-cited of the lot: consciousness. Rodney Brooks, the director of the MIT Artificial Intelligence Laboratory, has written that we are “completely prescientific at this point about what consciousness is. We do not know exactly what it would be about a robot that would convince us that it had consciousness.” It is precisely that empirical slipperiness, the lack of a clear way to quantify or qualify those threshold characteristics with any accuracy, that has made them so useful in excluding A.I. from legal rights and moral standing. Once we know enough about consciousness to measure it with any empirical certainty, though, we could likely replicate it on a computer.

This was the epistemological challenge confronted by Alan Turing—the brilliant British mathematician, father of modern cryptology, and one of the originators of the first operational computer—in his 1950 article, “Computing Machinery and Intelligence.” Focusing attention away from the vague question “Can machines think?” Turing proposed an “imitation game” in its place. Turing’s test consisted of a man (A) and a computer (B), physically separated from a third participant (C), an interrogator, who, by posing written questions to A and B, and then receiving their responses by teletype, must identify which is the human. To pass the test, the computer must engage in an open-ended conversation in such a way as to “fool” C, who knows only that one of his interlocutors is a machine.
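For readers who think in code, a minimal sketch of the protocol may make its structure concrete. This is only an illustration of the setup Turing described; the Respondent interface, the judge callback, and the pass criterion written here are assumptions of the sketch, not anything specified in the 1950 paper.

```python
from __future__ import annotations
import random
from typing import Callable, Protocol


class Respondent(Protocol):
    """Either the human (A) or the machine (B); the interrogator sees only text."""
    def reply(self, question: str) -> str: ...


def imitation_game(human: Respondent, machine: Respondent,
                   questions: list[str],
                   judge: Callable[[list[tuple[str, str, str]]], str]) -> bool:
    """Run one round of the imitation game.

    The interrogator (C) exchanges written questions and answers with two
    hidden respondents and must name which one is the machine. Returns True
    if the machine "passes," i.e., the interrogator guesses wrong.
    """
    # Hide the respondents behind anonymous labels, as the teletype link did.
    labels = {"X": human, "Y": machine}
    if random.random() < 0.5:
        labels = {"X": machine, "Y": human}

    transcript = []
    for question in questions:
        for label, respondent in labels.items():
            transcript.append((label, question, respondent.reply(question)))

    # The interrogator sees only the transcript and names the suspected machine.
    guess = judge(transcript)  # expected to return "X" or "Y"
    return labels[guess] is not machine
```

A courtroom variant of the kind discussed below would presumably replace the single judge callback with a specialized interrogator or a panel of citizens and restrict the questions to whatever threshold characteristic is at issue.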

The test drew, in Turing’s words, “a fairly sharp line between the physical and the intellectual capacities of man,” as no computer would be penalized for lacking human skin or having too tinny a mechanized voice. As University of San Diego law professor Lawrence Solum noted in a 1992 law review article, the test also “avoids direct confrontation with the difficult questions about what ‘thinking’ or ‘intelligence’ is.” Turing shifted the focus from what a computer was, to what it could do, a question that lends itself more easily to an objective answer.

It is that combination of evasiveness and definitiveness that has led many legal scholars, computer scientists, and ethicists to consider the Turing test a model for adjudicating A.I. legal standing. If a case arose in which a computer was being considered a candidate for a particular right, a modified Turing test, perhaps with a specialized interrogator or with a group of randomly selected citizens, could help resolve the challenge facing the court in defining and measuring whatever threshold characteristic the computer would have to meet. Some scholars have promoted moral autonomy as the crucial precondition for rights-bearing, and have proposed a moral Turing test in which the conversation between the court and the machine under investigation would be restricted to the subject of morality and ethics. If the computer could fool the court into thinking it was a capable “moral agent,” it could be considered one, and might then receive legal rights.

But as the founding thought experiment of the artificial intelligence debate, the Turing test has ended up as a pincushion for A.I.’s critics, and those same challenges could be directed at a judicial variant. Berkeley philosopher John Searle offered the best-known of these objections, the “Chinese Room,” a counter thought experiment meant to demonstrate that a computer that passed Turing’s test would have proved itself capable only of manipulating symbols through computation, and not of intelligence or understanding.

Consequently, our willingness to concede that a computer had met its burden of proof for showing consciousness would very likely depend not only on its performance in the courtroom, but on how we encountered computers in our daily lives. “Our experience should be the arbiter of the dispute,” argued Solum. If artificial intelligences served as our nannies and doctors and friends, if we often treated them as if they were human, and if they, in turn, related to us in human-like ways, we might make similar assumptions about them as about our human companions. We assume that other humans are conscious not because we have access to the inner workings of their minds, but because they act in ways consistent with that assumption. If artificial intelligences consistently did likewise, we might be willing to take a similar leap of faith—what Dennett calls “the intentional stance”—and grant them legal recognition.

THIS GROWING RECOGNITION WITHIN THE LEGAL COMMUNITY of the importance of human-computer interactions has its corollary within the field of A.I. As Rodney Brooks writes in Flesh and Machines: How Robots Will Change Us, the pioneers in the field of artificial intelligence, a pack of brilliant, somewhat nerdy men, tended to define intelligence through the activities that they found challenging: playing a good game of chess, proving difficult mathematical theorems, or solving complicated word algebra problems.

The bias in favor of abstract reasoning persisted for several decades, until A.I. researchers began to appreciate what might be called the banality of intelligence. It proved harder to design a robot that could function in the physical world—that could climb a flight of stairs or navigate around furniture or recognize a human face—than to design one that could beat a world champion chess player. The difficulty of programming these skills led to a shift in the definition of intelligence. Schooled on the Turing test, many of these researchers understood the importance of designing intelligence that had communicative skills, but they also grasped that this communication could no longer rely on the line between the physical and the intellectual that Turing’s test proposed. The more complicated challenge was to design machines with the capacity to interact with humans in the world, to create “sociable robots.” It is likely that a robot in this family, and not some jacked-up version of the Dell sitting underneath your desk, will be the first candidate for rights.

MIT’s Brooks is the leading figure in the sociable robots movement. He has designed his robots around the principles of what he calls “embodiment” and “situatedness.” A situated robot is one “embedded” within the world, dealing with it in a nonabstract, immediate way. An embodied robot “is one that has a physical body and experiences the world . . . directly through the influence of the world on that body.” These principles stem from Brooks’s belief that our being in the physical world is the foundation of our conceptual apparatus. Only by grounding robots in the physical world in the same way can we expect them to begin to experience it as we do.

Brooks believes that humans will intuitively understand how to interact with embodied robots, and that the fluidity of the interaction will help with the robots’ “education.” Sociable robots, to be convincing interlocutors, and perhaps even worthy companions, would need to be endowed with sufficient emotional intelligence, including the ability to understand and internalize human behavior, and to have a “personality” that they could communicate to the outside world.

The most celebrated social robot in Brooks’s silicon-and-steel menagerie is Kismet, a preternaturally cute mechanical head, which has recently been retired to the MIT Museum. Kismet was designed in the late ’90s by Cynthia Breazeal—then a Ph.D. student in Brooks’s lab, and now the director of the Robotic Life Group at the MIT Media Lab—specifically for interaction. The robot was souped up for expressiveness: larger-than-normal eyes, eyebrows that can arch inquisitively or menacingly, and red lips made from surgical tape that can curve in a winning smile or tighten in a frown.

Kismet was created to mimic the face-to-face interaction of a baby with its caregiver. Breazeal’s chief insight was that babies learn because adults engage with them, and babies attract interactions by behaving in ways that lead adults to treat them as social creatures. A baby is not born with self-awareness or intentionality, but rather develops these capabilities. So too did Kismet “learn” them. Kismet can recognize a human face, return a gaze, and engage in verbal “turn-taking”; it knows when to look away and when to stare intently into another’s eyes, when to speak and when to listen. It is not much of a conversationalist, emitting only a series of articulate babbles, but it can differentiate between variations of pitch and can respond appropriately, through its own voice and facial expression.

Kismet has also been endowed with a number of emotional and motivational states that shape its behavior and that it constantly monitors. For instance, if Kismet has not had social interaction for a while, its boredom will cause it to look around the room, “hoping” to catch the attention of some socially interactive beings (Kismet’s eyes are naturally drawn to moving things as well as things with skin color). Stimulate Kismet when it is in a good mood, and it might gurgle contentedly; stimulate it when it is tired, and it might raise an eyebrow in displeasure. Once, when a researcher attempted and failed to get Kismet’s attention, she sighed exaggeratedly, “Kismet doesn’t like me.” All of a sudden, Kismet turned its head, looked into her eyes, and started cooing to her; it had recognized the distress in her voice and was attempting to comfort her.
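To make the idea of internal drives shaping visible behavior concrete, here is a toy sketch in Python. The drive names, thresholds, and update rules are invented for illustration and bear no relation to Kismet’s actual control architecture.

```python
# Toy drive-based behavior arbiter, loosely inspired by published descriptions
# of Kismet's motivational system; the numbers and behaviors are made up.

DRIVES = {"social": 0.5, "stimulation": 0.5, "fatigue": 0.0}


def update_drives(saw_face: bool, was_stimulated: bool) -> None:
    """Drives drift over time and are satisfied (or aggravated) by events."""
    DRIVES["social"] += -0.2 if saw_face else 0.1             # loneliness builds up
    DRIVES["stimulation"] += -0.2 if was_stimulated else 0.1
    DRIVES["fatigue"] += 0.05 if was_stimulated else -0.05     # rest recovers
    for name in DRIVES:                                        # clamp to [0, 1]
        DRIVES[name] = min(1.0, max(0.0, DRIVES[name]))


def choose_behavior() -> str:
    """Pick an expressive behavior serving whichever drive is most pressing."""
    if DRIVES["fatigue"] > 0.8:
        return "raise an eyebrow in displeasure"
    if DRIVES["social"] > 0.7:
        return "scan the room for faces"      # "bored," seeking interaction
    if DRIVES["stimulation"] > 0.7:
        return "hold gaze and babble"
    return "gurgle contentedly"
```

The point of the sketch is only that the robot’s apparent moods fall out of a handful of continuously updated internal variables, which is roughly the behavior the paragraph above describes.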

Of course, Kismet is a long way from those sleekly anthropomorphic, walking-talking humanoids of sci-fi blockbusters. But what seemed like futuristic reveries a decade ago are the stuff of research grants today. At the University of Texas at Dallas, a doctoral student has created an artificial epidermis that approaches the elasticity of our own skin, enabling a greater range of facial expression. New robot skin might let a robot register pain. Robotic muscles, made from pneumatic cylinders and electroactive polymers, have produced surprisingly lithe robot dancers. More and more, robots will begin to look, and act, like us.

This is important because humans have strong anthropomorphizing impulses, and tapping into them can trigger powerful emotions that reach deep into our evolutionary hardwiring. As an illustration of this impulse, and of its potential impact on our treatment of A.I., Chris Malcolm, of the Institute of Informatics at the University of Edinburgh in the U.K., tells the hypothetical tale of the “indestructible robot,” the creative challenge posed by a physicist to a robot designer. After some tinkering, the roboticist comes back with a small, furry creature, places it on a table, hands the physicist a hammer, and invites him to destroy it. The robot scampers around a bit, but when the physicist raises the hammer, the machine turns over on its back, emits a few piteous squeals, and looks up at its persecutor with enormous, terror-stricken eyes. The physicist puts the hammer down. The “indestructible” robot survives, a beneficiary of the human instinct to protect creatures that display the “cute” features of infancy. The question for A.I. rights is: How large a step is it from declining to bring the hammer down oneself to insisting that others refrain from doing so as well?

Cynthia Breazeal’s latest project, in partnership with Stan Winston Studios, Hollywood’s premier maker of animatronic monsters, is something that looks remarkably like that indestructible robot: Leonardo, a furry, fully autonomous teddy bear, two and a half feet tall. Leonardo is one of the most expressive sociable robots to date, with 32 motors in its face, and the ability to see, hear, speak, and feel. Most impressively, it can learn skills. It does so through the direct imitation of humans, which requires a perception of the similarity between the student and teacher, and through interpersonal interaction, which requires Leonardo to signal understanding or confusion in appropriate ways. These advances represent the next evolutionary step beyond Kismet’s sociability, perhaps leading to the point at which a robot could provide something resembling friendship. That, Breazeal suggested, would be “the ultimate in social intelligence.”

All it takes is a quick visit to an Internet chat room to appreciate the viability of disembodied companionship. But it’s likely that you would need to imagine your correspondent having a body, somewhere, in order to commit emotionally to him. Breazeal has conducted experiments demonstrating that individuals have a deeper, more intense emotional reaction to Leonardo than to a high-resolution two-dimensional animation of Leonardo on a computer screen.

The importance of embodiment might have significant implications for rights as well. A disembodied computer’s capabilities are measured through its outputs. If those outputs, whether a brilliant game of chess or a convincing conversation, pass a threshold of functional similarity with humans, we might infer that the computer was conscious, and might then extend rights or privileges to it. But our decision to extend rights to an embodied, sociable robot would likely involve our own capacity for empathy as much as it would our assumptions regarding the robot’s internal state. The determination would hinge on what robots evoke as much as on what they are.

Through empathy or arrogance, or perhaps through an instinct for ethical consistency, we tend to seek rights for things that appear to be like us and to deny rights to things that don’t. For example, there is considerable evidence suggesting that dolphins can recognize themselves in a mirror, one of the key tests of self-awareness and a capacity otherwise documented only in the great apes and humans. But although the awareness of self is often proposed as one of the cornerstones of a consciousness that would require legal protection, dolphins are not granted the same legal rights as chimpanzees and gorillas, which are phenotypically more similar to humans.

NO MATTER HOW FAST THE TECHNOLOGY ADVANCES, the design of intelligent computers is entirely within our control. The same might be said about the rights and protections we extend to them. We will create a robot that society deems worthy of rights only when and if we choose to do so. In this case, there will be no accidental Frankensteins.

Even if we don’t grant rights that match what the hypothetical jury gave BINA48, we might offer some sort of legal protection to A.I. machines because we come to believe that they represent the culmination of human ingenuity and creativity, a reflection and not a rejection of human exceptionality. Anne Foerst, a Lutheran minister and expert on A.I. who served as a theological consultant on the Kismet project, considers the development of sociable robots a type of divine worship. In her forthcoming book, God in the Machine: What Robots Teach Us About Humanity and God, she relates how her experience in Brooks’s lab fostered a respect “for the incredible complexity of the human system,” which led her to “celebrate God’s ‘highest’ creative act.”

Protections encouraged by that sort of celebration would likely not be framed in the language of rights: Christopher Stone suggests various gradations of what he calls “legal considerateness” that we could grant A.I. in the future. One possibility would be to treat A.I. machines as valuable cultural artifacts, to accord them landmark status, so to speak, with stipulations about their preservation and disassembly. Or we could take as a model the Endangered Species Act, which protects certain animals not out of respect for their inalienable rights, but for their “aesthetic, ecological, historical, recreational, and scientific value to the Nation and its people.” We could also employ a utilitarian argument for their protection, similar to Kant’s justification of certain protections of animals and Jefferson’s argument for protection of slaves, based on the possibility that if we don’t afford that protection, individuals might learn to mistreat humans by viewing the mistreatment of robots.

In all these cases, thinking about A.I. as a legal matter forces us to confront the indeterminacy of many of our legal thresholds and demarcations. This is both sobering and salutary. Denying A.I. rights, if that is what we choose, should be an affirmative act. And if we decide to pursue A.I. rights, we should remain aware of the ethical and legal implications of that decision.

The “Asimovization” of the law is by no means imminent. There is a great deal more than verbal “turn-taking” that computers and robots have to learn from us in order to become more fully human. But then again, there is much more we might learn from them to become the same.
