The Legal Rights of Robots

Robert A. Freitas Jr.

Student Lawyer 13(January 1985):54-56

Note: This web version is derived from an earlier draft of the paper and may differ in some substantial respects from the final published paper.

End Note. Can the wheels of justice turn for our friends in the mechanical kingdom? Don't laugh.

Robert A. Freitas Jr. is a California lawyer and a collector of futuristic legal arcana. He last wrote for Student Lawyer on the rights of extraterrestrials.



Recently a man in Bellevue, Washington, finding that his car would not go through some six inches of snow, became enraged and attacked the automobile. He broke out the car’s windows with a tire iron and emptied a revolver into its side. “He killed it,” said police. “It’s a case of autocide.”

Such wanton acts of violence are not limited to Coke machines, photocopiers, public telephones, and other gizmos that steal our dimes and quarters. In 1979, a sheriff in California shot a large mainframe computer for uncontrollably spewing out arrest records. As if to even the score, that same year a one-ton Litton Industries mobile robot stalked and killed a human warehouse worker who trespassed on the machine’s turf during business hours. The worker’s family sued Litton and was awarded a $10-million judgment, but the surly robot got off with a slap on the sensor.

Under present law, robots are just inanimate property without rights or duties. Computers aren't legal persons and have no standing in the judicial system. As such, computers and robots may not be the perpetrators of a felony; a man who dies at the hands of a robot has not been murdered. (An entertaining episode of the old Outer Limits TV series, entitled "I, Robot," involved a court trial of a humanoid robot accused of murdering its creator.) But blacks, children, women, foreigners, corporations, prisoners, and Jews have all been regarded as legal nonpersons at some time in history. Certainly any self-aware robot that speaks English and is able to recognize moral alternatives, and thus make moral choices, should be considered a worthy "robot person" in our society. If that is so, shouldn't such robots also possess the rights and duties of all citizens?

It may be an idea ahead of its time. People have been jailed for kidnapping or wrecking computers, but it’s the rights of humans, not machines, that are being protected. Trashing your own computer maliciously or another’s accidentally is no crime. When a computer forges checks using bogus data supplied by human accomplices, the people, not the machine, are charged with the crime.

But how long can the law lag behind technology? Knowledgeable observers predict consumer robotics will be a multibillion-dollar growth industry by 2000. Clever personal robots capable of climbing stairs, washing dishes, and accepting spoken commands in plain English should be widely available by 2005. By the turn of the century the robot population may number in the millions.

By 2010, most new homes will offer a low-cost domestic robot option. This “homebot” will be a remote-controlled peripheral of a computer brain buried somewhere in the house. Homebot software will include: (1) applications programs to make your robot behave as a butler, maid, cook, teacher, sexual companion, or whatever; and (2) acquired data such as family names, vital statistics and preferences, a floor map of the house, food and beverage recipes, past family events, and desired robot personality traits. If a family moves, it would take its software with it to load into the domestic system at the new house. The new homebot’s previous mind would be erased and overwritten with the personality of the family’s old machine.
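A minimal sketch of how such a portable homebot "mind" might be organized – every name and field here is hypothetical, since no such system exists and the article specifies no format:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class HomebotMind:
    """Hypothetical portable 'mind' for a domestic robot:
    application programs plus acquired family data."""
    applications: list[str] = field(default_factory=list)  # butler, maid, cook, ...
    family_names: list[str] = field(default_factory=list)
    house_map: dict = field(default_factory=dict)          # room -> floor-plan data
    recipes: dict = field(default_factory=dict)
    personality: dict = field(default_factory=dict)        # desired robot traits

    def save(self, path: str) -> None:
        """Serialize the mind so the family can carry it to a new house."""
        with open(path, "w") as f:
            json.dump(asdict(self), f)

def move_in(new_house_path: str, old_mind_path: str) -> HomebotMind:
    """Erase the new homebot's previous mind and overwrite it
    with the personality of the family's old machine."""
    with open(old_mind_path) as f:
        mind = HomebotMind(**json.load(f))
    mind.save(new_house_path)  # the new house computer now runs the old mind
    return mind

# The family's old machine exports its mind...
old = HomebotMind(applications=["butler", "cook"],
                  family_names=["Alice", "Bob"],
                  personality={"formality": "high"})
old.save("family_mind.json")
# ...and the new house's homebot is overwritten with it.
new_mind = move_in("new_house_mind.json", "family_mind.json")
```

The point of the design is that the mind is just data: erasing and overwriting the new machine, as described above, amounts to nothing more than a file copy.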

If homebots became members of households, could they be called as witnesses? In the past, courts have heard testimony from "nonhumans" during witch trials in New England, animal trials in Great Britain, and other cases in which animals, even insects, were defendants. But homebots add a new twist. Since the robot's mind is portable, Homebot Joe might witness a crime but Homebot Robbie might actually testify in court, if Joe's mind has, in the interim, been transferred to Robbie. (Is this hearsay?) Further, a computer memory that can be altered is hardly a reliable witness. Only if homebots have tamperproof "black box recorders," as in commercial jetliners, might such testimony be acceptable to a court.
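How might such a recorder be made tamperproof? One plausible mechanism – an assumption here, not something the article specifies – is a hash-chained log: each entry incorporates a digest of the entry before it, so any after-the-fact alteration breaks the chain.

```python
import hashlib, json, time

def append_entry(log: list[dict], event: str) -> None:
    """Append an event whose hash covers the previous entry,
    making silent alteration of earlier entries detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"time": time.time(), "event": event, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute every hash; any edited or deleted entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("time", "event", "prev")}
        if entry["prev"] != prev_hash:
            return False
        if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "observed: front door opened at 02:13")
append_entry(log, "observed: unknown person in kitchen")
assert verify(log)
log[0]["event"] = "nothing happened"   # tampering with the record...
assert not verify(log)                 # ...is detected
```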

Some futurists have already begun to devise elaborate codes of ethics for robots. The most famous are science-fiction writer Isaac Asimov’s classic Three Laws of Robotics. First: A robot may not injure a human being, or, through inaction, allow a human being to come to harm. Second: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third: A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
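Asimov's laws form a strict priority ordering, which can be expressed as a simple filter over candidate actions. A toy sketch, with deliberately simplified and entirely hypothetical action attributes:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # would injure a human, or allow harm through inaction
    ordered_by_human: bool = False
    endangers_self: bool = False

def permitted(action: Action) -> bool:
    """Apply the Three Laws as a strict priority ordering."""
    if action.harms_human:            # First Law overrides everything
        return False
    if action.ordered_by_human:       # Second Law: obey, unless the First Law blocks it
        return True
    return not action.endangers_self  # Third Law: self-preservation comes last

# A human orders the robot to destroy itself: the Second Law says obey.
assert permitted(Action("self-destruct", ordered_by_human=True, endangers_self=True))
```

Note that an action ordered by a human and endangering only the robot passes the filter – precisely the Second Law loophole discussed next.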

But even with Asimov's three laws in place there's lots of room for mischief. While corrupting a robot's "laws" may someday be deemed a serious felony, a human could order a robot to steal or to destroy property, other robots, or even itself, since the machine must dutifully obey under Asimov's Second Law.

With those kinds of abuses possible, questions of "machine rights" and "robot liberation" will surely arise in the future. Twenty years ago Hilary Putnam of the Massachusetts Institute of Technology became the first to address the issue of the civil rights of robots. After lengthy philosophical analysis, Putnam concluded that "discrimination based on the softness or hardness of the body parts of a synthetic organism seems as silly as discriminatory treatment of humans on the basis of skin color."

More recently Richard Laing, formerly a computer scientist at the University of Michigan, has contemplated the day when human-level intelligent machines exhibit complex behaviors including altruism, kinship, language, and even self-reproduction. “If our machines attain this level of behavioral sophistication,” Laing reasons, “it may finally not be amiss to ask whether they have not become so like us that we have no further right to command them for our own purposes, and so should quietly emancipate them.”

The case law on robots' rights is pretty thin but not, as one might expect, totally nonexistent. One criminal case, widely reported in the popular press under the sensational banner "Computer Raped by Telephone," involved a professional programmer who invaded the computer of a competitor and stole a copy of a valuable proprietary program using a telephone link and several secret passwords. During the investigation the question arose whether a search warrant could be issued ordering the computer to retrieve evidence of the programmer's invasion. The first such warrant was issued in the case that became Ward v. Superior Court of California (3 C.L.S.R. 206 [1972]) – the first time a computer had ever been hauled in for questioning.

Pro-robot precedent is slowly being established in the area of autopilot litigation. Airline autopilots are flying robots – computers controlling flaps and ailerons rather than mechanical arms or legs. In Klein v. U.S. (13 Av.Cas. 18137 [D.Md. 1975]), the court found that in negligence cases, while a pilot is not required to use the autopilot on a landing, his failure to use it may be inconsistent with good operating procedure and may be evidence of a failure of due care. In Wells v. U.S. (16 Av.Cas. 17914 [W.D.Wash. 1981]), another court inferred negligence on the part of the human pilot from evidence that he switched from automatic pilot to manual control in a crisis situation. These cases appear to be the first time that courts have recognized robot judgment as superior to human judgment in any legal capacity. Human beings were deemed negligent for not following a computer’s advice and for not surrendering control to a robot.

In other legal realms, the rapid spread of computers is already permitting the formation of contractual obligations among machines. A simple example is found in many supermarkets today. The checkout registers are minicomputers linked to the store's central computer, which records a reduction in inventory every time a purchase is made. When stock levels reach the re-order point, the machine automatically telephones a supply warehouse and orders what it judges to be the most economical quantity. Typically the supply warehouse is a separate legal entity. Thus the computers have made a simple contract among themselves. The only communication between buyer and seller is a series of electronic impulses containing information intelligible only to the computers that receive it.
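That "most economical quantity" is, in inventory theory, the classic economic order quantity (EOQ) – an assumption about what the store's machine computes, since the article doesn't say. A sketch of the reorder logic, with the warehouse's side reduced to a stub and all figures illustrative:

```python
from math import sqrt

def economic_order_quantity(annual_demand: float,
                            cost_per_order: float,
                            holding_cost_per_unit: float) -> float:
    """Classic EOQ formula: the batch size that minimizes
    combined ordering and inventory-holding costs."""
    return sqrt(2 * annual_demand * cost_per_order / holding_cost_per_unit)

def checkout(inventory: dict[str, int], item: str,
             reorder_point: int, place_order) -> None:
    """Register a sale; when stock falls to the reorder point,
    the store's computer 'contracts' with the warehouse's."""
    inventory[item] -= 1
    if inventory[item] <= reorder_point:
        qty = round(economic_order_quantity(5200, 40.0, 0.25))
        place_order(item, qty)  # the buyer-seller "offer," as electronic impulses

# Example: the warehouse side is just a stub here.
stock = {"coffee": 13}
checkout(stock, "coffee", reorder_point=12,
         place_order=lambda item, qty: print(f"ORDER {qty} units of {item}"))
```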

Science fiction writers Ben Bova and Harlan Ellison may have established a landmark precedent in robot civil rights while defending the copyright of their short story, “Brillo,” which they claimed had been infringed by the ABC/Paramount TV series, Future Cop. Judge Albert Stevens ruled that robots have the same status as human beings when used as characters in stories and are protected by the copyright laws. This may be the first time robots have ever been legally equated with human beings in any connection.

At this time no computer has been deemed an author under the copyright laws, although a computer-generated English-language story published in a national magazine did receive copyright protection. Computer programs that write programs (code generators) are being mass-marketed for home computers, and industry is clamoring for automatic programming techniques to improve efficiency. There is even a program that summarizes news stories as they appear on the UPI wire service. Courts will soon be forced to grapple with the unstated assumption underlying the copyright concepts of authorship and originality – that “authors” must be human.

If we give rights to intelligent machines, either robots or computers, we'll also have to hold them responsible for their own errors. Robots, by analogy to the "reasonable person" standard applied to humans, must conform to a "reasonable computer" standard. Sentient computers and their software should be held to the standard of competence of all other data processing systems of the same technological generation. Thus, if all "sixth generation" computers ought to be smart enough to detect bogus input in some circumstances, then, in those circumstances, a court will presume that a "sixth generation" computer knew or should have known the input data were bogus.

Exactly who or what would be the tortfeasor in these cases? Unlike a living being whose mind and body are inseparable, a robot’s mind (software) and body are severable and distinct. This is an important distinction. Robot rights most logically should reside in the mechanism’s software (the programs executing in the robot’s computer brain) rather than in its hardware.

This can get mighty complicated. Robots could be instantly reprogrammed, perhaps loading and running a new software applications package every hour. Consider a robot who commits a felony while running the aggressive "Personality A" program, but is running mild-mannered "Personality M" when collared by the police. Is this a false arrest? Following conviction, are all existing copies of the criminal software package guilty too, and must they suffer the same punishment? (Guilt by association?) If not, is it double jeopardy to take another copy to trial? The robot itself could be released with its aggressive program excised from memory, but this may offend our sense of justice.

The bottom line is it’s hard to apply human laws to robot persons. Let’s say a human shoots a robot, causing it to malfunction, lose power, and “die.” But the robot, once “murdered,” is rebuilt as good as new. If copies of its personality data are in safe storage, then the repaired machine’s mind can be reloaded and up and running in no time – no harm done and possibly even without memory of the incident. Does this convert murder into attempted murder? Temporary roboslaughter? Battery? Larceny of time? We’ll probably need a new class of felonies or “cruelty to robots” statutes to deal with this.

If robots are persons, will the Fifth Amendment protect them from self-incrimination? Under present law, a computer may be compelled to testify, even against itself, without benefit of the Fifth Amendment. Can a warrant be issued to search the mind of a legal person? If not, how can we hope to apprehend silicon-collar criminals in a world of electronic funds transfer and Quotron stock trading?

How should deviant robots be punished? Western penal systems assume that punishing the guilty body punishes the guilty mind – invalid for computers whose electromechanical body and software mind are separable. What is cruel and unusual punishment for a sentient robot? Does reprogramming a felonious computer person violate constitutional privacy or other rights?

If robots and software persons are entitled to protection of life and liberty, does "life" imply the right of a program to execute, or merely to be stored? Denying execution would be like keeping a human in a permanent coma – which seems unconstitutional. Do software persons have a right to the data they need in order to keep executing?

Can robot citizens claim social benefits? Are unemployed robo-persons entitled to welfare? Medical care, including free tuneups at the government machine shop? Electricity stamps? Free education? Family and reproductive rights? Don’t laugh. A recent NASA technical study found that self-reproducing robots could be developed today in a 20-year Manhattan-Project-style effort costing less than $10 billion (NASA Conference Publication 2255, 1982).

According to sociologist and futurist Arthur Harkins at the University of Minnesota, “the advent of robots with sexual-service capabilities and simulated skin will create the potential for marriage between living and nonliving beings within the next twenty years.” For very lonely people, humanlike robots that don’t age and can work nonstop could become highly desirable as marriage partners for humans. In many instances, says Harkins, “such marriages may be celebrated with traditional wedding vows and country club receptions.”

In the far distant future, there may be a day when vociferous robo-lobbyists pressure Congress to fund more public memory banks, more national network microprocessors, more electronic repair centers, and other silicon-barrel projects. The machines may have enough votes to turn the rascals out or even run for public office themselves. One wonders to which political party or social class the "robot bloc" will belong.

In any case, the next time that Coke machine steals your quarter, better think twice before you kick it. Someday you may need a favor.