Op-Ed Contributor

The First Church of Robotics

Credit: Ji Lee

Berkeley, Calif.

THE news of the day often includes an item about some development in artificial intelligence: a machine that smiles, a program that can predict human tastes in mates or music, a robot that teaches foreign languages to children. This constant stream of stories suggests that machines are becoming smart and autonomous, a new form of life, and that we should think of them as fellow creatures instead of as tools. But such conclusions aren’t just changing how we think about computers — they are reshaping the basic assumptions of our lives in misguided and ultimately damaging ways.

I myself have worked on projects like machine vision algorithms that can detect human facial expressions in order to animate avatars or recognize individuals. Some would say these too are examples of A.I., but I would say such work is research on specific software problems that shouldn’t be confused with the deeper issues of intelligence or the nature of personhood. Equally important, my philosophical position has not prevented me from making progress in my work. (This is not an insignificant distinction: someone who refused to believe in, say, general relativity would not be able to build a working GPS navigation system.)

In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all. For example, I.B.M. scientists recently unveiled a “question answering” machine that is designed to play the TV quiz show “Jeopardy.” Suppose I.B.M. had dispensed with the theatrics, declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained I.B.M.’s team as much (deserved) recognition as the claim of an artificial intelligence, but would also have educated the public about how such a technology might actually be used most effectively.

Another example is the way in which robot teachers are portrayed. For starters, these robots aren’t all that sophisticated — miniature robotic devices used in endoscopic surgeries are infinitely more advanced, but they don’t get the same attention because they aren’t presented with the A.I. spin.

Furthermore, these robots are just a form of high-tech puppetry. The children are the ones making the interaction work — having conversations and engaging with these machines, but essentially teaching themselves. This just shows that humans are social creatures, so if a machine is presented in a social way, people will adapt to it.

What bothers me most about this trend, however, is that by allowing artificial intelligence to reshape our concept of personhood, we are leaving ourselves open to the flip side: we think of people more and more as computers, just as we think of computers as people.

In one recent example, Clay Shirky, a professor at New York University’s Interactive Telecommunications Program, has suggested that when people engage in seemingly trivial activities like “re-Tweeting,” relaying on Twitter a short message from someone else, something non-trivial — real thought and creativity — takes place on a grand scale, within a global brain. That is, people perform machine-like activity, copying and relaying information; the Internet, as a whole, is claimed to perform the creative thinking, the problem solving, the connection making. This is a devaluation of human thought.

Consider too the act of scanning a book into digital form. The historian George Dyson has written that a Google engineer once said to him: “We are not scanning all those books to be read by people. We are scanning them to be read by an A.I.” While we have yet to see how Google’s book scanning will play out, a machine-centric vision of the project might encourage software that treats books as grist for the mill, decontextualized snippets in one big database, rather than separate expressions from individual writers. In this approach, the contents of books would be atomized into bits of information to be aggregated, and the authors themselves, the feeling of their voices, their differing perspectives, would be lost.

What all this comes down to is that the very idea of artificial intelligence gives us cover to avoid accountability by pretending that machines can take on more and more human responsibility. This holds for things that we don’t even think of as artificial intelligence, like the recommendations made by Netflix and Pandora. Seeing movies and listening to music suggested to us by algorithms is relatively harmless, I suppose. But I hope that once in a while the users of those services resist the recommendations; our exposure to art shouldn’t be hemmed in by an algorithm that we merely want to believe predicts our tastes accurately. These algorithms do not represent emotion or meaning, only statistics and correlations.

What makes this doubly confounding is that while Silicon Valley might sell artificial intelligence to consumers, our industry certainly wouldn’t apply the same automated techniques to some of its own work. Choosing design features in a new smartphone, say, is considered too consequential a game to leave to machines. Engineers don’t seem quite ready to believe in their smart algorithms enough to put them up against Apple’s chief executive, Steve Jobs, or some other person with a real design sensibility.

But the rest of us, lulled by the concept of ever-more intelligent A.I.’s, are expected to trust algorithms to assess our aesthetic choices, the progress of a student, the credit risk of a homeowner or an institution. In doing so, we only end up misreading the capability of our machines and distorting our own capabilities as human beings. We must instead take responsibility for every task undertaken by a machine and double check every conclusion offered by an algorithm, just as we always look both ways when crossing an intersection, even though the light has turned green.

WHEN we think of computers as inert, passive tools instead of people, we are rewarded with a clearer, less ideological view of what is going on — with the machines and with ourselves. So why, aside from the theatrical appeal to consumers and reporters, must engineering results so often be presented in a Frankensteinian light?

The answer is simply that computer scientists are human, and are as terrified by the human condition as anyone else. We, the technical elite, seek some way of thinking that gives us an answer to death, for instance. This helps explain the allure of a place like the Singularity University. The influential Silicon Valley institution preaches a story that goes like this: one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent A.I., infinitely smarter than any of us individually and all of us combined; it will become alive in the blink of an eye, and take over the world before humans even realize what’s happening.

Some think the newly sentient Internet would then choose to kill us; others think it would be generous and digitize us the way Google is digitizing old books, so that we can live forever as algorithms inside the global brain. Yes, this sounds like many different science fiction movies. Yes, it sounds nutty when stated so bluntly. But these are ideas with tremendous currency in Silicon Valley; these are guiding principles, not just amusements, for many of the most influential technologists.

It should go without saying that we can’t count on the appearance of a soul-detecting sensor that will verify that a person’s consciousness has been virtualized and immortalized. There is certainly no such sensor with us today to confirm metaphysical ideas about people, or even to recognize the contents of the human brain. All thoughts about consciousness, souls and the like are bound up equally in faith, which suggests something remarkable: What we are seeing is a new religion, expressed through an engineering culture.

What I would like to point out, though, is that a great deal of the confusion and rancor in the world today concerns tension at the boundary between religion and modernity — whether it’s the distrust among Islamic or Christian fundamentalists of the scientific worldview, or even the discomfort that often greets progress in fields like climate change science or stem-cell research.

If technologists are creating their own ultramodern religion, and it is one in which people are told to wait politely as their very souls are made obsolete, we might expect further and worsening tensions. But if technology were presented without metaphysical baggage, is it possible that modernity would not make people as uncomfortable?

Technology is essentially a form of service. We work to make the world better. Our inventions can ease burdens, reduce poverty and suffering, and sometimes even bring new forms of beauty into the world. We can give people more options to act morally, because people with medicine, housing and agriculture can more easily afford to be kind than those who are sick, cold and starving.

But civility, human improvement, these are still choices. That’s why scientists and engineers should present technology in ways that don’t confound those choices.

We serve people best when we keep our religious ideas out of our work.

A correction was made on Aug. 10, 2010:

An Op-Ed article on Monday about artificial intelligence gave an incorrect title for a book by the author, Jaron Lanier. It is “You Are Not a Gadget.”


Jaron Lanier, a partner architect at Microsoft Research and an innovator in residence at the Annenberg School of the University of Southern California, is the author, most recently, of “You Are Not a Gadget.”

A version of this article appears in print in Section A, Page 19 of the New York edition with the headline: The First Church of Robotics.
