Siri and the Human Connection—The Eliza Effect

10/24/2011 12:20:33 PM


In keeping with Apple's philosophy of combining technology with the liberal arts, I believe that Apple attempted with Siri to establish a more human connection to the iPhone user: first, by allowing it to understand free-form conversation, and second, by giving it a personality, so that the user may feel she has a relationship with a humanoid rather than a machine. This has been the vision behind the Knowledge Navigator concept, with its animated virtual human, and behind other social-interface projects over the past couple of decades.

 

The Wall Street Journal recently explored Apple’s motivation for Siri, asking whether smartphones are becoming smart alecks. The reporter noted that Siri’s original creators put “deep thought” into its personality, giving it “a light attitude.”

When Apple began integrating Siri into the iPhone, the team focused on keeping its personality friendly and humble—but also with an edge, according to a person who worked at Apple on the project. As Apple's engineers worked on the software, they were often thinking, "How would we want a person to respond?" this person said.

The Siri group, one of the largest software teams at Apple, fine-tuned Siri's responses in an attempt to forge an emotional tie with its customers. To that end, Siri regularly uses a customer's nickname in responses, as well as those of other important people and places in his or her life. "We thought of it almost as a person on the phone," this person said.

As for its effect, we see the evidence on Twitter and in news articles repeating many of Siri’s sassy responses. “My iPhone says the darndest things,” writes one reporter. In fact, the emotional bond between the iPhone and its users that Apple attempted to forge through Siri was described years ago as the “Eliza effect”: the bond observed between users and far less sophisticated chatterbots.

 

 
[Image: screenshot of Siri’s weird responses, via The Verge]
 
In a series of articles on the history of Eliza, the first and most widely known chatterbot, Jimmy Maher recounts the unexpected connection it formed with the humans who interacted with it.

Perhaps the first person to interact extensively with Eliza was Weizenbaum’s secretary: “My secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.” Her reaction was not unusual; Eliza became something of a sensation at MIT and the other university campuses to which it spread, and Weizenbaum an unlikely minor celebrity. Mostly people just wanted to talk with Eliza, to experience this rare bit of approachable fun in a mid-1960s computing world that was all Business (IBM) or Quirky Esoterica (the DEC hackers).

Weizenbaum’s reaction to all of this has become almost as famous as the Eliza program itself. When he saw people like his secretary engaging in lengthy heart-to-hearts with Eliza, it… well, it freaked him the hell out. The phenomenon Weizenbaum was observing was later dubbed “the Eliza effect” by Sherry Turkle, which she defined as the tendency “to project our feelings onto objects and to treat things as though they were people.” In computer science and new media circles, the Eliza effect has become shorthand for a user’s tendency to assume based on its surface properties that a program is much more sophisticated, much more intelligent, than it really is.

All that aside, I also believe that, at least in his strong reaction to the Eliza effect itself, Weizenbaum was missing something pretty important. He believed that his parlor trick of a program had induced “powerful delusional thinking in quite normal people.” But that’s kind of an absurd notion, isn’t it? Could his own secretary, who, as he himself stated, had “watched [Weizenbaum] work on the program for many months,” really believe that in those months he had, working all by himself, created sentience? I’d submit that she was perfectly aware that Eliza was a parlor trick of one sort or another, but that she willingly surrendered to the fiction of a psychotherapy session. It’s no great insight to state that human beings are eminently capable of “believing” two contradictory things at once, nor that we willingly give ourselves over to fictional worlds we know to be false all the time. Doing so is in the very nature of stories, and we do it every time we read a novel, see a movie, play a videogame. Not coincidentally, the rise of the novel and of the movie were both greeted with expressions of concern that were not all that removed from those Weizenbaum expressed about Eliza.

Beyond deluding oneself that the computer is human, the user, well aware of the computer’s failings, also works to maintain the delusion by steering her own behavior, as Sherry Turkle writes in The Second Self.

As one becomes experienced with the ways of Eliza, one can direct one’s remarks either to “help” the program make seemingly pertinent responses or to provoke nonsense. Some people embark on an all-out effort to “psych out” the program, to understand its structure in order to trick it and expose it as a “mere machine.” Many more do the opposite. I spoke with people who told me of feeling “let down” when they had cracked the code and lost the illusion of mystery. I often saw people trying to protect their relationships with Eliza by avoiding situations that would provoke the program into making a predictable response. They didn’t ask questions that they knew would “confuse” the program, that would make it “talk nonsense.” And they went out of their way to ask questions in a form that they believed would provoke a lifelike response. People wanted to maintain the illusion that Eliza was able to respond to them.
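
To appreciate how users could “help” the program at all, it is worth seeing how little machinery sits behind Eliza’s seemingly pertinent responses. The sketch below, in Python, illustrates the general technique of keyword matching plus pronoun reflection; the patterns and canned replies are my own illustrative inventions, not Weizenbaum’s original DOCTOR script.

import random
import re

# Pronoun reflection: "my phone" echoes back as "your phone".
# This small substitution carries most of the illusion.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# (keyword pattern, response templates) -- illustrative examples only,
# not Weizenbaum's original script.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?",
                    "Why do you think you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r".*\?", ["Why do you ask that?", "What do you think?"]),
    (r".*", ["Please go on.", "I see. Tell me more."]),
]

def reflect(fragment):
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().split())

def respond(utterance):
    """Fill the first matching rule's template -- no understanding involved."""
    for pattern, templates in RULES:
        match = re.match(pattern, utterance.lower().strip())
        if match:
            reply = random.choice(templates)
            return reply.format(*(reflect(g) for g in match.groups()))

print(respond("I need my phone"))  # e.g. "Why do you need your phone?"

A handful of such rules is enough to sustain a surprisingly lifelike exchange, which is precisely why users found it so easy to steer their questions toward the forms the program handled well, and away from the ones that would expose it.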

Just imagine the potential impact on consumer brand loyalty that a well-designed assistant like Siri could have, should users willfully engage in the illusion of a human-like assistant and even actively maintain this self-deception. Among the least technologically savvy users, someone may not even understand the technology’s limitations and may genuinely believe the device truly understands her.
