Computers As People: Happy Customers and Automation

Forget the Turing Test! (That test, proposed in 1950, was a measure of machine intelligence that required a machine to interact with a person so effectively that the person could not distinguish it from a human.) But you don’t have to try to fool people – research shows well-designed automation can make people feel like they are interacting with a real person even when they know they aren’t.

So, what can businesses do to make computer interactions more “human”? It turns out that people DO tend to treat computers like people, and that changing the interaction can enhance that tendency. Here are a few ways to humanize your automation:

Get on the Same Team

It’s well-established that people will form “team” allegiances very quickly and with very little prompting. Studies show that people can bond with computers in much the same way. Stanford professor Clifford Nass describes research demonstrating that:

For half of the participants, we gave people a blue wristband, put a blue border around the computer, and told the participant that they and the computer were “the blue team.” The other half of participants were also given a blue wristband, but they worked with a green-bordered monitor and were told that they were the “blue person working with the green computer.” Although every other aspect of the 40-minute interaction was the same, the “team” participants thought that the computer was smarter and more helpful and they worked harder because of the special “bonds” between the two teammates. [From WSJ.com – Sweet Talking Your Computer.]

Can you find some common ground with your users? Do you have some individual user data that would let you, say, tailor an interface to each user? (One trivial example: if you know their favorite sports team, you could embellish the interface with that team’s colors.)
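
If you want to experiment with that idea, here is a minimal sketch (in TypeScript) of mapping known user data onto an interface theme. The profile fields, team names, and colors are hypothetical stand-ins, not taken from any real system:

```typescript
// Minimal sketch: pick an interface theme from per-user data.
// UserProfile, TEAM_PALETTES, and the colors below are hypothetical examples.

interface UserProfile {
  name: string;
  favoriteTeam?: "blue" | "green" | "red";
}

const TEAM_PALETTES: Record<string, { accent: string; border: string }> = {
  blue:  { accent: "#1d4ed8", border: "#93c5fd" },
  green: { accent: "#15803d", border: "#86efac" },
  red:   { accent: "#b91c1c", border: "#fca5a5" },
};

// Fall back to a neutral palette when we know nothing about the user.
function themeFor(profile: UserProfile) {
  const palette = profile.favoriteTeam
    ? TEAM_PALETTES[profile.favoriteTeam]
    : { accent: "#334155", border: "#cbd5e1" };
  return { ...palette, greeting: `Welcome back, ${profile.name}!` };
}

console.log(themeFor({ name: "Dana", favoriteTeam: "green" }));
```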

Nass goes on to note that just about every social science finding about person-to-person interactions can also be applied to interactions between people and computers.

“I’m on YOUR side!”

One of the most reviled computer characters in PC history was Microsoft’s Clippy, a cartoon paperclip that seemed to delight in asking users inane and repetitive questions about what they were doing, in a (usually) vain attempt to help. Clippy was so annoying that hate sites, groups, and videos targeting him sprang up around the Web. Nass found that all this negative emotion could be reversed easily:

In an experiment, we revised Clippy so that when he made a suggestion or answered a question, he would ask, “Was that helpful?” and then present buttons for “yes” and “no.” If the user clicked “no,” Clippy would say, “That gets me really angry! Let’s tell Microsoft how bad their help system is.” He would then pop up an email to be sent to “Manager, Microsoft Support,” with the subject, “Your help system needs work!” After giving the user a couple of minutes to type a complaint, Clippy would say, “C’mon! You can be tougher than that. Let ’em have it!”

The system was shown to 25 computer users, and the results were unanimous: people fell in love with the new Clippy. A long-term business user of Microsoft Office exclaimed, “Clippy is awesome!” An avowed “Clippy hater” said, “He’s so supportive!”

Got a feedback function? Like Clippy 2.0, position the interface as being on the user’s side, not yours. (Good human salespeople know this works. When things go wrong with an order, they position themselves as customer advocates rather than company apologists.)
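
If you want to see what that might look like in code, here is a minimal TypeScript sketch of a feedback handler that sides with the user when an answer wasn’t helpful, loosely modeled on the revised Clippy described above. The prompts, recipient address, and sendEmail stub are all hypothetical:

```typescript
// Minimal sketch of an "on your side" feedback flow.
// The messages, address, and sendEmail stub are hypothetical.

type FeedbackResult = "helpful" | "not_helpful";

function sendEmail(to: string, subject: string, body: string): void {
  // Stand-in for whatever mail or ticketing integration you actually use.
  console.log(`To: ${to}\nSubject: ${subject}\n\n${body}`);
}

function handleFeedback(result: FeedbackResult, userComplaint: string): void {
  if (result === "helpful") {
    console.log("Glad that worked!");
    return;
  }
  // Side with the user: the assistant escalates the problem on their behalf.
  console.log("That gets me really angry! Let's tell the team how bad this help system is.");
  sendEmail(
    "manager.support@example.com",
    "Your help system needs work!",
    userComplaint || "This answer did not help me at all."
  );
}

handleFeedback("not_helpful", "The suggestion had nothing to do with my question.");
```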

Specialize

People accord more wisdom to devices that specialize. Nass found that people rated news programs more highly on multiple criteria when they thought the TV they were watching showed only news content.

Making your computer interface “an expert” will increase its credibility. People will trust a “Business Laptop Configuration Wizard” more than an “Order Form.”
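
As a trivial illustration, the same underlying order flow can simply be presented under a specialist label per product category. A minimal TypeScript sketch, with hypothetical category names and titles:

```typescript
// Minimal sketch: present the same order flow under an expert-sounding,
// category-specific label. The categories and titles are hypothetical.

const WIZARD_TITLES: Record<string, string> = {
  "business-laptop": "Business Laptop Configuration Wizard",
  "home-office": "Home Office Setup Advisor",
};

function orderFormTitle(category: string): string {
  // Fall back to the generic label only when no specialist persona exists.
  return WIZARD_TITLES[category] ?? "Order Form";
}

console.log(orderFormTitle("business-laptop")); // "Business Laptop Configuration Wizard"
```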

Match the human emotion

While determining the emotional state of another person isn’t easy, particularly for a computer, doing so can lead to a better interaction. Nass describes a driving-simulator study in which participants were identified as happy or sad. When the computer matched the emotional pitch of its human partner, satisfaction was higher and fewer accidents occurred. (The “sad” computer wasn’t really sad, but spoke in a subdued and negative tone.)

While detecting emotions may be possible at some point (see Mood-Sensing Advertisements and The Emotional Computer – Part 2), some businesses may not have to resort to high-tech solutions. A happy business (say, cruise sales) or a sad business (e.g., funerals) can guess the predominant emotional state of its clients and tailor any automated interfaces accordingly.
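
For the low-tech version, an automated interface could simply pick its tone from a configured “business mood.” A minimal TypeScript sketch, with hypothetical moods and copy:

```typescript
// Minimal sketch: choose message tone from the predominant emotional state
// expected in your clientele. The moods and copy below are hypothetical.

type Mood = "happy" | "somber";

const CONFIRMATION_COPY: Record<Mood, string> = {
  happy: "Fantastic! Your booking is confirmed. Get ready for a great trip!",
  somber: "Thank you. Your arrangements are confirmed, and we are here if you need anything.",
};

// A cruise line might configure "happy"; a funeral home, "somber".
function confirmationMessage(businessMood: Mood): string {
  return CONFIRMATION_COPY[businessMood];
}

console.log(confirmationMessage("somber"));
```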

It’s not a computer, it’s a person!

The neuromarketing takeaway is that if you are going to structure a human-computer interaction, assume that people will think of the computer as a person! That means incorporating the right social strategy – imagine that you were trying to teach a new (and slightly dense) employee how to interact emotionally with customers, and build that logic into the automated system.
