Artificial Intelligence
Why do most chatbots fail—and how to avoid it
By Dawn Harpster
Chatbots don’t need to have a persona, but there are other ways to ensure digital assistants create a unique and compelling customer experience that aligns with brand guidelines.
I love my job, but when strangers ask me what I do, I just tell them I’m a software designer. That’s because people have lots of opinions about chatbots and digital assistants, most of them negative. And let’s face it: We’ve all experienced frustration when engaging with conversational AI. Raise your hand if you’ve ever yelled, “SPEAK TO A REPRESENTATIVE!” into the phone.
If a voice assistant makes someone feel uncomfortable, it’s because it isn’t giving them clear guidance. If users walk away angry, who are they angry with? Not the bot persona—usually it’s the company for creating a bad experience.
Anyone who thinks designing AI conversations is easy doesn’t realize how complex and unpredictable human behavior can be. But that’s one of the reasons why this work is endlessly fascinating.
On an episode of the VUX World podcast, I joined host Kane Simms for a conversation about why most chatbots are inadequate, annoying, or ineffective.
Here, I break down the key takeaways from our talk about why chatbots fail: the mistakes conversation designers make, misconceptions about AI, and a few tips on how to create a bot that works for you—and your customers.
Chatbot mistake #1: Personas over people.
For years, the pervasive trend in the chatbot AI space has been creating a distinct persona for each bot. Does a digital assistant need a personality? Maybe if its purpose is (at least partially) entertainment. Brand identity is another consideration. But most of the time, persona is secondary to user-friendliness.
My philosophy is pretty simple: The user is the most important actor, not the virtual assistant’s persona. You don’t need a persona at all to have a good virtual assistant. You do need good writing, well-designed intents, and good training phrases. Plus, you need to understand what your user wants to accomplish.
Placing more emphasis on persona than on the user experience is detrimental to both the digital assistant and the user. Most of the time, users just want to get business done: if a bot is meant to be transactional, they’re calling or clicking to complete a task, and a persona can get in the way.
There are some scenarios in which an AI personality doesn’t even make sense. Think about visiting a bank to make a face-to-face transaction. When you approach the teller, do they introduce themselves with “Hi, I’m Jill, and I’ll be your teller today”? No. You tell them what you want to do, conduct your business, and leave. Would you even remember the teller’s name? No, because all that matters is that you did the business you came to do.
However, it’s helpful for the bot to be in character, in line with the business it helps users do. If an organization I’m working with has brand guidelines, they’re priceless reference material. Chatbot voice design—including sonic qualities, tone, and style of speech—should reflect the mission and values of the enterprise. Digital assistants in industries like insurance or banking usually require some level of gravitas, while a bot on an e-commerce site geared to Millennials can be much more casual.
I take copious notes about the specific vocal qualities clients want their digital assistants to have, including things like pitch and delivery. Usually, when I read the script, I can quickly get a feel for the character I need to create: whether it’s more or less formal (saying “thank you” instead of “thanks”), and whether or not it should use contractions.
Chatbot mistake #2: Working against psychology.
In human-to-human conversation, there are a lot of unwritten rules. In a face-to-face meeting or a video call, we can see each other nodding our heads. We can tell if the other person is paying attention and often whether they understand what’s being said. When someone asks a question, they pause afterward to signal that it’s the other’s turn to speak.
Those visual cues do not exist on the phone or in a chat window. Knowing when and how to build in those pauses is critical. Psychology can also help conversation designers limit users’ frustrations by working within other common behavioral tendencies:
Bargability: When—and when not—to interrupt.
Knowing when, or if, a prompt should be “bargable” is another consideration. If users must listen to an entire monologue before their voice command will be heard, the prompt isn’t “bargable”. In some contexts, users should be able to interrupt the assistant at any time; other prompts, like a list of menu options that recently changed, shouldn’t be “bargable”.
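To make that concrete, here’s a minimal sketch of per-prompt barge-in settings. The Prompt class and play function are hypothetical rather than taken from any particular voice platform; the point is that interruptibility should be a deliberate, per-prompt decision.

```python
# Hypothetical sketch: barge-in as a per-prompt setting.
from dataclasses import dataclass

@dataclass
class Prompt:
    text: str
    barge_in: bool  # can the caller interrupt this prompt?

GREETING = Prompt("Hi! What can I help you with today?", barge_in=True)
MENU_NOTICE = Prompt(
    "Please listen carefully; our menu options have recently changed.",
    barge_in=False,
)

def play(prompt: Prompt, caller_spoke: bool) -> str:
    """Simulate playback: with barge-in on, caller speech cuts the
    prompt short and control passes straight to speech recognition."""
    if prompt.barge_in and caller_spoke:
        return "Prompt interrupted; listening to the caller."
    return "Prompt played to completion; now listening."

print(play(GREETING, caller_spoke=True))     # interrupted mid-prompt
print(play(MENU_NOTICE, caller_spoke=True))  # plays through regardless
```

Setting the flag per prompt, rather than globally, lets the changed-menu notice play through while everyday questions stay interruptible.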
Cognitive load: The power of primacy and recency.
Cognitive load refers to how much information a person can hold in mind at once. If you overload users with information, they won’t remember what you’ve told them, and they might not remember what they need to say.
Conversation designers should avoid giving users a long list of choices. Chances are, they will remember only the first and last options. Five choices should be the absolute limit. I’ve seen some schools of thought that claim three choices at most is preferable. I say it depends on how the bot presents the list.
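As an illustration of how presentation can stretch that limit, here’s a hypothetical sketch that caps each spoken list at five items and pages through the rest. The five-item cap and every name here are illustrative, not from a real system.

```python
# Hypothetical sketch: cap spoken option lists to respect cognitive load.
MAX_OPTIONS = 5

def present_options(options: list[str]) -> list[str]:
    """Split a long option list into pages of at most MAX_OPTIONS so the
    first and last items on a page don't crowd out everything between."""
    pages = [options[i:i + MAX_OPTIONS]
             for i in range(0, len(options), MAX_OPTIONS)]
    prompts = []
    for i, page in enumerate(pages):
        more = " Or say 'more' for more options." if i < len(pages) - 1 else ""
        prompts.append(f"You can say: {', '.join(page)}.{more}")
    return prompts

for line in present_options([
    "check my balance", "pay a bill", "report fraud",
    "order checks", "update my address", "speak to a representative",
]):
    print(line)
```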
Phrasing is everything.
How a bot asks questions of users is just as important as what it asks. I’ve made plenty of mistakes in this arena. For example, I redesigned digital assistance services for a company that wires money globally. At first, I wrote the question: “Are you calling to send or receive money?” Instead of answering either “send” or “receive”, users were saying simply “yes”.
It didn’t occur to me that their natural inclination would be to answer in the affirmative, since “yes” was technically accurate either way. I rephrased it to say: “Which are you calling to do: Send money or pick up money?”
It’s the same question, just phrased so the user knows their response should be one of two options: “send money” or “pick up money”. Essentially, we have to teach people how to speak to the bot so the bot can understand them and the user can walk away happy.
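Here’s a rough, hypothetical illustration of the intent-and-training-phrase structure behind that fix. A real assistant would rely on a trained NLU model; this keyword matcher is only a stand-in showing why a bare “yes” falls through while the two rephrased options map cleanly.

```python
# Hypothetical sketch: two intents with distinct, non-overlapping phrases.
INTENTS = {
    "send_money": ["send money", "wire money", "transfer funds"],
    "pick_up_money": ["pick up money", "collect a transfer", "receive money"],
}

def classify(utterance: str) -> str:
    """Stand-in for NLU: match the utterance against training phrases."""
    utterance = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in utterance for phrase in phrases):
            return intent
    return "no_match"

print(classify("I want to pick up money"))  # pick_up_money
print(classify("yes"))                      # no_match: the prompt has to do better
```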
Chatbot mistake #3: Unrealistic expectations.
We have to be honest about what bots can do right now. Although AI technology is evolving at a rapid pace, bots usually do best with simple, repetitive tasks. Building one’s first bot with a use case that tries to do too much can be a huge pitfall.
It’s also critical to build in contingencies for the unexpected. Things happen on phone calls: dogs bark, kids scream, and users get distracted and miss prompts. We do users and companies a disservice if we’re hyper-focused on the happy path of the ideal customer journey.
We have to give people opportunities to recover from flubs, mistakes, and interruptions and return to the flow of the conversation, or get them out of the flow if they can’t recover.
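As a sketch of that idea, here’s a hypothetical recovery loop: it escalates through reprompts when the bot hears nothing (or nothing it understands), then routes the caller to a person instead of trapping them in the flow. All names and wording are illustrative.

```python
# Hypothetical sketch: escalate reprompts, then hand off to a human.
REPROMPTS = [
    "Sorry, I didn't catch that. Are you calling to send money or pick up money?",
    "You can say 'send money' or 'pick up money'.",
]

def handle_turn(intent: str | None, misses: int) -> tuple[str, int]:
    """Return the bot's next line and the updated miss count."""
    if intent in ("send_money", "pick_up_money"):
        return f"Okay, let's {intent.replace('_', ' ')}.", 0
    if misses < len(REPROMPTS):
        return REPROMPTS[misses], misses + 1
    return "Let me connect you with a representative.", misses

line, misses = handle_turn(None, 0)       # a dog barked; the caller missed the prompt
print(line)
line, misses = handle_turn(None, misses)  # still nothing usable
print(line)
line, misses = handle_turn(None, misses)  # bail out gracefully
print(line)
```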
Alexa, play the future of digital assistance.
Many folks have virtual assistants in their homes that they talk to daily—which has really changed how people interact with voicebots in general. We do have to train our users to an extent, but they’re trained every day, whether or not they realize it.
I never want to make these predictions, because I’m (almost) always wrong. If you had asked me 10 years ago whether I’d still be doing this work today, I would have said no. I didn’t think my services would still be necessary or relevant, but the opposite has happened. With more digital assistants than ever, user expectations have changed. They expect more, so we have to rise to the occasion.