Artificial Intelligence
AI in the contact center just exploded
By Ben Rigby
I have fond memories of working with the potter’s wheel as a kid. It’s so fulfilling to take a hunk of clay and shape it into a beautiful bowl. But it’s treacherous along the way. The bowl may collapse just as you’re about to stop the wheel. You may drop the bowl on the way to the kiln. And more than a few bowls will explode in the kiln if you haven’t rolled out all of the air bubbles. In the end, you learn to appreciate the finished product, but never to get too attached to any single piece along the way.
Natural language processing (NLP) has just exploded in the kiln. NLP is a branch of machine learning that helps software applications process, understand, and analyze human language.
The team at Talkdesk and I have spent years shaping and perfecting our NLP models to best serve contact center customers. We’ve spent thousands of hours at the wheel with these models and systems, making sure they perform as well as possible on our primary jobs to be done:
- Identify the issues causing customers to reach out.
- Increase self-service rates.
- Help agents resolve issues quickly and correctly.
But now we are starting to spin a new bowl.
Enter ChatGPT into the contact center AI paradigm.
Generative pre-trained transformer (GPT) is the large language model (LLM) at the heart of ChatGPT. With no additional training, it’s able to do intent detection, entity extraction, sentiment analysis, redaction, summarization, and classification—all of the core NLP functions.
It can also understand a conversation to a large extent. Given a transcript, you can ask questions about that transcript, such as “Was the agent polite?” and it will give you a reasonable answer. In fact, you could ask “Was the agent polite, and why?” and it will give you a sound set of reasons. It’s a single model, accessible through a single API call, that can perform this Swiss Army knife of NLP functions. Prior to GPT, we would have developed an entity model, an intent model, a sentiment model, and a redaction model, each with its own training, tuning, and monitoring process.
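To make the “single API call” point concrete, here is a minimal sketch of what such a call might look like, assuming the `openai` Python package (pre-1.0 interface), a model name like `gpt-3.5-turbo`, and an API key in the environment. The prompt wording, the `analyze_transcript` helper, and the JSON output shape are illustrative assumptions, not a description of Talkdesk’s implementation.

```python
# One LLM call covering several classic NLP jobs at once: intent detection,
# entity extraction, sentiment, summarization, and an open question about the
# conversation. Illustrative sketch only.
import json
import openai  # assumes the `openai` package (pre-1.0 interface) and OPENAI_API_KEY set

PROMPT_TEMPLATE = """You are analyzing a contact center transcript.
Return a JSON object with these fields:
- "intent": the customer's main reason for contacting us
- "entities": a list of names, account numbers, dates, and products mentioned
- "sentiment": "positive", "neutral", or "negative"
- "summary": a two-sentence summary of the interaction
- "agent_polite": true or false, plus a short "reason"

Transcript:
{transcript}
"""

def analyze_transcript(transcript: str, model: str = "gpt-3.5-turbo") -> dict:
    """Run several NLP tasks with a single chat completion call."""
    response = openai.ChatCompletion.create(
        model=model,
        temperature=0,  # keep the output as deterministic as possible
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(transcript=transcript)}],
    )
    # A real system would validate this string before trusting it (more on that below).
    return json.loads(response["choices"][0]["message"]["content"])

if __name__ == "__main__":
    transcript = (
        "Customer: Hi, I was double-charged for my March invoice.\n"
        "Agent: I'm sorry about that. I've refunded the duplicate charge."
    )
    print(analyze_transcript(transcript))
```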
Not to mention that it works in virtually any language. Multi-language support has been revolutionized by LLMs, as more data and more parameters are available to solve the problem of understanding language as a whole. Before such models, we literally had to rebuild an NLP model every time we wanted to support another language; up to 80% of the work could be spent just replicating an existing model for a new language.
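As a small illustration, the hypothetical `analyze_transcript` helper from the sketch above can be pointed at a Spanish transcript with no retraining and no per-language model:

```python
# Same prompt, same call, different language: nothing here is rebuilt or
# retrained for Spanish. Reuses the hypothetical analyze_transcript helper
# sketched earlier.
spanish_transcript = (
    "Cliente: Hola, me cobraron dos veces la factura de marzo.\n"
    "Agente: Lo siento mucho. Ya reembolsé el cargo duplicado."
)
print(analyze_transcript(spanish_transcript))
```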
Talkdesk, AI, ChatGPT, and the contact center
Our job at Talkdesk is to help customers get their core jobs done in the contact center, not to build NLP models. And there is no mistaking what has happened in the kiln. The GPT air bubble exploded the natural language processing bowl.
We’ve got to look into the kiln and come to terms with it, while accepting that the new bowl is not baked yet. GPT is a transformative approach to NLP, but there are issues with its mechanics that need to be ironed out before it is production-ready for a wide variety of use cases. In particular, the outputs of GPT need to be either human reviewed or processed to remove inaccuracies and bias. Kevin Roose’s two-hour conversation with Bing, powered by ChatGPT, drifted into some surreal digressions that give cause for concern.
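As a deliberately simple illustration of what “processed” can mean in practice, the sketch below runs a model response through a few sanity checks before anything downstream trusts it. The field names, thresholds, and review queue are assumptions made for the example, not a description of our production safeguards.

```python
# A sketch of post-processing LLM output before it is trusted: validate the
# structure, constrain field values, and route anything suspicious to a human
# reviewer. Field names and rules are assumptions for illustration.
ALLOWED_SENTIMENTS = {"positive", "neutral", "negative"}

def review_or_accept(result: dict) -> dict:
    """Accept a model response only if it passes basic checks; otherwise flag it."""
    problems = []
    if result.get("sentiment") not in ALLOWED_SENTIMENTS:
        problems.append("unexpected sentiment value")
    summary = result.get("summary")
    if not isinstance(summary, str) or not summary or len(summary) > 600:
        problems.append("summary missing, empty, or implausibly long")
    if problems:
        # In a real system this would be queued for human review rather than used.
        return {"status": "needs_human_review", "problems": problems, "raw": result}
    return {"status": "accepted", **result}
```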
We will continue to deliver today’s NLP models even as we work to replace them. The potter’s wheel analogy breaks down a little bit here because our current NLP models are working just as well as they did yesterday. In particular, they may have accuracy, cost, and speed characteristics that are advantageous, whereas today’s most capable GPT model is slow and expensive to run.
But to see the NLP bowl as anything other than exploded is to be stuck in an antiquated mindset that will quickly lead to obsolete technology. If we look at the jobs to be done in the contact center, it’s clear that this current batch of generative models is going to drive a massive shift in how we solve those jobs, which will result in better outcomes and value for our customers. And that’s our core concern.
Several weeks ago we announced our first GPT-powered feature, automatic summary, which summarizes an interaction and selects an accurate disposition for the engagement. It effectively eliminates much of the after-call work that drives up an agent’s average handle time, while also improving data accuracy (a rough sketch of this kind of call appears at the end of this post). There are no straight lines when it comes to progress. We’re extremely excited about the future of NLP and the new bowls we are baking. We will be rolling out new GPT-powered features in the near term, and as the models mature, we will aggressively incorporate them into our roadmap. Onwards!
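For the technically curious, here is a rough sketch of what a summary-plus-disposition prompt could look like, again assuming the `openai` Python package. The disposition list, prompt wording, and fallback behavior are illustrative assumptions rather than the actual Talkdesk feature.

```python
# A sketch of a summary-plus-disposition call: the model writes a short summary
# and must pick a disposition from a fixed list, which keeps the output
# constrained enough to validate. Dispositions, prompt, and model are assumptions.
import json
import openai  # assumes the `openai` package (pre-1.0 interface) and OPENAI_API_KEY set

DISPOSITIONS = ["Billing issue", "Technical support", "Cancellation", "General inquiry"]

def summarize_and_disposition(transcript: str, model: str = "gpt-3.5-turbo") -> dict:
    """Ask for a two-sentence summary and a disposition chosen from a fixed list."""
    prompt = (
        "Summarize this contact center interaction in two sentences, then choose the "
        f"single best disposition from this list: {DISPOSITIONS}. "
        'Respond only with JSON in the form {"summary": "...", "disposition": "..."}.\n\n'
        f"Transcript:\n{transcript}"
    )
    response = openai.ChatCompletion.create(
        model=model,
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    result = json.loads(response["choices"][0]["message"]["content"])
    # Guard against the model inventing a disposition outside the allowed list.
    if result.get("disposition") not in DISPOSITIONS:
        result["disposition"] = "General inquiry"
    return result
```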