We Want Chatbots to Act More Human, But Let's Keep Some Human Traits to Ourselves

The human-bot relationship is the new normal, so we must think critically about the possible long-term impact of tone-deaf AI.

By Sid Banerjee | Edited by Dan Bova

Opinions expressed by Entrepreneur contributors are their own.


There is something uniquely unsettling about watching footage of Atlas, the robot developed by Boston Dynamics. Its human-like movements suggest a sense of body-awareness and intuition inherent in human beings, but it is distinctly not human.

The technologies behind AI and robotics will continue advancing. As they become ever more sophisticated, we must ask ourselves: How human-like should AI be? Where should we allow the boundaries to continue to blur, and where do we need to draw a clear line in the sand? It's a challenging conundrum, made only more complicated by headlines about robot citizenship and speculation about an impending apocalypse.

When we evaluate AI's evolving role in customer experience, we can begin to answer this question. The early implementation of chatbots serves as a small window into the world of human and bot interactions, and a case study for how the technology should be shaped moving forward.


When mirroring human behavior makes sense.

Early hype around chatbots was met with disappointed groans from the many consumers prematurely introduced to bots. Initial bots were rightfully criticized for being ineffective and often incapable of performing the basic tasks they were designed to do. What was probably most frustrating for customers dealing with said bots, however, was their lack of empathy. If a customer is taking the time to contact a brand for help with something, they really want to feel understood. The irony here is that machines are not particularly well versed in feelings (in their defense, I know plenty of humans who are not very well versed in understanding feelings, either).

As technology develops, AI will need to become more emotionally aware to truly understand human requests. Empathy is a must as companies increasingly seek to communicate with consumers through automated solutions. Chatbots have come a long way from the early days, and an estimated 16 percent of Americans (that's 39 million people) now own a smart speaker. But even in these more advanced solutions, there is a fairly chronic issue of tone-deafness. The human-bot relationship is the new normal, so we must think critically about the possible long-term impact of tone-deaf AI.

Consider that when you make a request of Alexa, she will not say "you're welcome" if you thank her. On the one hand, it is encouraging to know she is no longer "listening" after a command; on the other, many are concerned that we are setting a precedent of rudeness and callousness for a future generation. A more nefarious example is the lack of consequences for being rude to bots, and more specifically, the way bots respond to things like sexual harassment. In the case of Alexa, she will now "disengage" if asked to do something inappropriate, but as the Quartz writer Leah Fessler points out, her North Star is to please. That's problematic when we consider that more complex bots like Sophia will soon be among us.

To avoid bots that perpetuate a tone-deaf society, we need to train AI on empathy. This is not a simple task, but it is possible. Empathy is a "soft skill," but it is a skill nonetheless. So training AI on empathy can be approached the same way we train AI on anything -- with a digestible data set. That means training a model to "hear" data points such as tone of voice (both written and verbalized), words that express sentiments and emotions, and even how a human responds temporally, or over time, to interactions. Is the human transitioning from agitated to happy, or the inverse? A bot needs to know the difference so it can temper its response to avoid agitating a calm human -- or calm down an agitated human. Understanding emotions and feelings is a critical part of understanding humans and how they act, what they want and how to appropriately respond. If a machine can't learn empathy, it can't understand how empathy affects human requests and actions, and it can't create optimum outcomes.
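To make the temporal piece concrete, here is a minimal, purely illustrative sketch: score each conversational turn with a toy word-level emotion lexicon and label whether the customer's mood is improving or worsening across the conversation. The lexicon, thresholds and labels are assumptions invented for this example, not a description of how any vendor's bot actually works.

```python
# Tiny hypothetical emotion lexicon; a real system would learn these weights
# from labeled conversations rather than hard-coding them.
EMOTION_LEXICON = {
    "thanks": 1.0, "great": 1.0, "perfect": 1.0,
    "waiting": -0.5, "broken": -1.0, "ridiculous": -1.0, "angry": -1.5,
}

def score_turn(text: str) -> float:
    """Average emotion score of the words in one customer message."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = [EMOTION_LEXICON.get(w, 0.0) for w in words]
    return sum(scores) / len(scores) if scores else 0.0

def mood_trajectory(turns: list[str]) -> str:
    """Label whether a customer is calming down or getting more agitated."""
    scores = [score_turn(t) for t in turns]
    if len(scores) < 2:
        return "unknown"
    delta = scores[-1] - scores[0]
    if delta > 0.2:
        return "improving"   # agitated -> happy: the current approach is working
    if delta < -0.2:
        return "worsening"   # happy -> agitated: soften tone, consider escalating
    return "stable"

conversation = [
    "My order arrived broken and I am angry about waiting.",
    "Okay, thanks, that sounds great.",
]
print(mood_trajectory(conversation))  # -> improving
```

In practice the per-turn score would come from a trained sentiment model rather than a word list, but the trajectory logic -- comparing where the customer started with where they are now -- is the part that lets a bot temper its response over time.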

Training AI on empathy means teaching machines to extract subjective emotions, feelings and sentiments from conversations with humans. AI models can learn, just as humans learn, how those qualitative feelings impact needs, responses, actions and results. An angry human -- one using anger trigger words (including obscenities), a raised voice, or changes in speaking cadence or word choice -- may demonstrate a need to be heard, to have their feelings validated, and perhaps to have their issue escalated; they want a quick and effective response. A happy human who wants to talk is more comfortable taking time, chit-chatting, maybe even having a friendly conversation with a bot about marginally relevant topics like the weather or a sports team. Until AI can recognize emotion and empathy, it can't learn how emotion affects human behavior, needs and desired responses.
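As a rough illustration of how such signals might map to behavior, the sketch below routes a message to a response strategy based on assumed anger trigger words and a change in speaking cadence. Every name, word list and threshold here is hypothetical; it is a sketch of the idea, not anyone's production logic.

```python
ANGER_TRIGGERS = {"angry", "ridiculous", "unacceptable"}   # hypothetical trigger words
HAPPY_MARKERS = {"thanks", "great", "awesome"}             # hypothetical positive markers

def choose_strategy(message: str, cadence_change: float) -> str:
    """Pick a response style from crude emotion signals.

    cadence_change is the relative change in speaking rate versus the caller's
    baseline (e.g. 0.3 means 30 percent faster), assumed to come from an
    upstream speech pipeline.
    """
    words = {w.strip(".,!?") for w in message.lower().split()}
    if words & ANGER_TRIGGERS or cadence_change > 0.25:
        # Angry caller: acknowledge feelings, answer quickly, offer escalation.
        return "validate_and_escalate"
    if words & HAPPY_MARKERS:
        # Happy caller: small talk is welcome, no need to rush.
        return "friendly_chat"
    return "neutral_helpful"

print(choose_strategy("This is ridiculous, I have been on hold for an hour.", 0.3))
# -> validate_and_escalate
```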


When bots should just be bots.

The counter-narrative to a future in which bot and human are indistinguishable is that humans are flawed. Why would we want to recreate AI in our exact likeness when the promise of AI is helping us go beyond our limitations? The hard line in the sand we should draw between bots and humans is biased decision-making.

One of the greatest strengths bots possess is their lack of shame or ego. These human emotions are so often at the heart of our inability to recognize our own biases, whatever shape they take: racist opinions (whether overt or unconscious), sexist assumptions, or whatever other -ism we have all been guilty of at some point or another. Taking advantage of that clean slate, we need to ensure bots do not simply learn to respond differently to prejudicial attributes such as male vs. female voices or language of origin. There are many attributes that AI may determine affect human responses and needs, but humans have an obligation not to let bots over-generalize based on gender, language or nation of origin, just as we ask our human employees not to discriminate.
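One simple, if partial, guardrail is to keep such attributes out of the data a bot learns from in the first place. Below is an illustrative sketch with hypothetical field names; it is not a complete fairness solution, because proxies for those attributes can still hide in the remaining fields.

```python
# Hypothetical field names for a customer-interaction record.
PROTECTED_ATTRIBUTES = {"gender", "voice_pitch_bucket", "native_language", "country_of_origin"}

def strip_protected(record: dict) -> dict:
    """Return a copy of the record without attributes the bot must not condition on."""
    return {k: v for k, v in record.items() if k not in PROTECTED_ATTRIBUTES}

raw = {
    "gender": "female",
    "native_language": "es",
    "issue_type": "billing",
    "sentiment_score": -0.4,
    "turns_so_far": 3,
}
print(strip_protected(raw))
# -> {'issue_type': 'billing', 'sentiment_score': -0.4, 'turns_so_far': 3}
```

Because proxies remain (names, dialect, location), dropping fields has to be paired with auditing the bot's outcomes across groups, just as we would audit human employees.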


In the last two years, we have watched Sophia the robot evolve from agreeing to destroy the human race to telling jokes, learning to walk and, most recently, joining Will Smith on a "date." As such a public symbol of AI, Sophia makes it clear the technology is improving in leaps and bounds but still has quite a way to go. Most of us will not be interfacing with the likes of Sophia, but many of us will continue to interact with Siri, Alexa and the other branded chatbots of the world. This proliferation requires that we continue to push for more empathy in AI and less bias. In this way, we will hopefully arrive at a place where AI can seamlessly interact with humans without falling victim to human error. Finding that balance will be tricky, but the resulting harmony will have a hugely positive impact on consumers and companies alike.

Sid Banerjee

Executive Chairman and Co-Founder of Clarabridge

Sid Banerjee is executive chairman and co-founder of Clarabridge. A founding employee at MicroStrategy, Banerjee held VP-level positions in both product marketing and worldwide services. During his tenure leading MicroStrategy’s worldwide services division, he grew the organization to 500+ employees supporting enterprise deployments of BI solutions. Before joining MicroStrategy, Banerjee held management positions at Ernst & Young and Sprint International.

