These Entrepreneurs Are Taking on Bias in Artificial Intelligence

Meet the founders, data scientists and researchers trying to make sure the algorithms that increasingly run our lives are free from bias.

By Liz Webber


Whether it's a navigation app such as Waze, a music recommendation service such as Pandora or a digital assistant such as Siri, odds are you've used artificial intelligence in your everyday life.

"Today 85 percent of Americans use AI every day," says Tess Posner, CEO of AI4ALL.

AI has also been touted as the new must-have for business, for everything from customer service to marketing to IT. However, for all its usefulness, AI also has a dark side. In many cases, the algorithms are biased.

Related: What Is AI, Anyway? Know Your Stuff With This Go-To Guide.

Some examples of bias are blatant, such as Google Photos tagging photos of black people as gorillas, or an algorithm used by law enforcement to predict recidivism disproportionately flagging people of color. Others are more subtle. When Beauty.AI held an online contest judged by an algorithm, the vast majority of "winners" were light-skinned. Search Google for images of "unprofessional hair" and the results will be mostly pictures of black women (even searching for "man" or "woman" brings back images of mostly white individuals).

While more light has been shined on the problem recently, some feel it's not an issue addressed enough in the broader tech community, let alone in research at universities or the government and law enforcement agencies that implement AI.

"Fundamentally, bias, if not addressed, becomes the Achilles' heel that eventually kills artificial intelligence," says Chad Steelberg, CEO of Veritone. "You can't have machines where their perception and recommendation of the world is skewed in a way that makes its decision process a non-sequitur from action. From just a basic economic perspective and a belief that you want AI to be a powerful component to the future, you have to solve this problem."

As artificial intelligence becomes ever more pervasive in our everyday lives, there is now a small but growing community of entrepreneurs, data scientists and researchers working to tackle the issue of bias in AI. I spoke to a few of them to learn more about the ongoing challenges and possible solutions.

Cathy O'Neil, founder of O'Neil Risk Consulting & Algorithmic Auditing

Solution: Algorithm auditing

Back in the early 2010s, Cathy O'Neil was working as a data scientist in advertising technology, building algorithms that determined what ads users saw as they surfed the web. The inputs for the algorithms included innocuous-seeming information like what search terms someone used or what kind of computer they owned.

However, O'Neil came to realize that she was actually creating demographic profiles of users. Although gender and race were not explicit inputs, O'Neil's algorithms were discriminating against users of certain backgrounds, based on the other cues.

As O'Neil began talking to colleagues in other industries, she found this to be fairly standard practice. These biased algorithms weren't just deciding what ads a user saw, but arguably more consequential decisions, such as who got hired or whether someone would be approved for a credit card. (These observations have since been studied and confirmed by O'Neil and others.)

What's more, in some industries -- for example, housing -- if a human made decisions based on the same criteria, it likely would be illegal under anti-discrimination laws. But because an algorithm was deciding, and gender and race were not explicit inputs, the decision was assumed to be impartial.
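For readers who want to see what that kind of proxy leakage looks like in practice, here is a minimal, hypothetical sketch in Python: the protected attribute is never given to the model, yet its scores can still be checked for systematic differences across groups. The data, feature names and model below are invented for illustration and are not drawn from O'Neil's work.

```python
# Hypothetical sketch: a model trained without a protected attribute
# can still produce scores that track it through correlated proxies.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Invented data: "group" is a protected attribute (never given to the model),
# but the proxy features (device type, search topic) are correlated with it.
group = rng.integers(0, 2, n)                      # 0 or 1
device = group + rng.normal(0, 0.5, n)             # proxy feature 1
search_topic = group + rng.normal(0, 0.5, n)       # proxy feature 2
X = np.column_stack([device, search_topic])
y = (group + rng.normal(0, 0.8, n) > 0.5).astype(int)  # outcome correlated with group

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# The audit question: do the model's scores differ systematically by group,
# even though "group" was never an input?
print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())
```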

"I had left the finance [world] because I wanted to do better than take advantage of a system just because I could," O'Neil says. "I'd entered data science thinking that it was less like that. I realized it was just taking advantage in a similar way to the way finance had been doing it. Yet, people were still thinking that everything was great back in 2012. That they were making the world a better place."

O'Neil walked away from her adtech job. She wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about the perils of letting algorithms run the world, and started consulting.

Eventually, she settled on a niche: auditing algorithms.

"I have to admit that it wasn't until maybe 2014 or 2015 that I realized this is also a business opportunity," O'Neil says.

Right before the election in 2016, that realization led her to found O'Neil Risk Consulting & Algorithmic Auditing (ORCAA).

"I started it because I realized that even if people wanted to stop that unfair or discriminatory practices then they wouldn't actually know how to do it," O'Neil says. "I didn't actually know. I didn't have good advice to give them." But, she wanted to figure it out.

So, what does it mean to audit an algorithm?

"The most high-level answer to that is it means to broaden our definition of what it means for an algorithm to work," O'Neil says.

Often, companies will say an algorithm is working if it's accurate, effective or increasing profits, but for O'Neil, that shouldn't be enough.

"So, when I say I want to audit your algorithm, it means I want to delve into what it is doing to all the stakeholders in the system in which you work, in the context in which you work," O'Neil says. "And the stakeholders aren't just the company building it, aren't just for the company deploying it. It includes the target for the algorithm, so the people that are being assessed. It might even include their children. I want to think bigger. I want to think more about externalities, unforeseen consequences. I want to think more about the future."

For example, Facebook's News Feed algorithm is very good at encouraging engagement and keeping users on its site. However, there's also evidence it reinforces users' beliefs, rather than promoting dialog, and has contributed to ethnic cleansing. While that may not be evidence of bias, it's certainly not a net positive.

Right now, ORCAA's clients are companies that ask for their algorithms to be audited because they want a third party -- such as an investor, client or the general public -- to trust it. For example, O'Neil has audited an internal Siemens project and New York-based Rentlogic's landlord rating system algorithm. These types of clients are generally already on the right track and simply want a third-party stamp of approval.

However, O'Neil's dream clients would be those who don't necessarily want her there.

"I'm going to be working with them because some amount of pressure, whether it's regulatory or litigation or some public relations pressure kind of forces their hand and they invite me in," O'Neil says.

Most tech companies pursue profit above all else, O'Neil says, and won't seriously address the issue of bias unless there are consequences. She feels that existing anti-discrimination protections need to be enforced in the age of AI.

"The regulators don't know how to do this stuff," O'Neil says. "I would like to give them tools. But, I have to build them first. ... We basically built a bunch of algorithms assuming they work perfectly, and now it's time to start building tools to test whether they're working at all."

Related: Artificial Intelligence Is Likely to Make a Career in Finance, Medicine or Law a Lot Less Lucrative

Frida Polli, co-founder and CEO of Pymetrics

Solution: Open source AI auditing

Many thought artificial intelligence would solve the problem of bias in hiring, by making sure human evaluators weren't prejudging candidates based on the name they saw on a resume or the applicant's appearance. However, some argue hiring algorithms end up perpetuating the biases of their creators.

Pymetrics is one company that develops algorithms to help clients fill job openings based on the traits of high-performing existing employees. It believes it's found a solution to the bias problem in an in-house auditing tool, and now it's sharing the tool with the world.

Co-founder and CEO Frida Polli stresses that fighting bias was actually a secondary goal for Pymetrics.

"We're not a diversity-first platform," Polli says. "We are a predictive analytics platform."

However, after Polli saw that the employee examples many clients supplied to train Pymetrics's algorithms were not diverse, combating bias became important.

"Either you do that or you're actually perpetuating bias," Polli says. "So, we decided we certainly were not going to perpetuate bias."

Early on, the company developed Audit AI to make sure its algorithms were as neutral as possible when it came to factors including gender and race. If a company looking to fill a sales role had a sales team that was predominantly white and male, an unaudited algorithm might pick a candidate with those same traits. Polli was quick to point out that Audit AI would also recommend adjustments if an algorithm was weighted in favor of women or people of color.
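Pymetrics hasn't detailed every check it runs here, but one standard test for this kind of skew is comparing selection rates across groups against the "four-fifths rule" used in U.S. employment-discrimination guidance. The sketch below is an illustrative stand-in, not Audit AI's actual code; the group labels, numbers and threshold are assumptions.

```python
# Illustrative bias check (not Pymetrics's actual implementation):
# compare selection rates across demographic groups and flag the model
# if any group's rate falls below 80 percent of the highest rate
# (the "four-fifths rule" used in U.S. employment-discrimination guidance).
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, was_selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def four_fifths_check(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    top = max(rates.values())
    # Ratio of each group's selection rate to the most-favored group's rate.
    ratios = {g: r / top for g, r in rates.items()}
    flagged = {g: ratio for g, ratio in ratios.items() if ratio < threshold}
    return rates, flagged

# Hypothetical hiring decisions: (group, selected)
decisions = [("A", True)] * 40 + [("A", False)] * 60 + \
            [("B", True)] * 25 + [("B", False)] * 75
rates, flagged = four_fifths_check(decisions)
print(rates)    # {'A': 0.4, 'B': 0.25}
print(flagged)  # {'B': 0.625} -- below the 0.8 threshold, so the model gets flagged
```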

Some critics say if you tweak a hiring algorithm to remove bias you're lowering the bar, but Polli disagrees.

"It's the age-old criticism that's like, 'oh well, you're not getting the best candidate,'" Polli says. "'You're just getting the most diverse candidate, because now you've lowered how well your algorithm is working.' What's really awesome is that we don't see that. We have not seen this tradeoff at all."

In May, Pymetrics published the code for Audit AI, its internal auditing tool, on GitHub. Polli says the first goal of making Audit AI open source is to encourage others to develop auditing techniques for their own algorithms.

"If they can learn something from the way that we're doing it that's great. Obviously there are many ways to do it but we're not saying ours is the only way or the best way."

Other motivations include simply starting a conversation about the issue and potentially learning from other developers who may be able to improve Audit AI.

"We certainly don't believe in sort of proprietary debiasing because that would sort of defeat the purpose," Polli says.

"The industry just needs to be more comfortable in actually realizing that if you're not checking your machine learning algorithms and you're saying, 'I don't know whether they cause bias,' I just don't think that that should be acceptable," she says. "Because it's like the ostrich in the sand approach."

Related: The Scariest Thing About AI Is the Competitive Disadvantage of Being Slow to Adapt

Rediet Abebe, co-founder of Black in AI and Mechanism Design for Social Good

Solution: Promoting diverse AI programmers and researchers

Use of facial recognition has grown dramatically in recent years -- whether it's for unlocking your phone, expediting identification at the airport or scanning faces in a crowd to find potential criminals. But, it's also prone to bias.

MIT Media Lab researcher Joy Buolamwini and Timnit Gebru, who received her PhD from the Stanford Artificial Intelligence Laboratory, found that facial recognition tools from IBM, Microsoft and Face++ identified the gender of white men with nearly 100 percent accuracy, but misclassified darker-skinned women in 20 percent to 34 percent of cases. That could be because the training sets themselves were biased: The two also found that the images used to train one of the facial recognition tools were 77 percent male and more than 83 percent white.
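The key to that finding was disaggregated evaluation: measuring the same metric separately for each demographic group rather than reporting one overall accuracy, which can hide large gaps. A minimal sketch of that kind of per-group accuracy check, with invented labels and predictions, looks like this:

```python
# Disaggregated evaluation: report accuracy per demographic group instead of
# a single overall number, which can hide large gaps. Data here is invented.
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),   # misclassified
    ("darker-skinned female", "female", "female"),
]
print(accuracy_by_group(records))
# {'lighter-skinned male': 1.0, 'darker-skinned female': 0.5}
```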

One reason machine learning algorithms end up being biased is that they reflect the biases -- whether conscious or unconscious -- of the developers who built them. The tech industry as a whole is predominantly white and male, and one study by TechEmergence found women make up only 18 percent of C-level roles at AI and machine learning companies.

Some in the industry are trying to change that.

In March 2017, a small group of computer science researchers started a community called Black in AI because of an "alarming absence of black researchers," says co-founder Rediet Abebe, a PhD candidate in computer science at Cornell University. (Gebru is also a co-founder.)

"In the conferences that I normally attend there's often no black people. I'd be the only black person," Abebe says. "We realized that this was potentially a problem, especially since AI technologies are impacting our day-to-day lives and they're involved in decision-making and a lot of different domains," including criminal justice, hiring, housing applications and even what ads you see online.

"All these things are now being increasingly impacted by AI technologies, and when you have a group of people that maybe have similar backgrounds or correlated experiences, that might impact the kinds of problems that you might work on and the kind of products that you put out there," Abebe says. "We felt that the lack of black people in AI was potentially detrimental to how AI technologies might impact black people's lives."

Abebe feels particularly passionate about including more African women in AI; growing up in Ethiopia, she didn't see a career in the sciences as a possibility unless she went into medicine. Her own research focuses on how certain communities are underserved or understudied when it comes to societal issues -- for example, there is a lack of accurate data on HIV/AIDS deaths in developing countries -- and how AI can be used to address those gaps. Abebe is also a co-founder and co-organizer of Mechanism Design for Social Good, an interdisciplinary initiative that shares research on using AI to confront similar societal challenges through workshops and meetings.

Initially, Abebe thought Black in AI would be small enough to fit everyone in a rented van, but its Facebook group and email list have since grown to more than 800 people from all over the world. While the majority of members are students or researchers, the group also includes entrepreneurs and engineers.

Black in AI's biggest initiative to date was a workshop at the Conference on Neural Information Processing Systems (NIPS) in December 2017 that garnered about 200 attendees. Thanks to partners such as Facebook, Google and ElementAI, the group was able to give out over $150,000 in travel grants to attendees.

Abebe says a highlight of the workshop was a keynote talk by Haben Girma, the first deafblind graduate of Harvard Law School, which got Abebe thinking about other types of diversity and intersectionality.

Black in AI is currently planning its second NIPS workshop.

Through the more informal discussions in the group's forums and Facebook group, members have applied to and been accepted by Cornell's graduate programs, research collaborations have started and industry allies have stepped forward to ask how they can help. Black in AI hopes to set up a mentoring program for members.

Related: Why Are Some Bots Racist? Look at the Humans Who Taught Them.

Tess Posner, CEO of AI4ALL

Solution: Introducing AI to diverse high schoolers

The nonprofit AI4ALL is targeting the next generation of AI whiz kids. Through summer programs at prestigious universities, AI4ALL exposes girls, low-income students, racial minorities and those from diverse geographic backgrounds to the possibilities of AI.

"It's becoming ubiquitous and invisible," says Tess Posner, who joined AI4ALL as founding CEO in 2017. "Yet, right now it's being developed by a homogenous group of technologists mostly. This is leading to negative impacts like race and gender bias getting incorporated into AI and machine learning systems. The lack of diversity is really a root cause for this."

She adds, "The other piece of it is we believe that this technology has such exciting potential to be addressed to solving some key issues or key problems facing the world today, for example in health care or in environmental issues, in education. And it has incredibly positive potential for good."

Started in 2015 as a pilot summer camp for girls at Stanford University, AI4ALL now offers programs at six universities across North America: the University of California, Berkeley; Boston University; Carnegie Mellon University; Princeton University; Simon Fraser University; and Stanford.

Participants receive a mix of technical training, hands-on learning, demos of real-world applications (such as a self-driving car), mentorship and exposure to experts in the field. This year, guest speakers included representatives from big tech firms including Tesla, Google and Microsoft, as well as startups including H2O.ai, Mobileye and Argo AI.

The universities provide three to five "AI for good" projects for students to work on during the program. Recent examples include developing algorithms to identify fake news, predict the infection path of the flu and map poverty in Uganda.

For many participants, the AI4ALL summer program is only the beginning.

"We talk about wanting to create future leaders in AI, not just future creators, that can really shape what the future of this technology can bring," Posner says.

AI4ALL recently piloted an AI fellowship program in which summer program graduates receive funding and mentorship to work on their own projects. One student's project involved tracking wildfires on the West Coast, while another student, whose grandmother died because an ambulance didn't reach her in time, looked at how to optimize ambulance dispatch based on the severity of the call.

Other graduates have gone on to create their own ventures after finishing the program, and AI4ALL provides "seed grants" to help them get started. Often, these ventures involve exposing other kids like themselves to AI. For example, three alumni started a workshop series called creAIte to teach middle school girls about AI and computer science using neural art, while another runs an after-school workshop called Girls Explore Tech.

Another graduate co-authored a paper on using AI to improve surgeons' technique that won an award at NIPS's Machine Learning for Health workshop in 2017.

"We have a lot of industry partners who have seen our students' projects and they go, 'Wow. I can't believe how amazing and rigorous and advanced this project is.' And it kind of changes people's minds about what talent looks like and who the face of AI really is," Posner says.

Last month, AI4ALL announced it will be expanding its reach in a big way: The organization received a $1 million grant from Google to create a free digital version of its curriculum, set to launch in early 2019.

Related: Artificial Intelligence May Reflect the Unfair World We Live in

Chad Steelberg, co-founder and CEO of Veritone

Solution: Building the next generation of AI

Serial entrepreneur Chad Steelberg first got involved in AI during his high school years in the 1980s, when he worked on algorithms to predict the three-dimensional structures of proteins. At the time, he felt AI's capabilities had reached a plateau, and he ended up starting multiple companies in different arenas, one of which he sold to Google in 2006.

A few years later, Steelberg heard from some friends at Google that AI was about to take a huge leap forward -- algorithms that could actually understand and make decisions, rather than simply compute data and spit back a result. Steelberg saw the potential, and he invested $10 million of his own money to found Veritone.

Veritone's aiWARE is an operating system for AI. Instead of mediating between a computer's software and hardware like a traditional operating system, it takes a user's query -- such as "transcribe this audio clip" -- and finds the best algorithm available to process it, whether that's Google Cloud Speech-to-Text, Nuance or some other transcription engine. As of now, aiWARE can choose from more than 200 models in 16 categories, from translation to facial recognition.

Algorithms work best when they are trained for a sufficiently narrow task. For example, if you try to train one algorithm to play Go, chess and checkers, it will fail at all three, Steelberg says. Veritone tells the companies it works with to create algorithms for very narrow use cases, such as images of faces in profile. For each query, aiWARE finds the right algorithm and can even trigger multiple algorithms at once. Steelberg says that when an audio clip contains multiple languages, the translations aiWARE returns are 15 percent to 20 percent more accurate than those of the best single engine on the platform.
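Veritone hasn't published aiWARE's internals, but the routing idea Steelberg describes can be sketched as a conductor that asks several narrow engines for an answer plus a confidence score, then keeps the one it has learned to trust for that kind of input. Everything below (the engine names, the confidence interface, the trust weights) is hypothetical.

```python
# Hypothetical sketch of engine routing: each narrow engine returns a result
# plus a self-reported confidence, and a router keeps a learned trust weight
# per (engine, input category) and picks the highest trust-adjusted answer.
# This illustrates the idea only; it is not Veritone's actual aiWARE code.
from dataclasses import dataclass, field

@dataclass
class Engine:
    name: str

    def run(self, query: str) -> tuple[str, float]:
        # A real engine would call a transcription/translation model here.
        return f"{self.name} result for {query!r}", 0.8

@dataclass
class Router:
    engines: list[Engine]
    trust: dict = field(default_factory=dict)  # (engine, category) -> weight

    def route(self, query: str, category: str) -> str:
        best_answer, best_score = None, float("-inf")
        for engine in self.engines:
            answer, confidence = engine.run(query)
            weight = self.trust.get((engine.name, category), 1.0)
            score = confidence * weight
            if score > best_score:
                best_answer, best_score = answer, score
        return best_answer

    def feedback(self, engine_name: str, category: str, was_correct: bool):
        # Nudge the trust weight up or down based on observed outcomes.
        key = (engine_name, category)
        self.trust[key] = self.trust.get(key, 1.0) * (1.1 if was_correct else 0.9)

router = Router(engines=[Engine("engine_a"), Engine("engine_b")])
print(router.route("transcribe this audio clip", category="english-speech"))
```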

Algorithms designed to parse text and speech, such as transcription and translation tools, are another area prone to bias. One study found that language-identification algorithms categorized text written in African American Vernacular English as "not English" at high rates, while a Washington Post investigation found that voice assistants such as Amazon's Alexa have a hard time deciphering accented English.

Though it wasn't built to eliminate bias, aiWARE ends up doing exactly that, Steelberg says. Just like the human brain is capable of taking all of its learned information and picking the best response to each situation, aiWARE learns which model (or models) is most appropriate to use for each query.

"We use our aiWARE to arbitrate and evaluate each of those models as to what they believe the right answer is, and then aiWARE is learning to choose which set of models to trust at every single point along the curve," Steelberg says.

A single biased algorithm isn't the issue, in his view. "What's problematic is when you try to solve the problem with one big, monolithic model," Steelberg says. aiWARE is learning to recognize which models are biased and how, and to work around those biases.

Another factor behind biased AI is that many algorithms effectively ignore small subsets of a training set. If a data set of 1 million entries contains only three that are different, a model can still achieve high accuracy overall while performing horribly on certain queries. This is often why facial recognition software fails to recognize people of color: The training set contained mostly images of white faces.

Veritone tells companies to break their training sets down into micro models, and aiWARE can then interpolate to create similar examples.

"You're essentially inflating that population, and you can train models now on an inflated population that learn that process," Steelberg says.

Using small training sets, aiWARE can build facial recognition models with accuracy in the high 90-percent range for whatever particular subcategory a client is interested in (e.g., all the employees at your company), he says.

Steelberg says he believes an intelligent AI like aiWARE has a much better chance of eliminating bias than a human auditor. For one, humans will likely have a hard time identifying flawed training sets. They also might bring their own biases to the process.

And for larger AI models, which might encompass "tens of millions of petabytes of data," a human auditor is just impractical, Steelberg says. "The sheer size makes it inconceivable."

Liz Webber

Entrepreneur Staff

Insights Editor

Liz Webber is the insights editor at Entrepreneur.com, where she manages the contributor network.
