How Human-Machine Learning Partnerships Can Reduce Unconscious Bias

The human tendency toward bias is so deeply rooted that companies sincere about not discriminating might need machines to help them.
By Brian Uzzi
Edited by Dan Bova
As Airbnb counters complaints of discrimination and racism from some users of its online platform, which connects travelers and property owners worldwide, its best approach may be a combination of human and computer expertise.
On the human side, Airbnb, which declares "zero tolerance for any discrimination," hired its first director of diversity, David J. King III, who is responsible for making the company and its user base more diverse.
On the computer science side, machine learning can provide a shortcut for raising red flags: identifying tendencies among users, particularly "hosts" with properties for rent, that may lead to discrimination.
Airbnb has faced an onslaught of negative headlines, including tweets by a transgender woman who said she was denied a rental after disclosing to the host that she was transgender. Airbnb has also been sued by an African-American user who claimed he was discriminated against when he tried to book a property on the site, and that the company ignored his complaints.
In pursuit of its mission to "create a world where people can belong anywhere," it's not enough for Airbnb to ask people to uphold its hospitality practices; there may also be limits to what the company can legally demand of hosts, who are independent contractors. In addition, there are obvious ways hosts can circumvent hospitality policies: for example, claiming a property is unavailable whenever a request comes from individuals of certain ethnic or racial backgrounds, or whose last names connote a religious affiliation.
The best way for Airbnb to avoid discriminatory actions is to uncover the "ground truth" about hosts' beliefs and attitudes, going beyond lip service to promises not to discriminate on the basis of race, ethnicity, gender, sexual orientation, religion or other factors.
Machine learning can go deep, quickly and efficiently, by combing the language generated by tens of thousands (or even millions) of users and finding robust empirical associations among language, tone, and desired and undesired behavior. For example, when vetting hosts, Airbnb could use machine learning to scour user-generated text such as website descriptions of properties, customer reviews or tweets. Using pattern recognition, machines can search for "red flag" words, phrases or attitudes that correlate strongly with certain discriminatory beliefs or behaviors. The technique can be especially powerful because hosts are unaware of what is being searched for; their texts, reviews and tweets therefore embody their natural responses.
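To make the idea concrete, here is a minimal sketch of such a text screen -- not Airbnb's actual system, which is not public -- using a small scikit-learn classifier. Every listing snippet, label and threshold below is invented purely for illustration.

```python
# A minimal sketch of red-flag text scanning, assuming a (hypothetical)
# labeled history of host-written text: 1 = later involved in a verified
# discrimination complaint, 0 = not. For illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data (invented examples, far too small for real use).
texts = [
    "Quiet flat, all guests welcome, close to transit",
    "Cozy room, we love meeting travelers from everywhere",
    "Prefer guests who fit in with the neighborhood",
    "Only renting to people like us, no exceptions",
]
labels = [0, 0, 1, 1]

# Turn free text into word/phrase features, then fit a simple classifier
# that learns which terms correlate with later complaints.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

def red_flag_score(new_text: str) -> float:
    """Probability-like score that a host's text resembles flagged language."""
    return model.predict_proba(vectorizer.transform([new_text]))[0, 1]

# A higher score doesn't prove anything; it only routes the host to review.
print(red_flag_score("Prefer guests who fit in"))
```

Note that the output is a score for prioritizing human review, not an automated verdict; a real system would need far more data and careful validation against false positives.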
Identifying unsuitable hosts before a problem arises reduces the risk of discrimination, which for a hospitality website like Airbnb is absolutely crucial. Prevention is paramount for avoiding the severe damage to reputation and brand that can occur when negative user experiences are amplified through social media.
Consider the video prank by Domino's Pizza employees that went viral, showing them spitting on food and engaging in other behaviors that were not only disgusting but also violated health codes. Even though that food was never delivered, the prank hurt Domino's reputation with customers and sent its approval rating plummeting.
In Airbnb's case, the best defense against a tarnished reputation is a good offense on vetting hosts, such as questionnaires that ask hosts to go beyond describing themselves and their properties, prompting them to unconsciously reveal more about their backgrounds and attitudes -- even on issues such as immigration, gun control or veterans' rights. Computer algorithms can then scan the responses for telling patterns. Beyond the use of specific words, the tone of the language can correlate strongly with tendencies to discriminate against certain groups.
A challenge in crafting such a questionnaire is wording the questions to avoid bias. Once again, the pattern recognition abilities of machine learning could be used to design questions with a high probability of revealing underlying sentiment and predicting hosts' future behavior.
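One way that question-selection loop could work, sketched below under entirely invented data: pilot several candidate wordings, then compare them by how well the free-text answers they elicit predict a known outcome label. The questions, answers and labels here are all hypothetical; only the evaluation loop is the point.

```python
# A sketch of comparing candidate question wordings by the predictive
# signal in the answers they elicit. All data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Candidate wording -> (pilot answers, hypothetical labels:
# 1 = host later flagged, 0 = not).
answers_by_question = {
    "Describe your ideal guest.": (
        ["someone respectful of the space", "people who are like my family",
         "anyone tidy and friendly", "guests who share our values only"],
        [0, 1, 0, 1],
    ),
    "What makes a stay go well?": (
        ["clear communication", "good communication and clean habits",
         "guests keeping to themselves", "people who belong here"],
        [0, 0, 1, 1],
    ),
}

for question, (answers, labels) in answers_by_question.items():
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    # Cross-validated accuracy: a rough measure of how much useful
    # signal this particular wording actually elicits.
    score = cross_val_score(model, answers, labels, cv=2).mean()
    print(f"{score:.2f}  {question}")
```

The wordings that score highest are the ones whose answers carry the most signal, which is exactly the property the questionnaire designer is after.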
In the same way, machine learning can help improve the effectiveness of diversity training within the company, helping people adjust their behavior so as not to offend others. Machine learning could pinpoint words, phrases or tones that have an unintended effect or that exacerbate a situation -- such as when tough feedback is given after a performance review.
The computer doesn't override human choice; rather, it highlights potential problems that the human can then address -- say, by softening the tone while keeping the content -- or deliberately leave alone.
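A minimal sketch of that human-in-the-loop step, assuming an invented phrase lexicon (no real product or training data is referenced): the machine only surfaces suggestions, and accepting or ignoring them remains the writer's call.

```python
# The machine highlights phrases in a draft review that (per this invented
# lexicon) tend to land badly, and suggests softer alternatives -- but it
# never edits the draft itself.
SOFTENERS = {  # hypothetical phrase -> gentler phrasing
    "you failed to": "the work did not yet",
    "this is unacceptable": "this falls short of what we need",
    "you never": "I haven't seen you",
}

def review_draft(draft: str) -> list[tuple[str, str]]:
    """Return (flagged phrase, suggested alternative) pairs; changes nothing."""
    lowered = draft.lower()
    return [(phrase, alt) for phrase, alt in SOFTENERS.items() if phrase in lowered]

draft = "You failed to meet the deadline, and this is unacceptable."
for phrase, alt in review_draft(draft):
    print(f"flagged: '{phrase}'  ->  consider: '{alt}'")
```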
As Airbnb battles perceptions of discrimination among some users, the combination of diversity expertise and computer science yields a shortcut that is highly efficient and cost-effective. But neither will suffice on its own. Working together, machine and human can improve the vetting process by seeking the right information and scanning for "red flags" to help prevent discrimination before it has a chance to occur.