Google Forms an External Council to Foster 'Responsible' AI

It'll help govern facial recognition, machine learning and beyond.
By Jon Fingas Edited by Dan Bova
This story originally appeared on Engadget
Google is joining Facebook, Stanford and other outfits in setting up institutions to support ethical AI. The company has created an Advanced Technology External Advisory Council that will shape the "responsible development and use" of AI in its products. The organization will weigh in on facial recognition, fair machine learning algorithms and other ethical issues. The initial council is a diverse group spanning a range of disciplines and experiences.
The current advisors include academics focused on technical aspects of AI (such as computational mathematics and drones) as well as experts in ethics, privacy and political policy. There's also an international focus, with members from as far afield as Hong Kong and South Africa.
ATEAC will hold its first meeting in April and plans three more over the course of 2019. And while its discussions will clearly feed into Google's development process, the company will publish summaries of the talks and encourage members to share "generalizable" findings with their own organizations. The aim is to improve the tech industry as a whole, not just Google's work.
The council's creation follows Google's promise to embrace ethical AI in the wake of the controversy over its involvement in the U.S. military's Project Maven drone initiative. Effectively, Google is trying to avoid repeating history by asking the council to question its decisions.