British Philosophers Consider the Ethics of a Robotic Future
A standards document offers guidelines on robot-human relationships and more.
By Tom Brant
This story originally appeared on PCMag
While the European Union is worried about lost tax revenue from a future robot-dominated workforce, the national standards body of the United Kingdom is more concerned with the ethical hazards of autonomous systems used in everyday life.
The British Standards Institution (BSI) commissioned a group of scientists, academics, ethicists and philosophers to provide guidance on potential hazards and protective measures. They presented their guidelines at a robotics conference in Oxford, England, last week.
"As far as I know this is the first published standard for the ethical design of robots," professor of robotics at the University of the West of England Alan Winfield told the Guardian. "It's a bit more sophisticated than that Asimov's laws," he said, referring to the basic rules of good robot behavior that Isaac Asimov proposed: don't harm humans, obey orders and protect yourself.
The BSI document covers everything from whether an emotional bond with a robot is desirable, to the prospect of sexist or racist robots, the Guardian reported.
BSI says its ethical standards build on existing safety requirements for industrial and medical robots. The organization says that ethical hazards are "broader" than physical hazards, and though its ethics guidelines are not laws, it hopes that robot designers will use them.
The EU, which Britain will soon leave, is also working on robot ethics standards. Its provisional code of conduct for robotics engineers and users includes provisions like "robots should act in the best interests of humans" and forbids users from modifying a robot to enable it to function as a weapon.