Critics have been urging companies involved in the creation of artificial intelligence to develop a code of ethics before it’s too late. Now Google is complying, following backlash over its work with the U.S. Pentagon developing a system to analyze military drone visuals.
But is this new set of principles enough to calm people’s fears about the potential dangers of militarized A.I.? Or is it just a public relations sleight of hand intended to assuage the critics?
After all, without any independent oversight, there’s little binding Google to its word.
The need for oversight is particularly pressing with regard to militarized A.I., or autonomous weapons systems. What differentiates this category of weapons is their autonomy: combat drones, for example, that could eventually replace human-piloted fighter planes.
This story was originally published on CBC News. To read the rest of this newsworthy story, please visit http://www.cbc.ca/news/technology/google-militarized-ai-1.4707697?cmp=rss.