Friday 8 June 2018

Can Google keep its promises on building ethical AI?

The company certainly talks a good game.

Google's collaboration with the Department of Defense to develop AI systems for the US military's fleet of war drones, dubbed Project Maven, proved a double-edged sword for the technology company. On one hand, the DoD contract was quite lucrative, worth as much as $250 million annually.

On the other hand, public backlash to the news that the company was helping the government build more efficient killing machines was immediate, unwavering and utterly ruthless. A dozen employees quit the company in protest, and another 4,000 petitioned management to terminate the contract outright. The uproar was so deafening that Google had to come out and promise not to renew the deal upon its completion next year.

Now, Sundar Pichai has gone even further to soothe the public, releasing his own version of Asimov's "Three Laws." Of course, Google is no stranger to the AI landscape. The company already leverages varying forms of AI in a number of its products, from Gmail and Photos to its salon-calling digital assistant and the waveform generating system that allows Assistant to speak. But can a company that unilaterally removed its own "Don't be evil" guiding principle from common practice really be trusted to do the right thing when raising artificial minds to maturity?

By Andrew Tarantola.
Full story at Engadget.
