University of Wisconsin–Madison
School of Journalism and Mass Communication

Do data robots need their own set of ethics?

Google recently acquired DeepMind for $400 million and will incorporate the London-based artificial intelligence startup’s team and software into Google’s search team, now known as the “Knowledge” group.

This is an especially vital development for journalists, who often turn to Google Search first when researching a story.

DeepMind specializes in artificial intelligence, a rapidly developing area at Google that also includes Google Glass, a wearable device with speech recognition, and the driverless car, which is street-legal in three states. The most striking element of the acquisition, however, was DeepMind's stipulation, which Google accepted, that Google create an artificial intelligence ethics review board to oversee the safe development of these technologies.

Bianca Bosker, writing for The Huffington Post, pointed to a 2011 interview in which one of DeepMind's co-founders offered a slightly disconcerting outlook on human beings' future relationship with artificial intelligence and smart technologies, as possible motivation for the ethics board.

“Eventually, I think human extinction will probably occur, and technology will likely play a part in this,” DeepMind’s Shane Legg said in an interview with Alexander Kruel. Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the “number 1 risk for this century.”

Bosker also outlined possible guidelines for the new ethics board.

Together with input from other AI researchers, [author James] Barrat has developed a wishlist of five policies he hopes Google’s safety board will adopt to ensure the applications of AI are ethical. These include creating guidelines that determine when it’s “ethical for systems to cause physical harm to humans,” how to limit “the psychological manipulation of humans” and how to prevent “the concentration of excessive power.”

Read the entire article here.

Liz Gannes and James Temple explain how DeepMind’s “deep learning” artificial intelligence designs could work within Google Search and smart devices in this Re/code article.

Deep learning is a form of machine learning in which researchers attempt to train computer algorithms to spot meaningful patterns by showing them lots of data, rather than trying to program in every rule about the world. Taking inspiration from the way neurons work in the human brain, deep learning uses layers of algorithms that successively recognize increasingly complex features — going from, say, edges to circles to an eye in an image.

Read the entire article here.
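The layered idea Gannes and Temple describe can be sketched in a few lines of Python. This is an illustrative toy, not anything resembling Google's or DeepMind's actual systems: each "layer" mixes its inputs and applies a nonlinearity, so that later layers can build more complex features out of the simple ones detected earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    # One "layer": a linear mix of the inputs followed by a nonlinearity
    # (here ReLU). Stacking layers lets later ones combine the simple
    # features detected by earlier ones into increasingly complex ones.
    return np.maximum(0, x @ weights)

# Toy "image": 16 raw pixel values (weights here are random for
# illustration; a real system learns them from lots of data).
pixels = rng.random(16)

# Three stacked layers, conceptually: edges -> circles -> an eye.
w1 = rng.standard_normal((16, 8))
w2 = rng.standard_normal((8, 4))
w3 = rng.standard_normal((4, 2))

features = layer(layer(layer(pixels, w1), w2), w3)
print(features.shape)  # (2,): two high-level features from 16 raw pixels
```

The point of the sketch is only the shape of the computation: raw data goes in one end, and each successive layer produces a smaller, more abstract description of it.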

Gary Marcus, writing for The New Yorker, discussed the host of ethical issues that came with Google’s other smart technologies and outlined the ideal capabilities of the developing artificial intelligence market.

What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.

Read the entire article here.

As companies like Google integrate artificial intelligence into their products, especially Google Search, often a journalist's first stop on a story, ethics will remain a highly contested topic. Still, Google has set an important precedent for the technology industry by acknowledging the ethical implications of how these technologies can affect humans.

(image credit: Alejandro Zorrilal Cruz [Public domain], via Wikimedia Commons)
