Tech Giants Team Up to Keep AI From Getting Out of Hand

Say hello to the Partnership on Artificial Intelligence to Benefit People and Society.

Let's face it: artificial intelligence is scary. After decades of dystopian science fiction novels and movies in which sentient machines turn on humanity, we can't help but worry as real-world AI continues to improve at such a rapid rate. Sure, that danger is probably decades away, if it's a real danger at all. But there are more immediate concerns. Will automated robots cost us jobs? Will online face recognition destroy our privacy? Will self-driving cars mess with moral decision-making?

The good news is that many of the tech giants behind the new wave of AI are well aware that it scares people---and that these fears must be addressed. That's why Amazon, Facebook, Google's DeepMind division, IBM, and Microsoft have founded a new organization called the Partnership on Artificial Intelligence to Benefit People and Society.

"Every new technology brings transformation, and transformation sometimes also causes fear in people who don't understand the transformation," Facebook's director of AI Yann LeCun said this morning during a press briefing dedicated to the new project. "One of the purposes of this group is really to explain and communicate the capabilities of AI, specifically the dangers and the basic ethical questions."

If all that sounds familiar, that's because Tesla and SpaceX CEO Elon Musk has been harping on this issue for years, and last December, he and others founded an organization, OpenAI, that aims to address many of the same fears. But OpenAI is fundamentally an R&D outfit. The Partnership on AI is something different. It's a consortium, open to anyone, that seeks to facilitate a much wider dialogue about the nature, purpose, and consequences of artificial intelligence.

According to LeCun, the group will operate in three fundamental ways. It will foster communication among those who build AI. It will rope in additional opinions from academia and civil society, people with a wider perspective on how AI will affect society as a whole. And it will inform the public on the progress of AI. That may include educating lawmakers, but the organization says it will not lobby the government.

Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial. We've already seen a chatbot spout racist phrases it learned on Twitter, an AI beauty contest decide that black people are less attractive than white people, and a system that rates the risk of someone committing a crime in a way that appears biased against black people. If a more diverse set of eyes looks at AI before it reaches the public, the thinking goes, these kinds of things can be avoided.

The rub is that even if this group can agree on a set of ethical principles, which will be hard to do in a large group with many stakeholders, it won't really have a way to ensure those ideals are put into practice. Although one of the organization's tenets is "Opposing development and use of AI technologies that would violate international conventions or human rights," Mustafa Suleyman, the head of applied AI at DeepMind, says that enforcement is not the objective of the organization.

In other words, if one of the member organizations decides to do something blatantly unethical, there's not really anything the group can do to stop them. Rather, the group will focus on gathering input from the public, sharing its work, and establishing best practices.

Just bringing people together isn't enough to solve the problems that AI raises, says Damien Williams, a philosophy instructor at Kennesaw State University who specializes in the ethics of non-human consciousness. Academic fields like philosophy have diversity problems of their own, and opinions on these questions vary widely. One enormous challenge, he says, is that the group will need to continually reassess its thinking rather than settle on a static list of ethics and standards that doesn't change or evolve.

Williams is encouraged that tech giants like Facebook and Google are even asking questions about ethics and bias in AI. Ideally, the group will help establish new standards for thinking about artificial intelligence, big data, and algorithms that can weed out harmful assumptions and biases. But that's a mammoth task. As co-chair Eric Horvitz from Microsoft Research put it, the hard work begins now.