Creators of AI can no longer trust it

Debi

AI is now so complex its creators can’t trust why it makes decisions

Artificial intelligence is seeping into every nook and cranny of modern life. AI might tag your friends in photos on Facebook or choose what you see on Instagram, but materials scientists and NASA researchers are also beginning to use the technology for scientific discovery and space exploration.

But there’s a core problem with this technology, whether it’s being used in social media or for the Mars rover: the programmers who built it don’t know why it makes one decision over another.

Modern artificial intelligence is still new. Big tech companies have only ramped up investment and research in the last five years, after a decades-old theory was finally shown to work in 2012. Inspired by the human brain, an artificial neural network relies on layers of thousands to millions of tiny connections between “neurons,” little clusters of mathematical computation that pass signals much as neurons in the brain do. But that software architecture comes with a trade-off: because the changes rippling through those millions of connections are so complex and minute, researchers can’t determine exactly what is happening. They just get an output that works.
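To make that trade-off concrete, here’s a minimal sketch of the layered architecture described above. It isn’t from the article; the layer sizes and the NumPy implementation are illustrative assumptions. The point is just how quickly the count of individual connection weights grows, and why no single weight explains a decision.

```python
# Minimal sketch (illustrative, not from the article) of a layered
# neural network: each layer is a matrix of connection weights, and a
# prediction is just repeated multiply-and-squash.
import numpy as np

rng = np.random.default_rng(0)

# A small stack of layers; real networks use far more, so the weight
# count quickly reaches the millions mentioned above.
layer_sizes = [784, 512, 512, 10]          # e.g. image pixels in, 10 classes out
weights = [rng.normal(0, 0.1, (m, n))      # one weight per connection
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def predict(x):
    """Forward pass: every output mixes contributions from every weight."""
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)           # ReLU "squash," like a neuron firing
    return (x @ weights[-1]).argmax()      # the decision that comes out

total = sum(w.size for w in weights)
print(f"{total:,} individual connection weights")  # ~670,000 for this toy net

# Any single weight is just a small number; none of them, inspected
# alone, explains why predict() chose one class over another.
print(predict(rng.normal(size=784)))
```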

At the Neural Information Processing Systems conference in Long Beach, California, the most influential and highest-attended annual AI conference, hundreds of researchers from academia and the tech industry will meet today (Dec. 7) at a workshop to talk about the issue. Researchers who spoke to Quartz say the time to act is now: the decisions of machines need to be made understandable before the technology becomes even more pervasive.

“We don’t want to accept arbitrary decisions by entities, people or AIs, that we don’t understand,” said Uber AI researcher Jason Yosinski, co-organizer of the Interpretable AI workshop. “In order for machine learning models to be accepted by society, we’re going to need to know why they’re making the decisions they’re making.”

Full story at site

As these artificial neural networks move into law enforcement, health care, scientific research, and deciding which news you see on Facebook, researchers warn of a problem with what some have called AI’s “black box.” Previous research has shown that algorithms amplify biases in the data they learn from and make inadvertent connections between ideas.
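A toy sketch of what “amplify” means here, under made-up assumptions (the “kitchen scene” feature and the 70/30 split are illustrative, not from the cited research): the data is only mildly skewed, but a model that simply maximizes accuracy turns that skew into an absolute rule.

```python
# Hypothetical illustration of bias amplification: training data that is
# 70/30 skewed yields a model that predicts from the skewed feature
# 100% of the time.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# feature X: 1 = "kitchen scene"; label y: True = "woman pictured"
# Skew in the data: kitchen scenes carry the label 'woman' 70% of the time.
X = rng.integers(0, 2, 10_000)
y = np.where(X == 1, rng.random(10_000) < 0.7, rng.random(10_000) < 0.3)

# The accuracy-maximizing rule for a single binary feature is simply
# "predict the majority label seen with that feature value."
majority = {v: Counter(y[X == v]).most_common(1)[0][0] for v in (0, 1)}
preds = np.array([majority[v] for v in X])

print("P(label=1 | feature=1) in data:   ", y[X == 1].mean())      # ~0.70
print("P(pred=1  | feature=1) from model:", preds[X == 1].mean())  # 1.00
```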
 
They’re making a super brain! We can’t even beat it at chess, let alone stop it when it wants to take over the world.