
How Do You Code Intuition Into AI?

June 20, 2017

Where were you at 2:14 in the morning (Eastern Daylight Time) on August 29, 1997? That was the moment, according to Arnold Schwarzenegger’s Cyberdyne Systems Model 101 Terminator, when the world’s first (and sadly the last) AI became self-aware.

In all probability, though, AI isn’t going to suddenly switch on like that. It won’t happen at a specific moment. Self-awareness in the human species took about two million years, give or take a few hundred thousand years. Although a sample size of one doesn’t tell us much about the process, it’s reasonable to assume that self-aware AI will emerge in pieces over a long period of time, if it hasn’t already.

Danny Hillis, co-founder of Thinking Machines Corporation, told filmmaker Werner Herzog, “I can not only imagine artificial intelligence evolving spontaneously on the internet, but I can’t tell you that it hasn’t happened already. Because it wouldn’t necessarily reveal itself to us.”


That inscrutability, the difficulty of peering into the artificial brains we have built so far, is what concerns Will Knight, senior editor at MIT Technology Review. In his article "The Dark Secret at the Heart of AI," he interviewed several of the world’s leading AI experts, trying to pin down precisely what we know about what we don’t know.

Increasingly, deep learning algorithms are teaching themselves how to do things like drive cars and identify security threats. The number of inputs is so immense, and the weighting of them so subtle, that the machines themselves couldn’t explain how they arrived at a decision, even if we could ask them.
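To make the "black box" point concrete, here is a minimal sketch, not taken from the article, of why even a toy neural network resists explanation. The architecture, weights, and input are all hypothetical illustrations; the decision that comes out is the composition of thousands of numbers, no one of which maps to a human-readable reason.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny two-layer network:
# 100 inputs -> 50 hidden units -> 2 output classes.
W1 = rng.standard_normal((100, 50))   # 5,000 weights
W2 = rng.standard_normal((50, 2))     # 100 more weights

def decide(x):
    hidden = np.tanh(x @ W1)          # every input nudges every hidden unit
    logits = hidden @ W2              # every hidden unit nudges both scores
    return int(np.argmax(logits))     # the "decision" is just the larger score

x = rng.standard_normal(100)          # a made-up input vector
print(decide(x))                      # e.g. 1, but no single weight explains why
```

And this toy has only 5,100 weights. A production network for driving or threat detection has millions, which is why "just look inside" is not a real answer.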

MIT professor Tommi Jaakkola summed up the argument: “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

The World Economic Forum took a fascinating position on this topic, concerning itself less with AI than with IA, or intelligence augmentation. Like Elon Musk's "neural lace," IA refers to enhancing human intelligence with a computer interface.

IBM says it is devoting more time to developing IA so that human brains can take advantage of its Watson system. Meanwhile, the School of Computer Science at Carnegie Mellon University announced that around 98 percent of its current research involves applications of IA rather than more typical AI development.

The most ironic aspect of the investigation into AI reasoning is that we can’t even explain how we come to our own decisions. Data and rationality are only a small part of our own processes.

Many human decisions come from a cognitive shortcut called the “recognition heuristic,” though it might as well be called “the gut.” Certain things we know because we just know, and maybe that’s all AI will be able to answer as well. It’s part experience, part logic, and a magical portion of common sense, even in uncommon situations.
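The recognition heuristic (studied by Goldstein and Gigerenzer) is simple enough to write down: when choosing between two options and you recognize only one, pick the one you recognize. A minimal sketch, with made-up city names as placeholders:

```python
# Hypothetical set of names the decision-maker happens to recognize.
recognized = {"Berlin", "Munich", "Hamburg"}

def recognition_choice(a, b):
    """Pick the recognized option when exactly one is recognized."""
    if (a in recognized) != (b in recognized):
        return a if a in recognized else b   # one-reason "gut" decision
    return None  # both or neither recognized: fall back on other knowledge

print(recognition_choice("Berlin", "Herne"))  # -> "Berlin"
```

One line of logic, no weighing of evidence, and yet in the classic "which city is bigger?" experiments it performs remarkably well. The gut, it turns out, has an algorithm too.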

Daniel Dennett, philosopher and cognitive scientist at Tufts University, suggested, “If it can’t do better than us at explaining what it’s doing, then don’t trust it.” Instead of digging into the algorithms that drive AI to find answers, perhaps we just need to invite it, and ourselves, into more civil discourse.


Topics:

Machine Learning
