The Dark Secret at the Heart of AI – MIT Technology Review

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
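The article doesn't describe the network itself, but the general idea it reports, a model that maps raw camera input directly to driving commands and learns by imitating a human, can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch, not Nvidia's actual system; the layer sizes, the 66×200 frame, and the single steering output are assumptions for the example.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Toy end-to-end driving model: camera frame in, steering command out.
    Illustrative only; layer sizes are assumptions, not Nvidia's design."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 1),  # one steering command
        )

    def forward(self, frame):
        return self.head(self.features(frame))

model = SteeringNet()
frame = torch.randn(1, 3, 66, 200)    # one (hypothetical) RGB camera frame
human_angle = torch.tensor([[0.1]])   # the steering a human driver applied
loss = nn.functional.mse_loss(model(frame), human_angle)
loss.backward()  # adjust the network to better imitate the human
```

Nothing in this loop tells the car *rules* for driving; it only nudges millions of internal weights toward reproducing human behavior, which is exactly why the resulting decisions are so hard to trace back to a reason.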

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.


If I Only Had a Brain: How AI ‘Thinks’ – The Daily Beast

Artificial intelligence has gotten pretty darn smart—at least, at certain tasks. AI has defeated world champions in chess, Go, and now poker. But can artificial intelligence actually think?

The answer is complicated, largely because intelligence itself is complicated. One can be book-smart, street-smart, emotionally gifted, wise, rational, or experienced; it’s rare and difficult to be intelligent in all of these ways. Intelligence has many sources, and our brains don’t respond to them all in the same way. Thus, the quest to develop artificial intelligence begets numerous challenges, not the least of which is how much we still don’t understand about human intelligence.

Still, the human brain is our best lead when it comes to creating AI. Human brains consist of billions of interconnected neurons that transmit information to one another, organized into areas dedicated to functions such as memory, language, and thought. The human brain is dynamic, and just as we build muscle, we can enhance our cognitive abilities: we can learn. So can AI, thanks to the development of artificial neural networks (ANNs), a type of machine learning algorithm in which nodes simulate neurons that compute and distribute information. AI such as AlphaGo, the program that beat the world champion at Go last year, uses ANNs not only to compute the statistical probabilities and outcomes of various moves, but also to adjust its strategy based on what the other player does.
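To make the neuron analogy concrete, here is a minimal sketch (not from the article) of three such nodes in plain NumPy: each node weighs its incoming signals, sums them, passes the result through an activation function, and distributes the output onward. The specific weights below are arbitrary placeholders; in a real ANN they would be learned from data.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial 'neuron': weigh the inputs, sum them, then fire."""
    return np.tanh(np.dot(inputs, weights) + bias)

# Three input signals fan out to two neurons, whose outputs feed a third.
x = np.array([0.5, -1.0, 0.25])
h1 = neuron(x, np.array([0.4, 0.3, -0.2]), 0.1)
h2 = neuron(x, np.array([-0.6, 0.9, 0.5]), -0.3)
out = neuron(np.array([h1, h2]), np.array([1.2, -0.7]), 0.0)
print(out)
```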

Facebook, Amazon, Netflix, Microsoft, and Google all employ deep learning, which expands on traditional ANNs by stacking many layers of nodes between input and output. More layers allow the network to form more representations of the data and more links between them. This resembles human thinking: when we process input, we do so in something akin to layers.
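A hedged sketch of what "adding layers" means in practice: the same layer operation composed several times, so each layer computes a new representation of the previous layer's output. The layer sizes and random weights below are illustrative stand-ins for parameters a real network would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """Each layer re-represents its input before passing it on."""
    return np.maximum(0.0, x @ w + b)  # ReLU activation

# A "deep" network is just many layers composed; each extra layer
# builds representations on top of the previous layer's representations.
sizes = [8, 16, 16, 16, 4]  # input -> three hidden layers -> output
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=8)      # raw input features
for w, b in zip(weights, biases):
    x = layer(x, x_w := w, b) if False else layer(x, w, b)  # pass through each layer in turn
print(x)                    # the deepest, most abstract representation
```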
