Accurate neural network computer vision without the 'black box'
New research offers clues to what goes on inside the minds of machines as they learn to see. A method developed by Cynthia Rudin's lab reveals how much a neural network calls to mind different concepts as an image travels through the network’s layers. Credit: Duke University School of Nursing
The artificial intelligence behind self-driving cars, medical image analysis and other computer vision applications relies on what are called deep neural networks.
Loosely modeled on the brain, these consist of layers of interconnected "neurons"—mathematical functions that send and receive information—that "fire" in response to features of the input data. The first layer processes a raw data input—such as the pixels in an image—and passes that information to the layer above, triggering some of those neurons, which in turn pass a signal to still higher layers, until the network eventually arrives at a determination of what is in the input image.
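The layer-by-layer flow described above can be sketched in a few lines of Python. This is a minimal illustration, not the networks used in the research: the layer sizes, weights, and input are all hypothetical, and a real vision network would have many more layers trained on data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A neuron "fires" (passes a signal up) only when its input is positive.
    return np.maximum(0.0, x)

def softmax(z):
    # Turn the top layer's raw scores into probabilities over classes.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical dimensions: a 4-pixel "image", 5 hidden neurons, 3 classes.
W1 = rng.normal(size=(5, 4))  # first layer: raw pixels -> hidden features
W2 = rng.normal(size=(3, 5))  # higher layer: hidden features -> class scores

def forward(pixels):
    hidden = relu(W1 @ pixels)  # lower layer responds to input features
    scores = W2 @ hidden        # higher layer combines those responses
    return softmax(scores)      # final determination: class probabilities

probs = forward(np.array([0.2, 0.8, 0.5, 0.1]))
```

The point of the sketch is the opacity the article describes: the input and the output probabilities are easy to inspect, but the intermediate `hidden` activations are just numbers with no obvious meaning.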
But here's the problem, says Duke computer science professor Cynthia Rudin. "We can input, say, a medical image, and observe what comes out the other end ('this is a picture of a malignant lesion'), but it's hard to know what happened in between."