The new project stemmed from an unexpected discovery made when researchers trained an artificial neural network to study car dashcam footage.
WashU assistant professor of neuroscience Tom Franken and professor of physics Ralf Wessel have secured a $427,625 grant from the National Institutes of Health to study how artificial neural networks and primate brains process and predict video imagery.
In doing so, they’ll draw novel connections between how artificial and biological brains analyze visual information, starting from the basic tasks of differentiating objects in a scene.
The key concept at play in these studies is “border ownership,” a visual process that aids in distinguishing foreground objects from background to help determine an object’s shape. Several famous optical illusions play with our sense of border ownership, including Rubin’s vase, which appears to depict the silhouette of a vase... or a pair of faces, depending on how one determines the image’s foreground versus its background.
In a previous study, Franken, Wessel, and graduate student Zeyuan Ye unexpectedly discovered border ownership signals emerging in an artificial neural network trained to study dashcam footage from cars. The network’s goal was to predict the next frame of each video, not to identify the shapes of the objects in the video.
But as it turned out, those border ownership signals were critical to the network's task. When the research team deleted the units carrying those signals, the network's prediction accuracy dropped substantially.
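The logic of that test, sometimes called an ablation experiment, can be illustrated with a toy sketch. The code below is purely hypothetical and is not the team's actual network (the article describes it only at a high level): a hand-wired linear predictor forecasts the next frame of a toy "video," and zeroing the hidden units that carry the needed information degrades its predictions, just as deleting border ownership signals degraded the real network's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video": each frame is a 1-D strip of pixels; the next frame is the
# current frame shifted by one pixel (a crude stand-in for object motion).
def next_frame(frame):
    return np.roll(frame, 1)

# Hypothetical two-layer linear predictor with hand-set weights.
# Each hidden unit carries one pixel's worth of the information the
# decoder needs -- a loose analogue of a signal the network depends on.
W_in = np.eye(8)                       # encoder: 8 hidden units
W_out = np.roll(np.eye(8), 1, axis=0)  # decoder: implements the shift

def predict(frame, ablate=None):
    h = W_in @ frame
    if ablate is not None:
        h = h.copy()
        h[ablate] = 0.0                # "delete" the chosen hidden units
    return W_out @ h

frames = rng.normal(size=(100, 8))

def mean_error(ablate=None):
    errs = [np.mean((predict(f, ablate) - next_frame(f)) ** 2) for f in frames]
    return float(np.mean(errs))

baseline = mean_error()               # intact network: near-perfect prediction
ablated = mean_error(ablate=[0, 1, 2])  # ablated network: error increases
print(baseline, ablated)
```

The comparison of `baseline` against `ablated` is the crux: if prediction error rises when a set of units is silenced, those units were doing work the task depends on.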
“Our surprising findings thus suggest that border ownership signals in the brain may have a different function than what is usually assumed,” Franken said. “They may be important to predict future visual input in complex, natural videos rather than to simply identify the shape of objects.”
Franken, Wessel, and Ye will study both artificial and primate brains to determine how border ownership processes aid in predicting complex, dynamic visual information. If the findings hold up, border ownership could become a key building block in the development of devices that restore vision to blind people, as well as a way to improve the performance of AI.
The project falls squarely within the wheelhouse of Toward a Synergy Between Artificial Intelligence and Neuroscience, a multiyear project funded by the Incubator for Transdisciplinary Futures. Wessel, one of the group's faculty leads, says the grant results directly from the group's collaborative work.
“When Arts & Sciences leadership engages with faculty in moonshot ideas, such as the AI and Neuroscience cluster, and researchers from different departments stick their heads together, the result is more than the sum of its parts,” Wessel said. “Projects like these result from great teamwork across academic ranks, from grad students to the dean.”