Research reveals how the brain recognizes what’s important at first glance.
Researchers at the Centre for Neuroscience Studies (CNS) at Queen’s University have discovered that a region of the brain – the superior colliculus – contains a mechanism that interprets how visual input from a scene determines where we look. This mechanism, known as a visual saliency map, allows the brain to quickly identify and act on the most important information in the visual field, and is fundamental to our everyday vision.
The study, published today in the journal Nature Communications, found that neurons in this region of the brain encode a visual saliency map (a representation, or distilled version, of the scene that highlights its most visually conspicuous objects) whose activity closely matches the predictions of established computational models of saliency. The research opens up new opportunities in a wide range of fields including neuroscience, psychology, visual robotics, and advertising, as well as applications for diagnosing neurological disorders.
“When we look out at the world, the first things that attract our gaze are the low-level visual features that comprise a scene – the contours, the colours, the luminance of the scene – and computational models of visual saliency are designed to predict where we will look based on these features,” explains Brian White, a postdoctoral researcher at the CNS and the study’s lead author. “Our colleagues at the University of Southern California – led by Professor Laurent Itti – are at the forefront in the development of these models. With our neurophysiological expertise, we showed that neurons in the superior colliculus create a saliency map that guides attention, in much the same way as predicted by the saliency model. Until now, this was largely just a concept with little supporting evidence, but our latest study provides the first strong neural evidence for it.”
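To make the idea concrete, here is a minimal, illustrative Python sketch of a saliency map in the spirit of the feature-based models described above. It is not the authors’ code, and the particular features, blur scales, and equal weighting are assumptions chosen only for illustration: the sketch approximates centre-surround contrast for luminance, colour opponency, and edge (contour) energy, then sums the normalized maps.

```python
# Illustrative sketch (not the authors' code): a simplified saliency map
# built from low-level features -- luminance, colour opponency, and edges.
# Full models (e.g. Itti-style) use multi-scale pyramids and normalization;
# here a single fine/coarse blur pair stands in for centre-surround contrast.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def center_surround(feature, sigma_c=2.0, sigma_s=8.0):
    """Centre-surround contrast: fine-scale blur minus coarse-scale blur."""
    return np.abs(gaussian_filter(feature, sigma_c) - gaussian_filter(feature, sigma_s))

def saliency_map(rgb):
    """rgb: H x W x 3 float array in [0, 1]. Returns an H x W saliency map."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    intensity = (r + g + b) / 3.0                                # luminance
    rg = r - g                                                   # red-green opponency
    by = b - (r + g) / 2.0                                       # blue-yellow opponency
    edges = np.hypot(sobel(intensity, 0), sobel(intensity, 1))   # contour energy

    # Each feature yields a conspicuity map; normalize and sum them.
    maps = [center_surround(f) for f in (intensity, rg, by, edges)]
    sal = sum(m / (m.max() + 1e-8) for m in maps)
    return sal / sal.max()

# Example: predicted fixation is the most salient location in a test frame.
frame = np.random.rand(240, 320, 3)
sal = saliency_map(frame)
print("predicted fixation (row, col):", np.unravel_index(sal.argmax(), sal.shape))
```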
Dr. White and his co-investigators, including fellow Queen’s researcher Douglas Munoz, measured how neurons in this area of the brain respond to natural visual stimuli, such as videos of dynamic natural scenes. The research team found a strong correlation between the model’s predictions of visual saliency across the scene and the activation patterns of these neurons – demonstrating not only the validity of the model in predicting visual saliency and attention, but also opening new possibilities in a range of fields.
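As a toy illustration of the kind of model-to-neuron comparison described here (not the study’s actual analysis pipeline), one could correlate the model’s frame-by-frame saliency at a recorded location with the neuron’s firing rate; the data below are simulated.

```python
# Toy illustration with simulated data (not the study's analysis):
# compare model-predicted saliency inside a neuron's receptive field
# against its firing rate, frame by frame, via a Pearson correlation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame saliency and a firing rate that partly tracks it.
model_saliency = rng.random(500)                            # 500 video frames
firing_rate = 40 * model_saliency + rng.normal(0, 5, 500)   # spikes/s, plus noise

r = np.corrcoef(model_saliency, firing_rate)[0, 1]
print(f"saliency-response correlation: r = {r:.2f}")
```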
Dr. White says the findings have important applications in the development of diagnostic tests for neurological disorders such as Parkinson’s disease, Huntington’s disease and Alzheimer’s disease. Patients with these disorders show gaze patterns that differ from those of healthy controls when viewing natural scenes. The saliency model can be used to characterize these differences, which, interpreted alongside the neurophysiological results, can help explain how these patients’ brains process visual scenes differently.
“While a number of fields can benefit from an improved understanding of saliency coding in the brain, the real benefit is the opportunity for further study on the superior colliculus and how it integrates inputs from other brain areas,” Dr. White says. “We’re very interested in furthering both the clinical and diagnostic benefits that can be derived from these findings, as well as the opportunity for further basic research.”
Source of text: Queen’s University
Original Research article:
White BJ, Berg DJ, Kan JY, Marino RA, Itti L, Munoz DP. Superior colliculus neurons encode a visual saliency map during free viewing of natural dynamic video. Nat Commun. 2017 Jan 24;8:14263. doi: 10.1038/ncomms14263.