The first few seconds in which an ad is viewed are the ones in which your subconscious processes whether that ad signals danger or safety.
Everywhere we look, every split second of the day, our eyes are assaulted by an onslaught of colours, shapes and contrasts, while our subconscious works tirelessly to decide what our brain actually sees and remembers. Our subconscious has been wired through years of evolution to protect us from perceived threats in the world around us while searching for resources for our survival.
Particular objects, qualities or features that stand out and are more prominent than competing objects or features in the surrounding environment are known as salient objects. These objects or features tend to draw more attention and are more likely to be remembered.
Our subconscious is biologically wired for reward, safety and social interaction. Messaging that communicates safety and reward triggers the reward and pleasure centres of the brain, drawing your attention towards it.
Similarly, anything that signals danger will also attract our attention, through our desire to avoid it. Sharp points, edges, angles, colour contrasts and contour shapes all signal danger to the brain.
The animal kingdom is full of examples of visual salience. Animals with red markings stand out starkly to signify that they are toxic or dangerous. Flower petals display red or yellow to attract and create a kind of landing strip for incoming bees.
Visual tactics that signify danger don't have to be all bad - they can be used to attract initial attention and then be coupled with a safe and/or rewarding message. Road traffic signs are an example of this tactic in play.
Salience is often studied in psychology and neuroscience to understand how visual features affect attention, memory and emotional processing. Neuroscientists at the Human Brain Project (HBP), a 10-year collaborative effort predominantly within the European scientific community of neuroscientists, computational engineers and medical scientists, have recently examined how the brain's visual system detects salient objects. This specific collaboration is the ‘Visuo-Motor Integration’ co-design project of the HBP.
The team has designed robots that process vision through recursive computational loops of inputs and outputs, estimating where a human eye would move in order to detect salient features within an environment. Instead of modelling these processes in a computer simulation, the team built robots to demonstrate the visual processes and eye movements directly.
The scientists are most interested in how the brain detects and maintains perceptions of environmental salience: how it estimates changes in the visual field, tracks the movement of the eyes, and updates its representation of the information received by the eyes (as a coordinate system). As Senden explains: "the neurons that take input from the eyes form an orderly map of the retina (referred to as retinotopic map). This means that visual space is brought into a coordinate system centred around fixation". The problem is that the eyes are constantly moving, and the brain has to account for these changes. Currently there are no good computer models that capture both the changes in visual input and the brain's accommodation of those changes.
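To make the coordinate problem concrete, here is a minimal sketch of the bookkeeping involved. This is not the HBP's model - the function name and the two-dimensional setup are illustrative assumptions - but it shows the core idea: an object's position in the fixation-centred (retinotopic) frame shifts by exactly the saccade vector every time the eyes move, even though the object itself is stationary.

```python
import numpy as np

def remap_after_saccade(retinal_coords, saccade):
    """Predict an object's retinotopic position after an eye movement.

    retinal_coords: object position in the current fixation-centred frame.
    saccade: the eye-movement vector, expressed in the same frame.

    The world is stable, but its coordinates on the retina shift with
    every saccade, so the new position is the old one minus the saccade.
    """
    return np.asarray(retinal_coords, dtype=float) - np.asarray(saccade, dtype=float)

# An object 10 units to the right of fixation; a saccade of 10 units
# to the right lands the eye directly on it, so its new retinotopic
# position is the origin.
print(remap_after_saccade([10, 0], [10, 0]))  # [0. 0.]
```

A full model would also have to update this prediction against the actual post-saccadic input - the mismatch between predicted and observed positions is part of what makes the problem hard.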
At the moment there are machine learning algorithms that attempt to predict some aspects of salience - mostly heat maps of objects from videos and stills - but they achieve only modest accuracy. Computational systems are on their way, and it is only a matter of time before they can generate highly accurate estimations of visual salience. As we learn more about how the brain processes visual inputs from the eyes, we will also be able to produce better computational models of visual salience. For consumer neuroscience, this means better models of visual information processing that predict the salient features of our environments.
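As a rough illustration of what a saliency heat map is, the sketch below implements one classic, simple ingredient: centre-surround contrast, where a region scores high if it differs strongly from its surroundings. This is an assumption-laden toy (the function name, the choice of Gaussian scales, and the use of a single greyscale feature are all illustrative), not the algorithms the article refers to, which typically combine many features or learn them from data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(image, center_sigma=1.0, surround_sigma=8.0):
    """Crude saliency heat map for a 2-D greyscale image.

    Blurs the image at a fine ('centre') and a coarse ('surround')
    scale; the absolute difference highlights regions that stand out
    from their local context - the 'pop-out' of salient objects.
    Returns a map normalised to [0, 1].
    """
    img = np.asarray(image, dtype=float)
    center = gaussian_filter(img, center_sigma)
    surround = gaussian_filter(img, surround_sigma)
    saliency = np.abs(center - surround)
    return saliency / (saliency.max() + 1e-9)

# A bright square on a dark background: the map peaks on the square,
# the region a human eye would most likely be drawn to.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = center_surround_saliency(img)
```

Modern saliency predictors replace this hand-crafted contrast measure with learned features, but the output format - a heat map of predicted attention - is the same.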
The results from these findings would allow us to predict with better accuracy which objects in an environment stand out and will be remembered. This has big implications for the way this technology will be used in future Neuromarketing studies. It also provides insight into where machine learning is taking us in gathering more accurate and reliable data on consumer behaviour, in areas like visual advertising and visual design, and what we can look forward to as machine learning only improves. Stay tuned. Keep excited.
For more information visit the HBP blog here.