We use behavioural, psychophysical, eye-tracking, EEG, and fMRI experiments to study various aspects of vision and attentional function. In particular, we study the links between visual attention and eye movements, the control we have over which things in our environment capture our attention, and the cognitive mechanisms involved in processing events and selecting appropriate responses to those events.
In the animation below, your goal is to fixate on the cross in the middle and, keeping your eyes still, say out loud whether the red letter is a T or an L as quickly as you can. Have a go now before you read on.
You might have found that you took longer to say the identity of the red letter when the red circle appeared at a different location from the red letter. Conversely, you might have found you were faster when the red circle appeared at the same location as the red letter. This is called “attentional capture” – because you are searching for red, red things capture your attention whether they are relevant (like the letter) or irrelevant (like the circle).
We use cueing paradigms like the one above to investigate how and why objects and events in the world around us capture and hold our attention. There are various projects currently being undertaken in our lab in which we are testing how different stimuli (rare, coloured, 3D, threatening) change the way we process visual information.
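The logic of these cueing experiments can be sketched in a few lines of Python. This is an illustrative toy, not the lab’s actual analysis code: the function name and the trial data are invented, and real experiments involve many more trials and careful timing control. The idea is simply to compare response times (RTs) on trials where the cue appeared at the target’s location (“valid”) against trials where it appeared elsewhere (“invalid”):

```python
# Toy sketch of a spatial-cueing analysis (hypothetical data and names).
# A positive cueing effect means invalid-cue trials were slower,
# consistent with the cue having captured attention.

def cueing_effect(trials):
    """Return mean invalid-cue RT minus mean valid-cue RT, in ms.

    `trials` is a list of (cue_location, target_location, rt_ms) tuples.
    """
    valid = [rt for cue, target, rt in trials if cue == target]
    invalid = [rt for cue, target, rt in trials if cue != target]
    return sum(invalid) / len(invalid) - sum(valid) / len(valid)

# Fabricated RTs showing a typical capture pattern:
trials = [
    ("left", "left", 420), ("right", "right", 430),   # valid cues
    ("left", "right", 480), ("right", "left", 470),   # invalid cues
]
print(cueing_effect(trials))  # → 50.0
```

In a real study, this difference score would be computed per participant and tested statistically across the group.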
We use diverse eye trackers (e.g., from SMI, SR Research) to record what people do with their eyes during various tasks. In our experiments, we use attention-grabbing stimuli such as a bright flash to draw a person’s attention to a particular location. By briefly presenting a response-relevant stimulus shortly afterwards, we can test how grabbing people’s attention in one location affects their ability to process information at the same location as the flash, or somewhere else.
Relational Account of Attention
Our lab also has several projects underway examining the Relational Theory, proposed by our own Dr Stefanie Becker. This theory states that in the involuntary capture of visual attention, the observer’s top-down template extends to include relational information about how search targets differ from their surrounding environment. Conversely, competing models hold that attention is automatically guided to the most salient items in the display. The theory has been tested using colour and size stimuli, and is currently being tested with depth stimuli and spatial stimuli.
Cognition, Attention, Eye movements, Perception, Contingent capture, Visual Attention, Visual Search, Neuropsychology