Attentional landscapes for object interaction

Anna Belardinelli, University of Tübingen, Germany

Computational models of visual attention have been proposed to prioritize and extract interesting regions in a scene. Similar issues arise when considering vision for object interaction.
Gaze control acts as a gateway between three components, effectively linking the task, the object, and the effector.
In an eye-tracking experiment, participants were shown images of real-world graspable objects and asked either to categorize them or to pantomime lifting or opening them. Fixation locations and attention heatmaps show distinct modes around task-relevant locations, in accordance with previous literature and with touch maps collected via a touchscreen.
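As an illustration of how such attention heatmaps can be obtained from raw gaze data, the following sketch accumulates duration-weighted fixations and blurs them with a Gaussian; the fixation coordinates, durations, and the Gaussian width (standing in for the foveal extent) are hypothetical and not taken from the experiment.

import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_heatmap(fixations, durations, image_shape, sigma_px=30):
    """Accumulate duration-weighted fixations and blur them into a heatmap."""
    heat = np.zeros(image_shape, dtype=float)
    for (x, y), dur in zip(fixations, durations):
        heat[int(round(y)), int(round(x))] += dur   # weight each fixation by its duration
    heat = gaussian_filter(heat, sigma=sigma_px)     # spread to approximate foveal extent (assumed width)
    if heat.max() > 0:
        heat /= heat.max()                           # normalise to [0, 1]
    return heat

# Example with made-up fixations on a 480 x 640 object image
fix = [(320, 200), (340, 215), (300, 260)]
dur = [250, 180, 300]                                # fixation durations in milliseconds
hm = fixation_heatmap(fix, dur, image_shape=(480, 640))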
The relation between task-specific fixation/touch point distributions and object categories can be learnt by using suitable object descriptors as input vectors and the heatmaps as target functions. Since affording points depend both on the task and on the shape of the object, we use Histograms of Oriented Gradients (HOG) to encode the global object shape and kernel regression to learn the mapping to the fixated/touched locations for each task.
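The sketch below illustrates this kind of mapping under simplifying assumptions: the images and heatmaps are synthetic placeholders, the HOG parameters are guesses rather than the values used in the study, and scikit-learn's KernelRidge stands in for the kernel regression that learns one fixation/touch distribution per task.

import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.kernel_ridge import KernelRidge

def hog_descriptor(img):
    """Global HOG shape descriptor for a grayscale object image."""
    return hog(img, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

# Synthetic training data: N object images paired with task-specific heatmaps
N, H, W = 20, 128, 128
rng = np.random.default_rng(0)
images = rng.random((N, H, W))        # placeholder object images
heatmaps = rng.random((N, 16, 16))    # placeholder fixation/touch heatmaps for one task

X = np.stack([hog_descriptor(im) for im in images])   # shape descriptors as input vectors
Y = heatmaps.reshape(N, -1)                            # flattened heatmaps as targets

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-3)
model.fit(X, Y)

# Predict the task-specific attention map for a new object
new_img = rng.random((H, W))
pred = model.predict(hog_descriptor(new_img)[None, :]).reshape(16, 16)
pred_full = resize(pred, (H, W))                       # upsample for visualisation

In practice one such regressor would be trained per task (categorize, lift, open), so that the same object shape yields different predicted attentional landscapes depending on the intended interaction.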