Robotic models of active perception

Dimitri Ognibene, King’s College London, United Kingdom

Unstructured social environments, e.g. building sites, produce an overwhelming amount of information, yet behaviourally relevant variables may not be directly accessible. A key solution found by nature to cope with such problems is Active Perception (AP), as shown by many examples, such as the foveal anatomy of the eye and its control. Effectively designing a system that finds and selects relevant information, and understanding its interdependencies with related functions such as learning, will have an impact on both Robotics and Cognitive Neuroscience.

The main insights from the development of two different Active Vision (AV) robotic models will be presented:

1) an information-theoretic AV model for dynamic environments in which effective behaviour requires the prompt recognition of hidden states (e.g. intentions), interactions (e.g. attraction), and spatial relationships between the elements of the environment. This general framework is described in the context of social interaction, with AV systems that support the anticipation of other agents’ goals [Ognibene & Demiris 2013] and the recognition of complex activities [Lee et al submitted] (see the first sketch after this list);

2) a neural model of the development of AV strategies in ecological tasks, such as exploring and reaching rewarding objects in a class of similar environments (the agent’s world). This model shows that an embodied agent can autonomously learn which contingencies in its world are behaviourally relevant and how to use them to direct its perception [Ognibene & Baldassarre 2014] (see the second sketch after this list).
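
To give a concrete flavour of the selection principle behind the first model, here is a minimal sketch of information-theoretic gaze selection: fixate the location whose expected observation most reduces uncertainty (entropy) about the hidden state, such as another agent’s goal. The goal hypotheses, fixation targets and observation models below are illustrative assumptions, not taken from the cited work.

```python
# Minimal sketch of information-theoretic gaze selection (illustrative, not the
# published implementation): pick the fixation target whose expected observation
# most reduces the entropy of the belief over hidden states (e.g. goals).
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def bayes_update(belief, likelihood):
    """belief: P(goal); likelihood: P(obs | goal) for the observation received."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

def expected_information_gain(belief, sensor_model):
    """sensor_model[o, g] = P(obs=o | goal=g) at a candidate fixation target."""
    h_prior = entropy(belief)
    p_obs = sensor_model @ belief                 # predictive distribution over observations
    eig = 0.0
    for o, p_o in enumerate(p_obs):
        if p_o > 0:
            eig += p_o * (h_prior - entropy(bayes_update(belief, sensor_model[o])))
    return eig

# Illustrative setup: 3 candidate goals, 2 candidate fixation targets,
# each target with its own (assumed) observation model.
belief = np.array([1/3, 1/3, 1/3])
sensor_models = {
    "look_at_hand":   np.array([[0.8, 0.1, 0.1],
                                [0.2, 0.9, 0.9]]),   # informative about goal 0
    "look_at_object": np.array([[0.4, 0.4, 0.2],
                                [0.6, 0.6, 0.8]]),   # less informative
}
best = max(sensor_models, key=lambda t: expected_information_gain(belief, sensor_models[t]))
print("fixate:", best)
```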
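
The core idea of the second model, that useful gaze strategies can emerge from reward alone, can likewise be illustrated with a simple tabular reinforcement-learning sketch rather than the published neural architecture. In this hypothetical setup, a central cue indicates where the rewarding object is; the agent, driven only by reward, discovers the contingency that looking at the cue first pays off.

```python
# Minimal sketch (illustrative, not the published neural model) of learning where
# to look by reinforcement: fixating an assumed central cue reveals where the
# rewarding object is, and the agent learns this contingency from reward alone.
import random

N_LOCS = 5                                     # possible object locations (assumed)
ACTIONS = ["fixate_cue"] + [f"fixate_loc_{i}" for i in range(N_LOCS)]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2              # learning rate, discount, exploration
Q = {}                                         # tabular action values: (state, action) -> value

def q(state, action):
    return Q.get((state, action), 0.0)

def choose(state):
    """Epsilon-greedy fixation choice."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

for episode in range(20000):
    target = random.randrange(N_LOCS)          # where the rewarding object is this episode
    state = "cue_unseen"                       # the agent has not looked at the cue yet
    for _ in range(3):                         # a few fixations per episode
        action = choose(state)
        if action == "fixate_cue":
            next_state, reward = f"cue_says_{target}", 0.0
        elif action == f"fixate_loc_{target}":
            next_state, reward = "done", 1.0   # found the rewarding object
        else:
            next_state, reward = state, 0.0    # looked at an empty location
        best_next = max(q(next_state, a) for a in ACTIONS)
        Q[(state, action)] = q(state, action) + ALPHA * (reward + GAMMA * best_next - q(state, action))
        state = next_state
        if state == "done":
            break

# The learned greedy policy typically looks at the cue first, then at the cued location.
print(max(ACTIONS, key=lambda a: q("cue_unseen", a)))
print(max(ACTIONS, key=lambda a: q("cue_says_2", a)))
```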

The talk will finally touch on recent developments in AP regarding: the extension of the Active Inference framework to AP [Friston et al]; the active allocation of resources for perception in industrial contexts [Darwin EU FP7 Project]; improving perception through the design of body parameters [Sokran, Howard & Nanayakkara 2014]; and haptic exploration [Konstantinova et al 2014] and guidance [Ranasinghe et al 2013].