Better Vision Through Manipulation

Giorgio Metta, LIRA-Lab, DIST, University of Genova, Viale F. Causa 13, 16145 Genova, Italy; Paul Fitzpatrick, MIT AI Lab, 200 Technology Square, Cambridge, MA 02139, USA

Abstract

For the purposes of manipulation, we would like to know what parts of the environment are physically coherent ensembles: that is, which parts will move together, and which are more or less independent. It takes a great deal of experience before this judgement can be made from purely visual information. This paper develops active strategies for acquiring that experience through experimental manipulation, using tight correlations between arm motion and optic flow to detect both the arm itself and the boundaries of objects with which it comes into contact. We argue that following causal chains of events out from the robot's body into the environment allows for a very natural developmental progression of visual competence, and relate this idea to results in neuroscience.

Introduction

A robot is an actor in its environment and not simply a passive observer. This gives it the potential to examine the world using causality, by performing probing actions and learning from the response.

Tracing chains of causality from motor action to perception (and back again) is important both to understand how the brain deals with sensorimotor coordination and to implement those same functions in an artificial system, such as a humanoid robot. In this paper, we propose that such causal probing can be arranged in a developmental sequence leading to a manipulation-driven representation of objects. We present results for two important steps along the way, and describe how we plan to proceed. Table 1 shows three levels of causal complexity.

The simplest causal chain that the robot experiences is the perception of its own actions. The temporal aspect is immediate: visual information is tightly synchronized to motor commands. We use this strong correlation to identify parts of the robot body, specifically the end-point of the arm. Once this causal connection is established, we can go further and use it to actively explore the boundaries of objects.
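The correlation-based localization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of Pearson correlation, and the threshold value are all assumptions, and in practice the motor signal would come from joint encoders and the flow from an optic-flow estimator.

```python
import numpy as np

def localize_arm(flow_mag, motor_activity, threshold=0.7):
    """Hypothetical sketch: find image regions whose optic-flow magnitude
    correlates strongly with the robot's own motor activity.

    flow_mag:       array of shape (T, H, W), optic-flow magnitude per frame
    motor_activity: array of shape (T,), e.g. summed joint velocities
    Returns a boolean (H, W) mask of pixels likely belonging to the arm.
    """
    T, H, W = flow_mag.shape
    flows = flow_mag.reshape(T, -1)
    # Pearson correlation of each pixel's flow signal with the motor signal
    f = flows - flows.mean(axis=0)
    m = motor_activity - motor_activity.mean()
    denom = np.sqrt((f ** 2).sum(axis=0) * (m ** 2).sum())
    denom[denom == 0] = np.inf  # static pixels get correlation 0
    corr = (f * m[:, None]).sum(axis=0) / denom
    return (corr > threshold).reshape(H, W)
```

Because the robot's own commands generate the motor signal, this segmentation needs no prior model of the arm's appearance, which is what makes it a natural first step in the developmental sequence.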

In this case, there is one more step in the causal chain, and the temporal nature of the response may be delayed, since initiating a reaching movement doesn't immediately elicit consequences in the environment. Finally, we argue that extending this causal chain further will allow us to approach the representational power of "mirror neurons" (Fadiga et al., 2000), where a connection is made between our own actions and the actions of another.
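The delayed, contact-triggered response can be illustrated with a similarly minimal sketch: wait for the moment when the amount of moving image area suddenly jumps (the arm striking an object sets new pixels in motion), and take the moving region at that frame as a candidate object segment. The function name and both thresholds are assumptions for illustration, not values from the paper.

```python
import numpy as np

def detect_contact(flow_mag, motion_thresh=0.5, jump_factor=3.0):
    """Hypothetical sketch: spot the moment of contact as a sudden jump
    in moving image area during a reaching movement.

    flow_mag: array of shape (T, H, W), optic-flow magnitude per frame
    Returns (t, mask): the contact frame index and the boolean (H, W)
    mask of moving pixels at that frame, or (None, None) if no jump.
    """
    moving = flow_mag > motion_thresh                   # (T, H, W) boolean
    area = moving.reshape(len(flow_mag), -1).sum(axis=1)
    for t in range(1, len(area)):
        # contact: moving area jumps well above the previous frame's
        if area[t] > jump_factor * max(area[t - 1], 1):
            return t, moving[t]
    return None, None
```

The key point matches the argument in the text: the visual consequence arrives only when contact actually occurs, so the robot must watch for a delayed event rather than an immediate echo of its command.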
