Xiaogang Yan, Department of Computer Science
Neural Network Models for Learning Representations of 3D Objects via Tactile Exploration
How does the brain represent the geometry of 3D objects? Most researchers considering this question focus on vision. However, infants first learn about 3D objects through the haptic system, that is, by tactile exploration. In this talk, I will present a neural network model that learns something about the structure of a 3D cuboid, using input from the motor system that controls a simulated hand navigating over its surfaces. It does this with a simple unsupervised network that learns to represent frequently experienced sequences of motor movements. The network learns an approximate mapping from agent-centred (i.e., egocentric) movements to object-centred (i.e., allocentric) locations on the cuboid's surfaces. I will show how this mapping can be improved by adding tactile landmarks, by asymmetries in the cuboid, and by incorporating information about the agent's own configuration. I will also show how the learned geometry of the cuboid can support a reinforcement scheme that enables the agent to learn simple paths to goal locations on the cuboid.
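
To make the egocentric-to-allocentric idea and the reinforcement scheme concrete, below is a minimal self-contained Python sketch. It is illustrative only, not the speaker's model: the unsupervised sequence-learning network is replaced by a hand-coded cube-surface environment with one cell per face, and tabular Q-learning stands in for the reinforcement scheme. The agent issues only egocentric motor commands (forward, left, right), while its allocentric location is the face it stands on; all names and parameters here are assumptions made for the sketch.

# Illustrative sketch only, not the talk's model: egocentric navigation on a
# cube's surface (one cell per face) plus tabular Q-learning to reach a goal
# face. State = (face normal, heading); actions are agent-centred.
import random
from itertools import product

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def neg(v):
    return tuple(-x for x in v)

ACTIONS = ("forward", "left", "right")

def step(state, action):
    """Apply an egocentric move; the allocentric location is the face normal."""
    normal, heading = state
    if action == "forward":                    # roll over the edge onto the next face
        return (heading, neg(normal))
    if action == "left":                       # rotate heading about the surface normal
        return (normal, cross(normal, heading))
    return (normal, cross(heading, normal))    # right turn

# Enumerate all 24 (normal, heading) states of the cube surface.
AXES = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
STATES = [(n, h) for n, h in product(AXES, AXES)
          if sum(a*b for a, b in zip(n, h)) == 0]

GOAL = (0, 0, -1)   # reach the bottom face (an illustrative goal choice)

# Tabular Q-learning: reward 1 on reaching the goal face, 0 otherwise.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
for episode in range(2000):
    s = random.choice(STATES)
    for _ in range(20):
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda b: Q[(s, b)]))
        s2 = step(s, a)
        r = 1.0 if s2[0] == GOAL else 0.0
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if r:
            break

# Greedy rollout: a learned path of egocentric moves to the goal face.
s = ((0, 0, 1), (1, 0, 0))      # start on the top face, heading along +x
path = []
while s[0] != GOAL and len(path) < 10:
    a = max(ACTIONS, key=lambda b: Q[(s, b)])
    path.append(a)
    s = step(s, a)
print(path)   # e.g. ['forward', 'forward'] from the top face to the bottom

The greedy rollout at the end prints a short sequence of purely egocentric moves that reaches the goal face, a toy analogue of the goal-directed paths over the learned cuboid geometry described in the abstract.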