A cardinality-blind visual classifier can classify multiple tokens of the same type simultaneously. I have developed a convolutional neural network model which is cardinality blind and, alongside it, a model of visual attention that takes this phenomenon into account. The model turns out to help explain some aspects of visual processing in humans, and even of language. I will briefly present the model and then describe some of its results. Finally, I will describe a correlation between the visual model and the linguistic structure of noun phrases which suggests a much tighter connection between sensorimotor processing and language than is generally assumed.
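To illustrate the core idea of cardinality blindness (not the speaker's actual model, which is not described here), the sketch below builds a toy template-matching classifier whose decision is taken from the *maximum* response of each type's feature map. Because a max discards how many locations responded, the classifier returns the same type label whether one token or several tokens of that type appear in the image. The template shapes and image sizes are invented for the example.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2D valid cross-correlation on a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(image, templates):
    """Cardinality-blind classification: global max pooling over each
    type's response map keeps *whether* a type is present, not *how
    many* tokens of it there are."""
    scores = {name: conv2d_valid(image, k).max() for name, k in templates.items()}
    return max(scores, key=scores.get)

# Two hypothetical 3x3 token types.
templates = {
    "cross":  np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=float),
    "corner": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]], dtype=float),
}

def place(image, template, row, col):
    image[row:row + 3, col:col + 3] += template

# One image with a single cross, another with four crosses.
one_token = np.zeros((10, 10))
place(one_token, templates["cross"], 1, 1)

many_tokens = np.zeros((10, 10))
for r, c in [(0, 0), (0, 6), (6, 0), (6, 6)]:
    place(many_tokens, templates["cross"], r, c)

# The classifier's verdict is identical for one token and for four.
print(classify(one_token, templates), classify(many_tokens, templates))
```

The max pooling step is what makes the classifier blind to cardinality: replacing it with, say, a sum of responses would instead scale with the number of tokens, so the choice of pooling operation is where the phenomenon lives in this toy setting.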
Last modified: Thursday, 07-Oct-2010 13:17:24 NZDT