Action Perception in the Mind

Among the many actions we witness in the world, some are intuitively more similar than others. For example, running and walking seem quite related, while chopping vegetables seems very different. What determines this similarity structure – is it just how the actions look, or is there also a role for higher-level semantic information?

To investigate this, we’re running a series of behavioral studies, in which we measure participants’ action similarity spaces for a large set of everyday actions. In one version, participants complete an unguided sorting task, arranging short video clips on the screen so that videos that seem similar are close together, while videos that seem different are far apart.
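To give a concrete sense of how an arrangement becomes data, here is a minimal Python sketch of the general idea: one participant's 2D placements are converted into a matrix of pairwise dissimilarities. The clip names, positions, and variable names are illustrative stand-ins, not our actual stimuli or analysis pipeline.

```python
# Sketch: turn one participant's 2D arrangement of clips into a
# behavioral dissimilarity matrix (larger distance = judged less similar).
import numpy as np
from scipy.spatial.distance import pdist, squareform

# (x, y) screen positions where a participant placed each of six clips
positions = np.array([
    [0.10, 0.20],   # running
    [0.15, 0.25],   # walking
    [0.80, 0.70],   # chopping vegetables
    [0.75, 0.65],   # stirring a pot
    [0.50, 0.10],   # waving
    [0.55, 0.15],   # clapping
])

# Pairwise Euclidean distances between placements = behavioral dissimilarities
behavioral_rdm = squareform(pdist(positions, metric="euclidean"))
print(np.round(behavioral_rdm, 2))
```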

We then use computational modeling to try to predict how participants arrange the actions: do their similarity judgments track the low-level perceptual features present in the videos (e.g., orientation and spatial frequency), mid-level features that are more consciously recognizable (like the body parts used in the actions and whether the actions are directed at objects, people, or space), or high-level semantic categories?
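As a rough illustration of this model-comparison step, the sketch below correlates a behavioral dissimilarity vector with the dissimilarities predicted by three candidate feature spaces. Everything here is a placeholder – the feature matrices are random stand-ins, not the actual low-, mid-, and high-level models we test.

```python
# Sketch: compare a behavioral dissimilarity structure against several
# candidate feature spaces (representational-similarity-style analysis).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_videos = 6

# Placeholder feature spaces; in practice these might be image statistics,
# body-part / target annotations, and semantic category labels.
feature_spaces = {
    "low-level (e.g., spatial frequency)": rng.normal(size=(n_videos, 50)),
    "mid-level (body parts, targets)": rng.normal(size=(n_videos, 10)),
    "high-level (semantic categories)": rng.normal(size=(n_videos, 5)),
}

# Stand-in for the measured behavioral dissimilarities (condensed vector)
behavioral_dissim = pdist(rng.normal(size=(n_videos, 2)))

for name, features in feature_spaces.items():
    # Dissimilarities predicted by this feature space
    model_dissim = pdist(features, metric="correlation")
    rho, _ = spearmanr(behavioral_dissim, model_dissim)
    print(f"{name}: Spearman rho = {rho:.2f}")
```

Rank correlation is a common choice for this kind of comparison because it assumes only that the ordering of dissimilarities is meaningful, not their absolute scale.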

So far, we’ve found that behavioral similarity judgments are best predicted by higher-level features that capture what actions are for, rather than how they look. In addition, these judgments don’t seem to draw on neural representations in the visual cortex, but instead appear to be housed in higher-level regions, perhaps in the anterior temporal or frontal cortex.

If you’d like to read more, check out our CCN 2018 submission and VSS 2018 poster – and look out for a pre-print coming soon!

Leyla Tarhan

PhD in Cognitive Neuroscience, making a move into industry.