Action Perception in the Brain

When we see other people acting in the world, most of the visual system is engaged. This allows us to discriminate the direction of the actor’s movement, segment their body from the background, identify which body parts they’re using, pick out the objects, people, and spaces around them, and ultimately recognize the action. How is the visual cortex organized to accomplish these processing steps?

To answer this question, we used fMRI to measure neural responses to 60 everyday actions. These actions sample widely from our daily visual experience, including things like “running,” “cooking,” “cleaning,” and “knitting.” We made sense of these responses with voxel-wise encoding models, and found that visual cortex responses were well fit by two feature spaces: 1) the body parts involved in the actions and 2) the actions’ targets (an object, a person, the actor, the reachable space, or a distant location). Of course, not all regions of visual cortex are tuned to the same features. Data-driven clustering methods revealed evidence for five networks with distinct tuning patterns: one related to self- versus other-directed actions, and four related to actions’ interaction envelopes (the scale of space at which they affect the world).
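For readers curious what “voxel-wise encoding” and “data-driven clustering” look like in practice, here is a minimal sketch on simulated data. All sizes, noise levels, and cluster counts below are illustrative assumptions, not the actual study’s design: the idea is simply to fit one regularized regression per voxel (mapping feature spaces to that voxel’s responses), then cluster the fitted tuning profiles.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Illustrative sizes only (not the study's actual design):
# 60 action conditions, 12 features (e.g., body parts + targets), 200 voxels.
n_actions, n_features, n_voxels = 60, 12, 200

# Feature matrix: each action described by its feature values.
X = rng.normal(size=(n_actions, n_features))

# Simulate voxels whose true tuning falls into two underlying groups.
centers = rng.normal(size=(2, n_features))
labels_true = rng.integers(0, 2, size=n_voxels)
W_true = centers[labels_true] + 0.1 * rng.normal(size=(n_voxels, n_features))
Y = X @ W_true.T + 0.5 * rng.normal(size=(n_actions, n_voxels))  # voxel responses

# Voxel-wise encoding: fit one ridge model per voxel; its weights are
# that voxel's estimated tuning profile over the feature spaces.
W_hat = np.stack([Ridge(alpha=1.0).fit(X, Y[:, v]).coef_ for v in range(n_voxels)])

# Data-driven clustering of tuning profiles (k fixed here for simplicity;
# in practice the number of clusters is chosen by model selection).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(W_hat)
print(np.bincount(clusters))  # how many voxels land in each cluster
```

In the real study the features were annotated body parts and action targets rather than random numbers, but the pipeline has this same shape: features → per-voxel regression weights → clusters of voxels with similar tuning.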

Based on these findings, we propose that an action’s sociality and interaction envelope are two of the major representational joints in how actions are processed by visual regions of the brain.

This work raises a number of questions that we’re still working on; for example, what can we learn from what people are actually looking at when they watch our action videos? And how does this relate to the way that action representations are organized at a more cognitive level?

If you’d like to read more, check out our preprint on bioRxiv and our publication in Nature Communications!

Leyla Tarhan

PhD in Cognitive Neuroscience; writing about science, technology, and food.