We live in a three-dimensional world, but our eyes only receive two-dimensional images. How does our brain combine these images into a 3D percept?
In our laboratory, we aim to understand the neural mechanisms underlying visual perception using psychophysical experiments, neuroimaging (fMRI), and computational modeling.
Our recent findings suggest that 3D motion perception is based on two distinct signals: binocular disparity and monocular motion.
We are beginning to understand how these signals are processed, and how they are combined, so that we can successfully interact with our dynamic three-dimensional world.
Motion processing with two eyes in three dimensions
The movement of an object toward or away from the head is perhaps the most critical piece of information
an organism can extract from its environment. Such 3D motion produces horizontally opposite motions on the two retinae.
Standard models of primate visual processing assert that neurons early in the visual system combine monocular inputs into a single
cyclopean stream (lacking eye-of-origin information) and extract 1D (“component”) motions; later stages then extract
2D pattern motion from the cyclopean output of the earlier stage. We found, however, that 3D motion perception
is in fact affected by the comparison of opposite 2D pattern motions between the two eyes.
These results imply the existence
of eye-of-origin information in later stages of motion processing and therefore motivate the incorporation of such
eye-specific pattern-motion signals in models of motion processing and binocular integration.
A paper on our findings has been published in the Journal of Vision. You can also view a demonstration.
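The claim that motion toward the head produces horizontally opposite motions on the two retinae can be checked with a toy perspective-projection sketch in Python with NumPy (the eye separation, units, and trajectory here are illustrative, not taken from our experiments):

```python
import numpy as np

def retinal_x(point_z, eye_x, focal=1.0, point_x=0.0):
    """Horizontal image position (perspective projection) of a point
    at (point_x, point_z) for an eye at horizontal offset eye_x."""
    return focal * (point_x - eye_x) / point_z

# Eyes separated by ~6.5 cm; a point on the midline approaches the head.
half_iod = 3.25                    # cm, assumed half interocular distance
z = np.linspace(100.0, 50.0, 6)    # cm, decreasing viewing distance

left = retinal_x(z, -half_iod)     # left-eye image positions over time
right = retinal_x(z, +half_iod)    # right-eye image positions over time

v_left = np.diff(left)             # left-eye image motion, frame to frame
v_right = np.diff(right)           # right-eye image motion, frame to frame

# The two retinal motions are equal in size and opposite in direction:
print(np.allclose(v_left, -v_right))   # True
```

For a midline trajectory the symmetry is exact; off the midline the two retinal speeds differ, but their directions still oppose for motion through depth.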
Neural circuits underlying the perception of 3D motion
The macaque middle temporal area (MT) and the human MT complex (MT+) have well-established sensitivity to both 2D motion and position in depth. Yet evidence for sensitivity to 3D motion has remained elusive.
We showed that human MT+ encodes two binocular cues to 3D motion: one based on tracking changes in binocular disparity over time, and the other based on interocular comparisons of retinal velocities. By varying the orientation, spatiotemporal characteristics, and binocular properties of moving displays, we distinguished these 3D motion signals from their constituent binocular disparity and monocular retinal motion signals. Furthermore, an additional experiment confirmed MT+ selectivity for the direction of 3D motion.
These results demonstrate that
critical binocular signals for 3D motion processing are present in
MT+, revealing an
important and previously overlooked role for this well-studied brain area.
A paper on our findings has been published in Nature Neuroscience. You can also download the supplementary materials (.pdf) and view animations of the stimuli.
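The two binocular cues can be sketched numerically (Python with NumPy; the dot trajectory is illustrative). For a single tracked dot the two computations give the same number; the distinction lies in the order of operations: disparity first versus velocity first.

```python
import numpy as np

# Simulated monocular image positions of a dot moving through depth
# (arbitrary units; the trajectory is illustrative).
t = np.linspace(0.0, 1.0, 11)
xL = 0.5 * t          # left-eye image drifts rightward
xR = -0.5 * t         # right-eye image drifts leftward
dt = t[1] - t[0]

# Cue 1: changing disparity -- compute binocular disparity first,
# then differentiate it over time.
disparity = xL - xR
cd = np.diff(disparity) / dt

# Cue 2: interocular velocity difference -- compute each eye's retinal
# velocity first, then compare across the eyes.
vL = np.diff(xL) / dt
vR = np.diff(xR) / dt
iovd = vL - vR

# For a single dot the two cues agree numerically:
print(np.allclose(cd, iovd))   # True
```

For random-dot displays the cues come apart: disparity requires binocular dot pairings while velocity comparisons do not, which is what allows them to be manipulated independently in experiments.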
Percepts of motion through depth without percepts of position in depth
Accurately encoding 3D motion is a fundamental challenge for the primate visual system.
Prior work on the perception of static depth has employed binocularly 'anti-correlated' random dot displays, in which corresponding dots have opposite contrast polarity in the two eyes: a black dot in one eye is paired with a white dot in the other eye. Such displays have been shown to yield weak, distorted, or nonexistent percepts of position in depth.
We showed that the perception of motion through depth is not impaired by binocular anticorrelation. Although subjects were not able to accurately judge the depths of these displays, they were surprisingly able to perceive the direction of motion through depth.
A paper on our findings has been published in the Journal of Vision. A demonstration accompanies the paper.
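A minimal sketch of how an anticorrelated random-dot display can be constructed (Python with NumPy; the dot-list representation and parameter values are illustrative, not our stimulus code). Each dot keeps identical geometry in the two eyes, so the disparity signal is untouched; only the interocular contrast relationship flips.

```python
import numpy as np

rng = np.random.default_rng(0)

n_dots = 200
xy = rng.uniform(-1.0, 1.0, size=(n_dots, 2))    # shared dot positions
contrast = rng.choice([-1.0, 1.0], size=n_dots)  # black (-1) or white (+1)
disparity = 0.05                                 # horizontal offset, arb. units

# Correlated stereogram: both eyes see same-polarity dots, shifted
# horizontally to carry the disparity signal. Columns: x, y, contrast.
left_corr = np.column_stack([xy[:, 0] - disparity / 2, xy[:, 1], contrast])
right_corr = np.column_stack([xy[:, 0] + disparity / 2, xy[:, 1], contrast])

# Anticorrelated stereogram: identical geometry, but each dot's contrast
# polarity is inverted in one eye (a black dot pairs with a white dot).
left_anti = left_corr.copy()
right_anti = right_corr.copy()
right_anti[:, 2] *= -1.0
```

Rendering these dot lists frame by frame, with the disparity changing over time, yields the kind of display in which observers lose static depth but retain motion through depth.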
The stereokinetic effect
An ellipse rotating in the image plane can produce the 3D percept of a rotating rigid circular disk. In theory, the motion of the 3D percept cannot be uniquely recovered from the 2D stimulus: many rigid 3D motions project to the same rotating ellipse.
However, when we quantitatively estimated the perceived 3D motion, we found that it was nearly identical across observers, suggesting that all observers had the same 3D percept.
We hypothesized that, given the 2D stimulus, the visual system generates the rigid 3D percept whose motion is as slow and smooth as possible. The percepts predicted by these assumptions closely matched the experimental data, suggesting that the visual system resolves perceptual ambiguity in such stimuli using slow and smooth motion assumptions.
A paper on our findings has been published in Vision Research. A demonstration accompanies the paper.
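The slow-and-smooth idea can be illustrated with a numerical sketch (Python with NumPy; the geometry and parameters are illustrative and this is not the paper's actual model). Two rigid interpretations of the same rotating-ellipse outline are compared: a slanted disk that spins rigidly about the line of sight, and a disk that wobbles without spinning, so that only its tilt direction rotates. For moderate slants the wobble moves its material points more slowly, so a slowness prior favors it.

```python
import numpy as np

# Two rigid 3D interpretations of the same rotating-ellipse image
# (orthographic projection; all parameters illustrative).
omega = 1.0             # image-plane rotation rate
theta = np.deg2rad(35)  # slant of the interpreted circular disk
a = 1.0                 # disk radius
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
phi = np.linspace(0, 2 * np.pi, 100, endpoint=False)
T, P = np.meshgrid(t, phi)

c, s = np.cos(omega * T), np.sin(omega * T)
k, h = np.cos(theta), np.sin(theta)
u = np.stack([c, s, np.zeros_like(c)])             # in-plane tilt axis
w = np.stack([-s * k, c * k, np.full_like(c, h)])  # slanted disk axis

def mean_speed(psi):
    """Mean 3D point speed of the disk rim a*(cos(psi) u + sin(psi) w)."""
    q = a * (np.cos(psi) * u + np.sin(psi) * w)    # shape (3, phi, t)
    v = np.gradient(q, t[1] - t[0], axis=2)        # d/dt along time axis
    return np.sqrt((v ** 2).sum(axis=0)).mean()

# Interpretation A: the slanted disk spins rigidly about the line of
# sight (material points sweep along the rotating ellipse).
speed_spin = mean_speed(P)

# Interpretation B: the disk wobbles without spinning (the material
# phase counter-rotates); the projected outline is the same ellipse.
speed_wobble = mean_speed(P - omega * T)

# A slowness prior favors the wobbling disk:
print(speed_wobble < speed_spin)   # True
```

At very steep slants the ordering can reverse; the point of the sketch is only that projectively equivalent rigid interpretations can differ substantially in 3D speed, which is the ambiguity a slow-and-smooth prior resolves.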