New research out of the University of Pennsylvania is filling in gaps between two prevailing theories about how the brain generates our perception of the world.
One, called Bayesian decoding theory, says that to best recognize what’s in front of us, the brain combines the sensory signals it receives (for example, the scene we’re looking at) with our preconceived notions. Experience says that a school bus is typically yellow, so logically the one in front of us is more likely yellow than blue.
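As a rough illustration of that combination step, the sketch below applies Bayes’ rule to the bus example. The specific probabilities are invented for illustration, not taken from the study: a prior favoring yellow is multiplied by the likelihood from a noisy glimpse, and the result is normalized.

```python
# Sketch of Bayesian decoding (all numbers are illustrative assumptions,
# not values from the research): combine prior belief with noisy evidence.

# Prior: experience says most buses we encounter are yellow.
prior = {"yellow": 0.9, "blue": 0.1}

# Likelihood: a dim, noisy glimpse gives only weak evidence for "blue".
likelihood = {"yellow": 0.4, "blue": 0.6}

# Posterior is proportional to prior * likelihood (Bayes' rule), then normalized.
unnormalized = {c: prior[c] * likelihood[c] for c in prior}
total = sum(unnormalized.values())
posterior = {c: p / total for c, p in unnormalized.items()}

print(posterior)  # the strong prior pulls the estimate toward "yellow"
```

Even though the raw evidence slightly favors blue, the strong yellow prior dominates the posterior, which is the intuition behind the theory.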
The second theory, called efficient coding, holds that sensory resources, such as the neurons in the brain and the retinas in the eyes, do their jobs efficiently by processing and storing more precise information about the objects we view most frequently. If you see a yellow bus every day, your brain encodes an accurate picture of the vehicle, so the next time you see it you’ll know its color. Conversely, the brain won’t expend many resources on remembering a blue bus.
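One way to picture efficient coding is as a fixed resource budget divided according to how often each stimulus occurs. In this sketch the budget, the frequencies, and the inverse-square-root noise scaling are all modeling assumptions chosen for illustration, not details from the article.

```python
# Sketch of efficient coding (hypothetical numbers): a fixed budget of
# coding resources (e.g., neurons) is split in proportion to how often
# each stimulus is seen, so frequent stimuli are encoded more precisely.

frequency = {"yellow_bus": 0.95, "blue_bus": 0.05}  # how often each is seen
budget = 100  # total units of coding resource (an assumed figure)

# Allocate resources proportionally to frequency of experience.
resources = {s: budget * f for s, f in frequency.items()}

# Encoding noise shrinks as resources grow; 1/sqrt(r) scaling is a common
# modeling convention, not a claim made by the article.
noise = {s: 1 / r ** 0.5 for s, r in resources.items()}

print(resources)
print(noise)  # the frequently seen yellow bus is encoded with less noise
```

The rarely seen blue bus ends up with few resources and a noisier representation, matching the article’s point that the brain won’t spend much on it.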
Alan Stocker, an assistant professor in the psychology and the electrical and systems engineering departments, and Xue-Xin Wei, a psychology graduate student, combined these two concepts to create a new theory about how we perceive our world: How often we observe an object or scene shapes both what we expect of something similar in the future and how accurately we’ll see it.
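The combined theory can be sketched by letting a single frequency-of-experience table play both roles: it serves as the prior (what we expect) and it sets encoding precision (how accurately we see). Everything below is a hypothetical toy model of that coupling, not the researchers’ actual formulation; the precision rule and all numbers are assumptions.

```python
# Toy sketch of the combined idea (all values hypothetical): frequency of
# experience shapes both the prior and the reliability of the encoding.

freq = {"yellow": 0.9, "blue": 0.1}  # how often each bus color is seen

prior = freq  # expectation tracks experience (Bayesian decoding side)
precision = dict(freq)  # rarer stimuli are encoded less reliably
                        # (efficient coding side; a simple assumption)

# A genuinely blue bus produces strong raw evidence for "blue", but that
# evidence is transmitted with color-dependent reliability: unreliable
# encoding dilutes the evidence toward chance (0.5).
evidence = {"yellow": 0.2, "blue": 0.8}
likelihood = {c: evidence[c] * precision[c] + (1 - precision[c]) * 0.5
              for c in freq}

unnormalized = {c: prior[c] * likelihood[c] for c in freq}
total = sum(unnormalized.values())
posterior = {c: p / total for c, p in unnormalized.items()}

print(posterior)  # perception still leans "yellow" despite blue evidence
```

In this toy setup, the rarely seen blue bus is both unexpected and poorly encoded, so even strong blue evidence yields a perception that leans yellow, illustrating how frequency of experience shapes both expectation and accuracy at once.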