The predictive processing model is a cognitive framework for modeling how the brain synthesizes information from two channels:
- The “bottom-up” stream of raw sense data coming in through our senses for processing
- The “top-down” stream of predictions about the world
These two channels merge in a continuous interplay inside the brain, allowing us to make sense of the world; each stream continually feeds back into the other in a process we’d call “learning”.
This Slate Star Codex post is a review of Andy Clark’s Surfing Uncertainty, with a fascinating analysis of how the two streams interact. It’s a great summary of the concept and one of the best concise descriptions of how the brain works that I’ve ever seen. Here’s the post on the bottom-up / top-down interplay:
> The bottom-up stream starts out as all that incomprehensible light and darkness and noise that we need to process. It gradually moves up all the cognitive layers that we already knew existed – the edge-detectors that resolve it into edges, the object-detectors that shape the edges into solid objects, et cetera.
>
> The top-down stream starts with everything you know about the world, all your best heuristics, all your priors, everything that’s ever happened to you before – everything from “solid objects can’t pass through one another” to “e=mc^2” to “that guy in the blue uniform is probably a policeman”. It uses its knowledge of concepts to make predictions – not in the form of verbal statements, but in the form of expected sense data. It makes some guesses about what you’re going to see, hear, and feel next, and asks “Like this?” These predictions gradually move down all the cognitive layers to generate lower-level predictions. If that uniformed guy was a policeman, how would that affect the various objects in the scene? Given the answer to that question, how would it affect the distribution of edges in the scene? Given the answer to that question, how would it affect the raw-sense data received?
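One way to get an intuition for how the two streams merge is a precision-weighted average: the brain trusts the prediction or the sense data in proportion to how reliable each is. Here is a minimal toy sketch of that idea in Python (a Kalman-style update); the function name, numbers, and precisions are all invented for illustration, not anyone's actual model:

```python
def merge(prediction, sense_data, prior_precision, sensory_precision):
    """Combine a top-down prediction with bottom-up evidence,
    weighting each by its precision (inverse uncertainty)."""
    total = prior_precision + sensory_precision
    return (prior_precision * prediction +
            sensory_precision * sense_data) / total

# A confident prior barely budges toward surprising sense data...
print(merge(prediction=10.0, sense_data=20.0,
            prior_precision=9.0, sensory_precision=1.0))  # 11.0

# ...while an uncertain prior mostly defers to the senses.
print(merge(prediction=10.0, sense_data=20.0,
            prior_precision=1.0, sensory_precision=9.0))  # 19.0
```

The same belief (10.0) and the same evidence (20.0) produce very different updated beliefs depending only on which stream the system trusts more.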
The author then runs disorders and other phenomena through the predictive processing lens to see how they hold up: learning, dreaming, the placebo effect, priming, schizophrenia, and autism:
> Autistic people classically can’t stand tags on clothing – they find them too scratchy and annoying. Remember the example from Part III about how you successfully predicted away the feeling of the shirt on your back, and so manage never to think about it when you’re trying to concentrate on more important things? Autistic people can’t do that as well. Even though they have a layer in their brain predicting “will continue to feel shirt”, the prediction is too precise; it predicts that next second, the shirt will produce exactly the same pattern of sensations it does now. But realistically as you move around or catch passing breezes the shirt will change ever so slightly – at which point autistic people’s brains will send alarms all the way up to consciousness, and they’ll perceive it as “my shirt is annoying”.
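The “too precise” idea in the quote can be made concrete with the same precision machinery: surprise is the prediction error scaled by how exact the prediction claimed to be. This toy sketch is purely illustrative; the numbers and the two precision settings are invented, not taken from the book:

```python
def surprise(predicted, actual, precision):
    """Precision-weighted prediction error: how loudly a mismatch
    gets reported up the cognitive hierarchy."""
    return precision * abs(actual - predicted)

shirt_now, shirt_next = 1.00, 1.02   # the shirt shifts ever so slightly
typical_precision = 1.0              # tolerant prediction
overly_precise = 100.0               # "exactly the same next second"

print(surprise(shirt_now, shirt_next, typical_precision))  # ~0.02, ignored
print(surprise(shirt_now, shirt_next, overly_precise))     # ~2.0, alarm
```

The sensory change is identical in both cases; only the precision attached to the prediction differs, which is enough to turn an unnoticeable flutter into a signal that reaches consciousness.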