Most of the popular conversation around intelligence these days (at least in circles I follow) is about the artificial variety — AI, deep learning, neural networks, and the like. Neuroscientist Jeff Hawkins and his company Numenta have been studying intelligence since 2005, but with a focus on how the brain itself works. Hawkins believes that true “general AI” won’t be possible until we first understand, deeply, how the brain works.
He recently published a paper on the “Thousand Brains Theory of Intelligence”, which posits that the brain simultaneously generates predictions from multiple sensory streams and then assimilates them into models of the world:
To illustrate this concept in our newest paper, we use the example of a coffee cup. Imagine touching a coffee cup with one finger. As you move your finger over the cup, you sense different parts of it. You might feel the lip, then the curve of the handle, then the flatness of the bottom. Each sensation you receive is processed relative to its location on the cup. The curved handle of the mug is always in the same relative position on the cup; it is not a feature relative to you. At one moment it might be on your left and another moment on your right, but it is always in the same location on the cup. If you were asked to reach into a box and identify this object by touching it with one finger, you probably couldn’t with a single touch. But if you continued to move your finger over the object, you would integrate more sensory features from different locations, until you recognized with certainty that the only object containing this set of features at these locations is the coffee cup.
Now imagine the same mug, but this time you grasp it with multiple fingers at the same time. Whereas before you had to move your finger to recognize the cup, now you might be able to recognize it with a single grasp. The columns associated with each finger don’t have enough information on their own to identify the cup, but connections between columns allow them to reach the correct answer more quickly. In effect, the columns “vote” as to what is the most likely object, and quickly settle on cup. The same process occurs across senses, so cortical columns that process visual input can communicate with columns processing touch. In fact, there are connections in the cortex between low level sensory regions that don’t make sense in the classic hierarchical model of the cortex but do make sense in the Thousand Brains Theory.
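The “voting” idea in the excerpt above can be sketched as a toy model. To be clear, this is only an illustration of the general concept — columns intersecting their candidate sets — not Numenta’s actual algorithm, and the objects, features, and locations here are all made up:

```python
# Toy sketch of column "voting" (illustrative only -- not Numenta's
# actual model). Each known object maps locations to features.
OBJECTS = {
    "coffee cup": {"top": "lip", "side": "curved handle", "bottom": "flat"},
    "bowl":       {"top": "lip", "side": "smooth curve", "bottom": "flat"},
    "pen":        {"top": "clip", "side": "cylinder", "bottom": "point"},
}

def candidates(observations):
    """One 'column': every object consistent with the
    (location, feature) pairs this sensor has felt so far."""
    return {
        name
        for name, obj in OBJECTS.items()
        if all(obj.get(loc) == feat for loc, feat in observations)
    }

def vote(columns):
    """Columns vote by intersecting their candidate sets."""
    result = None
    for obs in columns:
        result = candidates(obs) if result is None else result & candidates(obs)
    return result

# One finger, one touch: ambiguous -- a flat bottom fits two objects.
print(sorted(vote([[("bottom", "flat")]])))      # ['bowl', 'coffee cup']

# Three fingers grasping at once: the intersection settles on the cup.
print(sorted(vote([
    [("top", "lip")],
    [("side", "curved handle")],
    [("bottom", "flat")],
])))                                             # ['coffee cup']
```

A single touch leaves several objects in play, just as in the one-finger example; the grasp resolves the ambiguity in one step because each column rules out different candidates.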
Sandy pointed me to this MIT AI podcast interview with Hawkins, which goes deep on the Thousand Brains Theory and a number of other interesting topics in neuroscience and brain research.