Brains are good, but not that good.


What’s wrong with the idea that the brain is an excellent learning machine? In the machine learning and artificial intelligence research literature, we almost invariably see the argument that we need better algorithms that can learn from fewer training examples, or that are one-shot learners. Along with that argument typically comes the assertion that humans learn well from few (or single) training examples. The problem is that this assertion ignores the neurophysiology of the brain.

Notably, neural firing in the brain is not constant. At both the cellular level and the network level, neuronal activity oscillates continuously. If we’re looking at connectivity between neurons, we might count each oscillation cycle as a training example. Naturally, the phrase “what fires together, wires together” comes to mind.

Say we get a “single five-second look” at an object. It isn’t just one training example; because of neuronal oscillation, the brain gets many more. Some neurons fire rhythmically in the absence of synaptic input, while others fire rhythmically in its presence. For the resting visual cortex, the frequency of this oscillation is in the 7.5 – 12.5 Hz range. Given our “single five-second look,” that works out to roughly 37 – 62 observations. That’s not single-shot. We can train a simple binary classification model using a multilayer perceptron with a single hidden layer on 37 – 62 training examples. It may not be very good at its job, but consider the scale. Consider that the brain isn’t especially good at it either after five seconds with an entirely novel observation.
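
To make the scale concrete, here is a minimal sketch of the kind of model described: a single-hidden-layer multilayer perceptron trained as a binary classifier on a sample of that size. The library choice (scikit-learn), the synthetic data, and all hyperparameters are illustrative assumptions, not anything from the argument above.

```python
# A minimal sketch, assuming scikit-learn and synthetic data:
# a single-hidden-layer MLP trained as a binary classifier on roughly the
# number of "observations" a 5 s look yields at 7.5-12.5 Hz (~37-62).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# ~50 samples standing in for the oscillation-driven "training examples"
X, y = make_classification(n_samples=50, n_features=20, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

print(f"training accuracy: {clf.score(X, y):.2f}")
```

With so few samples the model can fit its training set but will generalize poorly, which mirrors the point that neither the network nor the brain is especially good after a five-second exposure to something novel.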

It’s also more complicated than this. The structure of our neocortex is highly recurrent, with extensive feedback between layers. That feedback sustains additional firing even after a stimulus has been removed from our sensory field.
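
As a hypothetical illustration of that feedback effect (a toy model, with made-up sizes and weights), a small recurrent network keeps producing activity for several steps after its input is switched off, so downstream circuits effectively see extra exposures to the representation:

```python
import numpy as np

# Toy recurrent network (all sizes and weights are illustrative assumptions):
# activity decays gradually after the stimulus is removed rather than
# stopping the instant the input goes to zero.
rng = np.random.default_rng(0)
n = 16
W = rng.normal(scale=0.2, size=(n, n))            # recurrent "feedback" weights
stimulus = rng.normal(size=n)
x = np.zeros(n)

for t in range(15):
    drive = stimulus if t < 5 else np.zeros(n)    # stimulus present only for the first 5 steps
    x = np.tanh(x @ W + drive)                    # recurrent update
    print(t, round(float(np.linalg.norm(x)), 3))  # nonzero activity continues past t = 5
```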

So, we don’t have a single-shot learning algorithm in our brains. I’d go so far as to say that the mechanisms of memory in the human brain are inefficient. But our brains have evolved to work well in spite of those inefficiencies. The cyclical nature of neuronal firing, as an integral part of our brain’s perceptual sampling system, is an example of such an evolutionary strategy.

I don’t mean to imply that we shouldn’t research ways to reduce the number of training samples ML algorithms need. While computing power has increased dramatically and there is a resurgence of interest in (and usefulness of) neural network applications, we still need to do more with less. The big problem with Deep Learning is that it needs many examples to learn. But we need to stop saying that humans learn from one. Perhaps looking at things this way will help us design strategies for learning agents to do more with the information they have.
