The brain is extraordinarily complex. We are in desperate need of models that decode this complexity and allow us to speak about the brain’s fundamental dynamics simply, comprehensively, and predictively. I believe I have one, and it revolves around resonance.
Neural resonance is currently an underdefined curiosity at the fringes of respectable neuroscience research. I believe that over the next 10 years it will grow into a central part of the vocabulary of functional neuroscience. I could be wrong– but here's the what and why.
Resonance, in a nutshell
To back up a bit and situate the concept of resonance, consider how we create music. Every one of our non-electronic musical instruments operates via resonance– e.g., by changing fingering on a trumpet or flute, or moving a trombone slide to a different position, we change which frequencies resonate within the instrument. When we blow into the mouthpiece we produce a messy range of frequencies, but of those, the instrument's physical parameters amplify a very select set and dampen the rest, and out comes a clear, musical tone. Singing works similarly: we change the physical shape of our voice boxes, throats, and mouths to make certain frequencies resonate and others not.
Put simply, resonance involves the tendency of systems to emphasize certain frequencies or patterns at the expense of others, based on the system’s structural properties (what we call “acoustics”). It creates a rich, mathematically elegant sort of order, from a jumbled, chaotic starting point. We model and quantify resonance and acoustics in terms of waves, frequencies, harmonics, constructive and destructive interference, and the properties of systems which support or dampen certain frequencies.
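To make the wave-and-frequency framing concrete, here is the textbook toy case of resonance: a damped oscillator driven at different frequencies responds strongly only near its natural frequency. This is a minimal sketch of my own– the formula is standard physics, but the function name and every numeric parameter are purely illustrative, and nothing here is brain-specific yet:

```python
import numpy as np

def steady_state_amplitude(drive_hz, natural_hz=10.0, damping_ratio=0.05):
    """Steady-state response of a damped oscillator driven at drive_hz.

    The classic resonance curve: the response is largest when the driving
    frequency sits near the system's natural frequency. All parameter
    values here are illustrative, not measured from anything.
    """
    w = 2 * np.pi * drive_hz
    w0 = 2 * np.pi * natural_hz
    return 1.0 / np.sqrt((w0**2 - w**2) ** 2 + (2 * damping_ratio * w0 * w) ** 2)

# A "messy" input containing many frequencies at equal strength comes out shaped:
# components near 10 Hz are amplified, the rest are relatively dampened.
for f in [2, 5, 8, 10, 12, 15, 20]:
    gain = steady_state_amplitude(f) / steady_state_amplitude(2)
    print(f"{f:>4} Hz component -> gain relative to the 2 Hz component: {gain:6.1f}")
```

The analogy pursued in the rest of this post swaps the driving force for incoming patterns of neural firing, and the oscillator's tuning for a region's 'neuroacoustics'.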
So what is neural resonance?
Literally, ‘resonance which happens in the context of the brain and neurons’, or the phenomenon where the brain’s ‘acoustics’ prioritizes certain patterns, frequencies, and harmonics of neural firings over others.
Examples would include a catchy snippet of music or a striking image that gets stuck in one's head, with the neural firing patterns that represent these snippets echoing or 'resonating' inside the brain in some fashion for hours on end.[1] Similarly, though ideas enter the brain differently, they often get stuck, or "resonate," as well– see, for instance, Dawkins on memes. In short, neural resonance is the tendency for some patterns in the brain (ideas) to persist more strongly than others, due to the mathematical interactions between the patterns of neural firings into which perceptions and ideas are encoded, and the 'acoustic' properties of the brain itself.
But if we want to take the concept of neural resonance as more than a surface curiosity– as I think we should– we can make a deeper analogy to the dynamics of resonant and acoustic systems by modeling information as actually resonating in the brain. The claim is that there are deep, rich, functionally significant, and semi-literal parallels between many aspects of brain dynamics and audio theory. Just as sound resonates in and is shaped by a musical instrument, ideas enter, resonate in, are shaped by, and ultimately leave their mark on our brains.
I thought the brain was a computer, not a collection of resonant chambers?
Yes; I'm arguing that the brain computes via resonance and essentially acoustical mechanics.
I'm basically arguing that we should try to semi-literally adapt the equations we've developed for sound and music to the neural context, and that most neural phenomena can be explained pretty darn well in terms of these equations. In short:

The brain functions as a set of connected acoustic chambers. We can think of it as a multi-part building, with each room tuned to make slightly different harmonies resonate, and with doors opening and closing all the time so these harmonies constantly mix. (Sometimes tones carry through the walls to adjacent rooms.) The harmonies are thoughts; the 'rooms' are brain regions.

Importantly, the transformations which brain regions apply to thoughts are akin to the transformations a specific room would apply to a certain harmony. The acoustics of the room– i.e., the 'resonant properties' of a brain region– profoundly influence the pattern occupying it. The essence of thinking, then, is letting these patterns enter our brain regions and resonate/refine themselves until they ring true.

My basic argument is that you can explain basically every important neural dynamic within the brain in terms of resonance, and that it's a comprehensive, generative, and predictive model– much more so than current 'circuit' or 'voting' based analogies.
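Before getting into specific dynamics, here is a minimal toy sketch of the 'connected chambers' picture above: two damped resonators tuned to different frequencies, with a 'door' (a coupling term) that opens and closes. This is my own illustration– every variable name and numeric value is an assumption chosen only to show the qualitative behavior, not anything fitted to neural data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "rooms" (regions), each tuned to a different natural frequency, connected
# by a "door" (a coupling term) that opens and closes over time.
dt, steps = 0.001, 20000                      # 20 seconds of simulated time
w1, w2 = 2 * np.pi * 8.0, 2 * np.pi * 13.0    # each room's natural frequency (rad/s)
damping = 2.0
x1 = v1 = x2 = v2 = 0.0
trace1, trace2, door_trace = [], [], []

for i in range(steps):
    door_open = (i // 5000) % 2 == 1          # the door toggles every 5 seconds
    k = 40.0 if door_open else 0.0            # coupling strength while the door is open
    drive = rng.normal(0.0, 5.0)              # messy broadband input arriving in room 1
    a1 = -w1**2 * x1 - damping * v1 + k * (x2 - x1) + drive
    a2 = -w2**2 * x2 - damping * v2 + k * (x1 - x2)
    v1 += a1 * dt; x1 += v1 * dt              # semi-implicit Euler step
    v2 += a2 * dt; x2 += v2 * dt
    trace1.append(x1); trace2.append(x2); door_trace.append(door_open)

trace1, trace2, door_trace = map(np.array, (trace1, trace2, door_trace))

# Room 1 shapes the messy input toward its own tuning (a spectral peak near 8 Hz)...
spectrum = np.abs(np.fft.rfft(trace1))
freqs = np.fft.rfftfreq(len(trace1), dt)
print("room 1 dominant frequency:", round(float(freqs[np.argmax(spectrum[1:]) + 1]), 1), "Hz")

# ...and room 2 is markedly quieter while the door is closed than while it is open.
rms_open = np.sqrt(np.mean(trace2[door_trace] ** 2))
rms_closed = np.sqrt(np.mean(trace2[~door_trace] ** 2))
print("room 2 is roughly", round(float(rms_open / rms_closed), 1), "times louder while the door is open")
```

With that toy picture in mind, here are some of the dynamics the model speaks to: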
– Sensory preprocessing filters: as information enters the brain, it's encoded into highly time-dependent waves of neural discharges. The 'neuroacoustic' properties of the brain– i.e., which kinds of wave-patterns are naturally amplified (resonate) or dampened by the properties of the neural networks relaying the pattern– act as a built-in, 'free' signal filter. For instance, much of the function of the visual and auditory cortices emerges from the sorts of patterns they amplify or dampen.
– Competition for neural resources: much of the dynamics of the brain centers around thoughts and emotions competing for neural resources, and one of the central challenges for models purporting to describe neural function is to provide a well-defined fitness condition for this competition. Under the neural resonance / neuroacoustics model, this is very straightforward: patterns which resonate in the brain acquire more resources (territory), and maintain them better, than patterns that resonate less well.
– What happens when we're building an idea: certain types of deliberative or creative thinking may be analogous to tweaking a neural pattern's profile such that it resonates better.
– How ideas can literally collide: if two neural patterns converge inside a brain region, one of several overlapping things may occur: one pattern resonates more dominantly and swamps the other; the two interfere destructively; they interfere constructively; or a new idea emerges directly from the wave interference pattern.
– How ideas change us: since neural connections are conditioned by activity, patterns which resonate more change more neural connections. I.e., the more a thought, emotion, or even snippet of music persists in resonating and causing neurons to fire in the same pattern, the more it leaves its mark on the brain. Presumably, having a certain type of resonance occur in the brain primes the brain's neuroacoustics to make patterns like it more likely to resonate in the future (see, for instance, sensitization aka kindling).[2] You become what resonates within you.
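The "you become what resonates within you" point can be sketched with a toy Hebbian rule. This is my own illustration, not the post's model: the learning rule, pattern sizes, and repetition counts are all assumptions. The idea it shows is simply that patterns which keep echoing strengthen the connections they use, which in turn makes the network amplify them– and things like them– more readily in the future.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
W = np.zeros((n, n))

# Two candidate firing patterns; pattern A "resonates" (keeps echoing) while
# pattern B fires briefly and fades. The repetition counts are purely illustrative.
pattern_a = rng.standard_normal(n)
pattern_b = rng.standard_normal(n)

def hebbian_echo(W, pattern, repeats, lr=0.01):
    """Each echo of a pattern strengthens the connections it uses (a simple Hebbian rule)."""
    for _ in range(repeats):
        W = W + lr * np.outer(pattern, pattern)
    return W

W = hebbian_echo(W, pattern_a, repeats=50)   # resonates for a long time
W = hebbian_echo(W, pattern_b, repeats=2)    # barely persists

def gain(W, pattern):
    """How strongly the learned network amplifies a pattern."""
    return pattern @ W @ pattern / (pattern @ pattern)

# The long-resonating pattern has carved a groove: the network now amplifies it
# (and anything similar to it) far more than the pattern that didn't persist.
similar_to_a = pattern_a + 0.3 * rng.standard_normal(n)
print("gain for pattern A:        ", round(float(gain(W, pattern_a)), 2))
print("gain for something like A: ", round(float(gain(W, similar_to_a)), 2))
print("gain for pattern B:        ", round(float(gain(W, pattern_b)), 2))
```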
In short, resonance, or the tendency for certain neural firing patterns to persist due to how their frequency- and wave-related properties interact with the features of the brain and each other, is a significant factor in the dynamics of how the brain filters, processes, and combines signals. However, we should also keep in mind that:
Resonance in the brain is an inherently dynamic property because the brain actively manages its neuroacoustics!
I've argued above that our 'neuroacoustics'– that which determines what sorts of patterns resonate in our heads and get deeply ingrained in our neural nets– is important and actively shapes what goes on in our heads. But this is just half the story: we can't get from static neuroacoustic properties to a fully-functioning brain, since, if nothing else, resonant patterns would get stuck. The other, equally important half is that the brain has the ability to contextually amplify, dampen, filter, and in general manage its neural resonances– or in other words, to contextually shape its own neuroacoustics.
Some of the logic of this management may be encoded into regional topologies and intrinsic properties of neuron activation, but I’d estimate that the majority (perhaps 80%) of neuroacoustic management occurs via the contextual release of specific neurotransmitters, and in fact this could be said to be their central task.
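As a toy illustration of this 'neuroacoustic management'– again my own sketch, with an invented mapping from a generic modulator_level parameter to resonance parameters, and no claim about which neurotransmitter does what– a contextual modulator can retune which input frequencies a region amplifies:

```python
import numpy as np

def regional_gain(input_hz, natural_hz=10.0, modulator_level=0.0):
    """Toy 'neuroacoustic' response of one region to an input frequency.

    The hypothetical modulator_level stands in for contextual neurotransmitter
    release: it retunes which frequency the region favors and how sharply it
    resonates. The mapping from transmitters to these two parameters is an
    assumption made purely for illustration.
    """
    tuned_hz = natural_hz + 4.0 * modulator_level     # modulator shifts the resonant peak
    damping_ratio = 0.3 - 0.28 * modulator_level      # ...and sharpens the resonance
    w, w0 = 2 * np.pi * input_hz, 2 * np.pi * tuned_hz
    return 1.0 / np.sqrt((w0**2 - w**2) ** 2 + (2 * damping_ratio * w0 * w) ** 2)

# The same incoming pattern (a 14 Hz rhythm, say) is amplified more and more
# strongly as the "modulator" is released -- the region's acoustics have changed,
# not the input.
baseline = regional_gain(14.0)
for level in (0.0, 0.5, 1.0):
    print(f"modulator level {level}: gain at 14 Hz relative to baseline =",
          round(float(regional_gain(14.0, modulator_level=level) / baseline), 1))
```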
With regard to what manages the managers: presumably neurotransmitter release could be tightly coupled with the current resonance activity in various brain regions, but the story of serotonin, dopamine, and norepinephrine may be somewhat complicated, as it's unclear how much of neurotransmitter activity is a stateless, walk-forward process. The brain's meta-management may be a phenomenon resistant to simple rules and generalities.
A key point regarding the brain managing its neuroacoustics is that how good the brain is at doing so likely varies significantly between individuals, and this variance may be at the core of many mental phenomena. For instance:
– That which distinguishes both gifted learners and high-IQ individuals from the general populace may be that their brains are more flexible in manipulating their neuroacoustic properties to resonate better to new concepts and abstract situations, respectively. Capacity for empathy may be shorthand for 'ability to accurately simulate or mirror other people's neuroacoustic properties'.
– Likewise, malfunctions and gaps in the brain's ability to manage its neural resonance, particularly in matching the proper neuroacoustic properties to a given situation, may be a large part of the story of mental illness and social dysfunction. Autism spectrum disorders, for instance, may be almost entirely caused by malfunctions in the brain's ability to regulate its neuroacoustic properties.
– If we can exercise and improve the brain's ability to manage its neural resonance (perhaps with neurofeedback?), all of these things (IQ, ability to learn, mental health, social dexterity) should improve.
– Mood may be another word for neuroacoustic configuration. A change in mood implies a change in which ideas resonate in one's mind. Maintaining a thought or emotion means maintaining one's neuroacoustic configuration. (See addendum on chord structures and Depression.)
– 'Prefrontal biasing', or activity in the prefrontal cortex altering competitive dynamics in the brain, may be viewed in terms of resonance: put simply, the analogy is that the PFC is located at a leveraged acoustic position (e.g., the tuning pegs of a guitar) and has a strong influence on the resonant properties of many other regions.
– Phenomena such as migraines may essentially be malfunctions in the brain's neuroacoustic management– a runaway resonance.
To recap, this model offers a unified account of:
– How competition for neural resources is resolved;
– How complex decision-making abilities may arise from simple neural properties;
– How ideas may interact with each other within the brain;
– That audio theory may be a rich source of starting points for equations to model information dynamics in the brain;
– What the maintenance of thought and emotion entails, and why a change in mood implies a change in thinking style;
– How subconscious thought may(?) be processed;
– What intelligence is, and how there could be different kinds of intelligence;
– How various disorders may naturally arise from a central process of the brain (and that they are linked, and perhaps can be improved by a special kind of brain exercise);
– The division of function between neurons and neurotransmitters;
– The mechanism by which memes can be ‘catchy’ and how being exposed to memes can create a ‘resonant beachhead’ for similar memes;
– The mechanism by which neurofeedback can/should be broadly effective.

There are few holistic theories of brain function which cover half this ground.

Tests which have the ability to falsify or support models of neural function (such as this one) aren't available now, but may arise as we get better at simulating brains and such. I look forward to that– it would certainly be helpful to be able to more precisely quantify things such as neural resonance, neuroacoustics, and interference patterns within the brain.
A few things this model has going for it:

1. Fundamental simplicity– it's one of the few models of neural function which can actually provide an intuitive answer to the question of what's going on in someone's brain.
2. Emergent complexity– from a small handful of concepts (or just one, depending on how you count it), the elegant complexity of neural dynamics emerges.
3. Ideal level of abstraction– this is a model we can work downward from, e.g., as a sanity check for neural simulation, since the resonant properties of neural networks are tied to function (the Blue Brain project is doing this to some extent), and upward from, to generate new explanations and predictions within psychology, since resonance appears to be a central and variable element of high-speed neural dynamics and of the formation and maintenance of thought and emotion.
If it's a good, meaningful model, we should be able to generate novel hypotheses to test. I have outlined some in my description above (e.g., that many, diverse mental phenomena are based on the brain's ability to manage its neural resonance, and that if we improve this ability in one regard it should have significant spillover into others). There will be more. I wish I had the resources to generate and test specific hypotheses arising from this model.

ETA 10 years.
(2) If we model a neural region as a fairly self-contained resonant chamber (with limited but functionally significant leakage), the time it takes a neural signal, following an 'average' neural path, to get to the opposite edge of the chamber and return will be a key property of the system. (Sound travels in fairly straight lines; neural "waves" do not. This sort of analysis will be non-trivial, and will perhaps need to be divorced from a strict spatial interpretation. And we may need to account for chemical communication.) Each brain region has a slightly different profile in this regard, and this may help shape what sorts of information come to live in each brain region.
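For what it's worth, the round-trip arithmetic itself is simple even if the biology is not. Here is the back-of-the-envelope version, with both input numbers deliberately made up purely to show the calculation (real conduction velocities and effective path lengths vary over orders of magnitude, and the caveats above about non-straight 'paths' apply):

```python
# Treating a region like a resonant pipe whose fundamental mode is set by the
# signal's round-trip time. Both numbers below are assumptions chosen only to
# illustrate the arithmetic, not measured values.
path_length_m = 0.05          # assumed one-way "average" path across the region
conduction_m_per_s = 5.0      # assumed effective conduction speed along that path

round_trip_s = 2 * path_length_m / conduction_m_per_s
fundamental_hz = 1 / round_trip_s

print(f"round-trip time: {round_trip_s * 1000:.0f} ms")       # -> 20 ms
print(f"implied fundamental 'mode': {fundamental_hz:.0f} Hz")  # -> 50 Hz
# A region with a longer effective path or slower conduction would favor
# correspondingly slower rhythms.
```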
Addendum, 10-11-10: Chord Structures
Major chords are emotively associated with contentment; minor chords with tragedy. If my resonance analogy is correct, there may be a tight, deeply structural analogy between musical theory, emotion, and neural resonance. I.e., musical chords are mathematically homologous to patterns of neural resonance, wherein major and minor forms exist and are almost always associated with positive and negative affect, respectively.
Now, it’s not clear whether there’s an elegant, semi-literal correspondence between e.g., minor chords, “minor key” neural resonances, and negative affect. There could be three scenarios:
1. No meaningful correspondence exists.
2. There isn’t an elegant mathematical parallel between e.g., the structure of minor chords and patterns of activity which produce negative affect in the brain, but within the brain we can still categorize patterns as producing positive or negative affect based on their ‘chord’ structure.
3. Musical chords are deeply structurally analogous to patterns of neural resonance, in that e.g., a minor chord has a certain necessary internal mathematical structure that is replicated in all neural patterns that have a negative affect.
The answer is not yet clear. But I think that the incredible sensitivity we have to minute changes in musical structure– and the ability of music to so profoundly influence our mood– is evidence for (3), that musical chords and the structure of patterns of neural impulses are deeply analogous, and that knowledge from one domain may elegantly apply to the other. We're at a loss as to how and why humans invented music; it's much less puzzling if music is a relatively elegant (though simplified) expression of what's actually going on in our heads. Music may be an admittedly primitive but exceedingly well-developed expression of neuro-ontology, hiding right under our noses.
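For concreteness, the standard just-intonation ratios already hint at the kind of 'internal mathematical structure' scenario (3) points to: a major triad's notes sit in the ratio 4:5:6, a minor triad's in 10:12:15, so the major chord's notes are low harmonics of a nearby shared fundamental while the minor chord's fundamental sits much further away. Whether anything like this carries over to neural patterns and affect is exactly the open question above; the sketch below only works out the music-theory side (the as_harmonics helper and the 261.63 Hz root are my own choices for illustration).

```python
from fractions import Fraction
from math import gcd

# Just-intonation interval ratios relative to the root (standard music-theory values).
major = [Fraction(1), Fraction(5, 4), Fraction(3, 2)]   # root, major third, perfect fifth
minor = [Fraction(1), Fraction(6, 5), Fraction(3, 2)]   # root, minor third, perfect fifth

def as_harmonics(chord, root_hz=261.63):   # 261.63 Hz ~ C4, just to put numbers on it
    """Express a chord's notes as integer harmonics of one shared fundamental.

    The lower those integers, the sooner the chord's combined waveform repeats --
    one standard (if simplified) account of consonance.
    """
    lcm = 1
    for r in chord:
        lcm = lcm * r.denominator // gcd(lcm, r.denominator)
    harmonics = [int(r * lcm) for r in chord]
    return harmonics, root_hz / lcm

print("major triad:", as_harmonics(major))   # -> ([4, 5, 6], 65.4...): low harmonics, nearby fundamental
print("minor triad:", as_harmonics(minor))   # -> ([10, 12, 15], 26.2...): higher harmonics, distant fundamental
```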
http://en.wikipedia.org/wiki/Adaptive_resonance_theory
Anonymous said: Bill W., a co-founder of Alcoholics Anonymous, based the original twelve-step program on "one drunk talking to another." I have seen brain pictures in which images of two people facing each other have the same regions of the brain light up. I suspect that is what happens when two alcoholics meet; that the same regions of the brain of each resonate. Since alcoholism affects less than 10% of the population, it makes sense that non-alcoholics do not produce the same type of brain waves that alcoholics do and the theory of neural resonance explains this very well. Thanks for the column.
Interesting… I could see that. Thanks for the comment.
Hi Mike Johnson,
You and I have been thinking along parallel lines! Resonance in the brain? Exactly right!
Check out this on-line book I am working on…
http://sharp.bu.edu/~slehar/HRezBook/HRezBook.html
And this paper…
http://sharp.bu.edu/~slehar/ConstructiveAspect/ConstructiveAspect.html
Steve Lehar
slehar@gmail.com
It seems to me that you are arguing for a structured dynamical model to characterize neural behavior. Totally agreed. Structured dynamical models are the most sophisticated tools we currently have for describing complex systems. Historically, we (humans) have analogized that the brain is like our most sophisticated technology, and the current technology is no exception. I would bet that if quantum computers ever become practical, we will all be claiming that the brain is a structured dynamical quantum computer. Insofar as the analogy with acoustics goes, acoustics is a fine example of a structured dynamical system, but I fail to see anything particularly unique about it. What about actual language (not the acoustics of it)? Or the genome. A different time scale, but still certainly structured and dynamic. I think anything we recognize as reasonably complex would be fine. One absent piece in terms of the acoustics analogy is that acoustics is fairly deterministic. To be able to predict neural activity at the same precision as one could predict acoustics, one would require an insanely large amount of information (and it still perhaps wouldn't be possible, given the possibility of quantum effects at ion channels and such – note that I'm not talking about "Hameroff"-style quantum effects, but actually measured quantum effects within the ion channels that govern neural activity). So, I would argue for using a formalism that explicitly considers and acknowledges uncertainty in the dynamics. Thus, I would argue for a *statistical* structured dynamical model today, with the caveat that when I know of some cooler technology, I will likely want to change my claim.
Hmm– I agree and disagree. :) Putting the label “acoustics” on the brain is sorta misleading, since by itself it’s only really accurate as a general statement that we need a structured dynamical model (of which ‘acoustics’ is a relatively simple example). But the term “resonance” in the context of the brain is more meaningful. It’s ‘just’ shorthand for a property of a SDS, but in my opinion it’s a pretty elegant/”non-leaky” high-level abstraction. Very useful for quantifying and simplifying some important functional properties of neural circuits* in a fairly tractable, intuitive way.
So I would say the core thrust of this idea is essentially about resonance. Acoustics enters the picture in that one key way we traditionally analyze resonance is we analyze the acoustic properties of the signal medium/context — so I’d define the meaning of ‘neural acoustics’ in relation to ‘neural resonance’. I don’t think I was very clear about this conceptual hierarchy in my writeup.
I’m not sure if resonance applies to the other structured dynamical contexts you mentioned. I’d have to think about that.
*You bring up the issue of acoustics being a more-or-less ‘deterministic science’, whereas the brain is highly dynamical or contextual. I think that’s actually a strength of this theory, that a key part of how the brain accomplishes some types of computation is by changing the profile of what patterns do resonate within certain regions. But yeah, you’d need some sort of notation or structure that could allow for this…
I feel like I should toss in the obligatory disclaimer that modeling specific neural circuits is always going to be more predictive than mid-level abstractions like neural resonance. But until we get to that point on a large scale, I think there’s quantitative and qualitative information to be extracted via the lens of resonance/acoustics. Or maybe a better way to put it would be, they’re efficient variables for predicting functional dynamics or typing neural circuits.
(Kinda reminds me of IBM's cat cortex simulation, which included a virtual EEG so they could 'sanity check' their emergent statistical behavior… pretty amazing! Even though they definitely oversold how their sim complexity compared to a real brain…)
you may appreciate this: http://mindsbasis.blogspot.com/2014/06/heat-as-sound-neural-impulses-as-sound.html?showComment=1459997156699#c2275447020880545307