Intellectual Lineages

One of the most challenging things I’ve done lately is to chart out the research lineages I endorse for understanding my work at the Qualia Research Institute — basically, to try to enumerate the existing threads of research we’ve woven together to create our unique approach. Below is the current list, focusing on formalism, self-organized systems, and phenomenology:


Formalism lineages:

The brain is very complicated, the mind is very complicated, and the mapping between these two complicated things seems very murky. How can we move forward without getting terribly confused? And what should a formal theory of phenomenology even try to do? These are not easy questions, but the following work seems to usefully constrain what answers here might look like:

David Marr is most famous for Marr’s Three Levels (developed with Tomaso Poggio), which describe “the three levels at which any machine carrying out an information-processing task must be understood:”

  • Computational theory: What is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it can be carried out?
  • Representation and algorithm: How can this computational theory be implemented? In particular, what is the representation for the input and output, and what is the algorithm for the transformation?
  • Hardware implementation: How can the representation and algorithm be realized physically? [Marr (1982), p. 25]

This framework sounds simple, but it’s remarkably important: arguably most of the confusion in neuroscience (and in phenomenology research) comes from starting a sentence on one Marr-Poggio level and finishing it on another, and the framework gives people a way to debug that confusion.
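
To make the levels concrete, here’s a toy sketch in Python pinning one familiar information-processing task, sorting, to each level; the task and the comments are my illustration, not Marr’s own example:

```python
# Level 1 -- computational theory: WHAT is computed and WHY.
# Goal: given a sequence, return the same items in ascending order,
# because ordered data supports fast downstream lookup.

# Level 2 -- representation and algorithm: HOW, abstractly.
# Representation: a Python list of mutually comparable items.
# Algorithm: insertion sort, one of many algorithms satisfying Level 1.
def insertion_sort(xs):
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

# Level 3 -- hardware implementation: the physical substrate realizing the
# algorithm (here, the Python interpreter running on silicon). The same
# Level 1 goal could be met by other algorithms on other hardware, which is
# exactly why keeping the levels separate prevents confusion.
print(insertion_sort([3, 1, 2]))  # [1, 2, 3]
```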

Giulio Tononi offered the first “full-stack” paradigm for formalizing consciousness with Integrated Information Theory (IIT). It’s notable as a viable empirical theory of consciousness in its own right, as a clear enunciation of what the ‘proper goal’ of a theory of consciousness should be (determining a mathematical object isomorphic to the phenomenology of a system), and as a collection of clever tools for approaching the many sub-problems of consciousness. Tononi’s lineage traces back to Nobel laureate Gerald Edelman, a distinction shared by many stars of modern neuroscience such as Karl Friston, Olaf Sporns, and Anil K. Seth.
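
As a loose gesture at IIT’s core intuition, that a system is ‘integrated’ to the extent the whole carries information beyond its parts, here’s a toy whole-versus-parts calculation using mutual information. Note this is only illustrative: actual Φ is defined over a system’s cause-effect structure and minimized across partitions, and the joint distribution below is made up:

```python
import numpy as np

# Hypothetical joint distribution over the states of two binary units.
p_joint = np.array([[0.4, 0.1],
                    [0.1, 0.4]])
p_x = p_joint.sum(axis=1)  # marginal of unit X
p_y = p_joint.sum(axis=0)  # marginal of unit Y

# Mutual information: how much the whole (joint) says beyond the parts
# (product of marginals) -- a crude stand-in for 'integration'.
mi = sum(p_joint[i, j] * np.log2(p_joint[i, j] / (p_x[i] * p_y[j]))
         for i in range(2) for j in range(2))
print(f"toy integration measure: {mi:.3f} bits")
```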

Emmy Noether provided the theoretical basis for formalizing invariants in physical systems through Noether’s theorem: ‘every symmetry in a system’s equations corresponds to a conserved quantity in that system (and vice-versa).’ This formed the seed for modern gauge theory, the mathematical basis for modeling conservation laws for energy, mass, momentum, and electric charge.

Noether’s work may provide phenomenology at least two things:

  1. A concrete mathematical tool for formalizing invariance relationships in subjective experience, in the form of gauge theory (a minimal symbolic check of the underlying theorem follows this list);
  2. A research aesthetic for what kinds of approaches have produced particularly powerful formalisms in the past: e.g., a focus on determining the invariants of a system, constructing explanations in terms of the presence or absence of mathematical symmetries, and, in general, finding things people are already doing implicitly and describing them explicitly.
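
As promised above, here’s that symbolic check for the textbook special case of time-translation symmetry: a harmonic-oscillator Lagrangian with no explicit time dependence, whose Noether charge (the energy) is conserved whenever the equation of motion holds. This is a standard exercise sketched with sympy, not anything specific to phenomenology:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')(t)
xdot = sp.diff(x, t)

# Harmonic-oscillator Lagrangian: no explicit dependence on t,
# i.e. the equations are symmetric under time translation.
L = m / 2 * xdot**2 - k / 2 * x**2

# Euler-Lagrange equation of motion: d/dt(dL/dx') - dL/dx = 0
eom = sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x)

# Noether charge for time-translation symmetry: the energy E = x'*(dL/dx') - L
E = xdot * sp.diff(L, xdot) - L

# dE/dt vanishes whenever the equation of motion holds ('on-shell'):
print(sp.simplify(sp.diff(E, t) - xdot * eom))  # -> 0
```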

Self-organization lineages:

Traditionally, neuroscience has been concerned with cataloguing the brain: collecting discrete observations about anatomy, cyclic activity patterns (EEG frequencies), cell types, and neurotransmitters, and trying to match these facts with functional stories. However, it’s increasingly clear that these sorts of neat stories about localized function are artifacts of the tools we’re using to look at the brain, not of the brain’s underlying computational structure.

What’s the alternative? Instead of centering our exploration on the sorts of raw data our tools are able to gather, we can approach the brain as a self-organizing system, something which uses a few core principles to both build and regulate itself. If we can reverse-engineer these core principles and use the tools we have to validate these bottom-up models, we can both understand the internal logic of the brain’s algorithms (the how and why of what the brain does) and find more elegant intervention points for altering it.

Karl Friston notes that adaptive systems such as the brain must resist a natural tendency to disorder (i.e., they must ‘swim upstream’ against the second law of thermodynamics to maintain homeostasis) and argues that systems self-organizing against this constraint will exhibit certain predictable properties. In Friston’s words:

In short, the long-term (distal) imperative — of maintaining states within physiological bounds — translates into a short-term (proximal) avoidance of surprise. Surprise here relates not just to the current state, which cannot be changed, but also to movement from one state to another, which can change. This motion can be complicated and itinerant (wandering) provided that it revisits a small set of states, called a global random attractor, that are compatible with survival (for example, driving a car within a small margin of error). It is this motion that the free-energy principle optimizes.

Friston’s free-energy principle forms the core of a ‘full-stack’ model of how the brain self-organizes, one with corresponding implications for the computational, structural, and dynamical properties of mind.
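
To give a flavor of what ‘minimizing free energy’ means operationally, here’s a minimal sketch of the standard single-variable Gaussian example (in the style of Bogacz’s free-energy tutorial): an agent refines its estimate of a hidden cause by descending precision-weighted prediction errors. The numbers and the generative mapping g(v) = v² are illustrative assumptions, not Friston’s or QRI’s model:

```python
# Hypothetical generative model: hidden cause v, observation u = g(v) + noise.
g = lambda v: v ** 2           # generative mapping (illustrative assumption)
dg = lambda v: 2 * v           # its derivative

v_prior, var_prior = 3.0, 1.0  # prior belief about v (mean, variance)
var_obs = 1.0                  # expected observation noise
u = 2.0                        # the sensory datum actually received

phi = v_prior                  # current best estimate of the hidden cause
lr = 0.01
for _ in range(2000):
    eps_p = (phi - v_prior) / var_prior  # precision-weighted prior error
    eps_u = (u - g(phi)) / var_obs       # precision-weighted sensory error
    # Gradient step that decreases variational free energy, i.e. 'surprise':
    phi += lr * (eps_u * dg(phi) - eps_p)

print(round(phi, 3))  # settles near the posterior mode (~1.57), balancing prior and data
```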

Selen Atasoy has pioneered a new method for interpreting neuroimaging which (unlike conventional approaches) may plausibly measure things directly relevant to phenomenology. Essentially, it’s a method for combining fMRI/DTI/MRI to calculate a brain’s intrinsic ‘eigenvalues’, or the neural frequencies which naturally resonate in a given brain, as well as the way the brain is currently distributing energy (periodic neural activity) between these eigenvalues. Furthermore, it seems a priori plausible that measuring these natural resonances could be a powerful technique for understanding the brain, since (1) they follow nicely predictable mathematical laws, and (2) a system with such harmonics will likely self-organize around them, and thus have a hidden predictability or elegance.
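
For intuition, the mathematical core of this approach is ordinary spectral analysis on a graph: the ‘harmonics’ are eigenvectors of the connectome’s Laplacian, and activity gets re-described as energy spread across those eigenmodes. Here’s a toy sketch, with a random graph standing in for DTI-derived structure and random activity standing in for fMRI:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                          # toy 'brain regions'
A = (rng.random((n, n)) < 0.1).astype(float)    # random structural connectome
A = np.triu(A, 1)
A = A + A.T                                     # symmetric, no self-loops
L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian

# Eigenvectors of the Laplacian play the role of the brain's natural
# 'resonant' modes; eigenvalues order them by spatial frequency.
eigvals, modes = np.linalg.eigh(L)

activity = rng.standard_normal(n)               # toy activity snapshot
coeffs = modes.T @ activity                     # project onto each harmonic
energy = coeffs ** 2                            # energy distributed per mode
print(energy[:5])
```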

Robin Carhart-Harris has worked extensively on importing key concepts from statistical physics and information theory into neurobiology. His flagship work, the entropic brain, offers entropy and self-organized criticality as key properties which constrain both the dynamics and quality of conscious states, and discusses the effects of psychedelics on these properties:

At its core, the entropic brain hypothesis proposes that the quality of any conscious state depends on the system’s entropy measured via key parameters of brain function. Entropy is a powerful explanatory tool for cognitive neuroscience since it provides a quantitative index of a dynamic system’s randomness or disorder while simultaneously describing its informational character, i.e., our uncertainty about the system’s state if we were to sample it at any given time-point. When applied in the context of the brain, this allows us to make a translation between mechanistic and qualitative properties. 

System entropy, as it is applied to the brain, is related to another current hot-topic in cognitive neuroscience, namely “self-organized criticality” (Chialvo et al., 2007). The phenomenon of self-organized criticality refers to how a complex system (i.e., a system with many constituting units that displays emergent properties at the global-level beyond those implicated by its individual units) forced away from equilibrium by a regular input of energy, begins to exhibit interesting properties once it reaches a critical point in a relatively narrow transition zone between the two extremes of system order and chaos. Three properties displayed by critical systems that are especially relevant to the present paper are: (1) a maximum number of “metastable” or transiently-stable states (Tognoli and Kelso, 2014), (2) maximum sensitivity to perturbation, and (3) a propensity for cascade-like processes that propagate throughout the system, referred to as “avalanches” (Beggs and Plenz, 2003).
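
The entropy measure itself is simple to state: given an empirical distribution over discretized brain states, Shannon entropy quantifies both the randomness of the dynamics and our uncertainty about which state we’d find on sampling. A minimal sketch with made-up state counts:

```python
import numpy as np

counts = np.array([40, 25, 20, 10, 5])  # hypothetical visits to 5 brain states
p = counts / counts.sum()               # empirical state distribution
H = -np.sum(p * np.log2(p))             # Shannon entropy, in bits
print(f"{H:.3f} bits (uniform maximum: {np.log2(len(p)):.3f})")
```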

What many overlook about Carhart-Harris’s work is how his concept of ‘entropic disintegration’ (the process by which a large influx of energy overwhelms existing attractors and causes the brain to self-organize around new equilibria) opens the door to sophisticated analogies between the self-organizational dynamics brains exhibit when pushed into high-energy states, and the self-organizational dynamics of metals when heated above their recrystallization temperature. QRI is working to extend Carhart-Harris’s work on entropic disintegration under the frame of ‘neural annealing’.
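
The annealing analogy can be made concrete with ordinary simulated annealing: a high ‘temperature’ (energy influx) lets a system escape its current attractor, and gradual cooling lets it settle into a new, often deeper, equilibrium. The landscape and cooling schedule below are toy choices for illustration, not a model of actual neural dynamics:

```python
import numpy as np

def energy(x):
    # Toy landscape with many local minima (attractors).
    return 0.1 * x ** 2 + np.sin(3 * x)

rng = np.random.default_rng(1)
x, T = 4.0, 5.0                    # start trapped away from the deepest basins
while T > 1e-3:
    x_new = x + rng.normal(scale=0.5)
    dE = energy(x_new) - energy(x)
    # Metropolis rule: always accept downhill moves; accept uphill moves
    # with probability exp(-dE/T), so high temperature melts local traps.
    if dE < 0 or rng.random() < np.exp(-dE / T):
        x = x_new
    T *= 0.999                     # slow cooling re-freezes the system

print(x, energy(x))                # ends in a low-energy basin
```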


Phenomenology lineages:

What are the natural kinds of subjective experience? Are there universal ‘laws of psychodynamics’ one can discover from introspection? How would one make tangible progress on formalizing a true science of phenomenology? These are all hard problems, and answers are rare and difficult to validate. However, there are some lineages which seem to have particularly useful, concrete, and systematic ontologies of mind:

David Pearce has written extensively about how the texture of phenomenological experience is at least as important as its intentional content, how various pharmacological substances can alter phenomenological texture in predictable ways, and how this offers various generative heuristics for future research in both phenomenology and pharmacology. A core thread in David’s work is valence realism, or the idea that some experiences really do feel better than others, and that this can bridge a key part of the is-ought distinction. Further reading: Can Biotechnology Abolish Suffering?

Buddhism is, at its core, an attempt at a complete description of phenomenology and its dynamics, including a description of what suffering is and how it arises, and a recipe for how to disrupt the dynamics which lead to suffering. Or as Bhikkhu Bodhi notes in his introduction to A Comprehensive Manual of Abhidhamma (a core Buddhist text),

The system that the Abhidhamma Piṭaka articulates is simultaneously a philosophy, a psychology, and an ethics, all integrated into the framework of a program for liberation. The Abhidhamma may be described as a philosophy because it proposes an ontology, a perspective on the nature of the real. … The project starts from the premise that to attain the wisdom that knows things ‘as they really are,’ a sharp wedge must be driven between those types of entities that possess ontological ultimacy, that is, the dhammas, and those types of entities that exist only as conceptual constructs but are mistakenly grasped as ultimately real.

There’s an enormous amount of skillful phenomenological wisdom in Buddhism, all shaped by a 2600-year evolutionary process toward usefulness and persistence. Although not all of Buddhist theory can be imported to a more mathematical frame as-is, there are surprising parallels between Buddhism’s theory of mind and QRI’s other research lineages, e.g. the ‘self’ as a leaky reification formed from self-reinforcing algorithmic processes, jhanas as resonant modes of the brain, etc. Likewise, the meta-heuristic of “how would Buddha research consciousness and suffering if he were alive today?” may be highly generative.