Recent interviews: Allen Saakyan and Adam Ford

March 15, 2019

I had the pleasure of being on Allen Saakyan’s The Simulation show along with my colleague Andrés. It was a very nice interview format; I’m looking forward to the next round.

Adam Ford also interviewed me about the Templeton Foundation’s new “Advancing Research into Consciousness” initiative, which attempts to pit theories of consciousness against each other in empirical trials. We also spoke extensively about the philosophical aspects of QRI’s work.

Meditation & Science Jam 2019, Koh Phangan, Thailand

March 15, 2019

This February I co-organized a small conference on meditation and neuroscience on Koh Phangan, Thailand; I spoke about some of the research we’re doing at QRI, with a focus on the frameworks of

  • Predictive Coding (Karl Friston’s work)
  • Connectome-specific harmonic waves (Selen Atasoy’s work)
  • Neural annealing (QRI’s extension of Robin Carhart-Harris’s work on entropic disintegration).

The video unfortunately cuts out at the hour-mark, but here’s a link to my full slides. The other speakers included Anthony Markwell, George Lebedev, Ivanna Evtukhova, and Anastasia Bawari.


Intellectual Lineages

March 9, 2019

One of the most challenging things I’ve done lately is to chart out the Qualia Research Institute’s intellectual lineages: basically, to try to enumerate the existing threads of research we’ve woven together to create our unique approach. Below is the current list, focusing on formalism, self-organized systems, and phenomenology:


Formalism lineages:

The brain is very complicated, the mind is very complicated, and the mapping between these two complicated things seems very murky. How can we move forward without getting terribly confused? And what should a formal theory of phenomenology even try to do? These are not easy questions, but the following work seems to usefully constrain what answers here might look like:

Read More

Consciousness: a Cosmological Perspective (Sharpening the Simulation Argument)

February 14, 2019

The following is an excerpt from Principia Qualia, Appendix F. I put it at the very end as a special, unexpected treat for people who read everything, but since it could provide independent support for the Symmetry Theory of Valence (STV), it deserves scrutiny.

Essentially, the following argument ties together three mysteries into a unique, falsifiable solution:

(1) The ‘Simulation Argument’: whether our universe has the hallmarks of being created through some intentional process;

(2) The ‘fine-tuning problem’ in physics: why various physical constants, such as the speed of light and the mass of the electron, seem delicately tuned to support a certain sort of complexity;

(3) The ‘Problem of Evil’: why suffering exists, and why it seems so common compared to goodness.

On this last topic, I come to a much more optimistic conclusion than most. Philosophers usually find themselves justifying why we live in a bad universe dominated by suffering; my argument suggests instead that it can be reasonable, rational, and even plausible to be hopeful about the cosmic ledger.

Read More

The Neuroscience of Meditation: Four Models

December 22, 2018

Background: I’m a philosopher at the Qualia Research Institute (QRI) working on the intersection of neuroscience and phenomenology. As part of this research and to develop my practice, I recently did a 7-day vipassana meditation retreat. The following are some perspectives, models, and hypotheses I had on how some of the ‘Western’ ideas we’re working with at QRI could connect to ‘Eastern’ contemplative practices. (Yes, I know I wasn’t supposed to think during a retreat, but enlightenment will just have to wait; I have things to do…)

Buddhism is the start of something really important

I think Buddhism does a really good job at telling a coherent and useful story about how the mind works — and the difficulty of telling stories about the mind that are both coherent and useful is, I think, drastically under-appreciated. Major credit to Buddha here. But Buddhism is also incomplete: it talks about what the big-picture dynamics of subjective experience are, but is silent about how these dynamics are implemented. Put simply, Buddhism says a lot about the mind, but nothing about the brain, and this is ultimately limiting. We need ways to connect what’s going on in the mind during meditation to neuroscience and information theory; we need more frames for what’s going on, we need better and more quantitative frames, we need meta frames for how everything fits together.

Read More

Interview & podcast

November 15, 2018

Adam Ford recently posted some bits from an interview we did a while back; an excerpt from part 1:

Perhaps the clearest and most important ethical view I have is that [consequentialist] ethics must ultimately “compile” to physics. What we value and what we disvalue must ultimately cash out in terms of particle arrangements & dynamics, because these are the only things we can actually change. And so if people are doing ethics without caring about making their theories cash out in physical terms, they’re not actually doing ethics; they’re doing art, or social signaling, or something which can serve as the inspiration for a future ethics.

The analogy I’d offer here is that we can think about our universe as a computer, and ethics as choosing a program to run on this computer. Unfortunately, most ethicists aren’t writing machine-code, or even thinking about things in ways that could be easily translated to machine-code. Instead, they’re writing poetry about the sorts of programs that might be nice to run. But you can’t compile poetry to machine-code! So I hope the field of ethics becomes more physics-savvy and quantitative (although I’m not optimistic this will happen quickly).

Eliezer Yudkowsky refers to something similar with notions of “AI grade philosophy”, “compilable philosophy”, and “computable ethics”, though I don’t think he quite goes far enough (i.e., all the way to physics).

From part 2, on why some communities seem especially confused about consciousness:

First, people don’t realize how bad most existing models of qualia & valence are. Michael Graziano argues that most theories of consciousness are worse than wrong: that they play to our intuitions but don’t actually explain anything. Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth but don’t actually explain things in a way which allows us to do anything useful.

Second, people don’t realize how important good understandings of qualia & valence are. They’re upstream of basically everything interesting and desirable.

Here’s what I think has happened, at least in the rationalist community: historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities (such as physicists and LessWrong) naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’ But over time, people forgot why the mystery was blocked off, and started to think that the mystery doesn’t exist. This leads to people actively avoiding thinking about these topics without being able to articulate why.

And from part 3, on the need for bold, testable theories (and institutions which can generate them):

I would agree with [Thomas] Bass that we’re swimming in neuroscience data, but it’s not magically turning into knowledge. There was a recent paper called “Could a neuroscientist understand a microprocessor?” which asked whether the standard suite of neuroscience methods could successfully reverse-engineer the 6502 microprocessor used in the Atari 2600 and NES. This should be easier than reverse-engineering a brain, since the chip is much smaller and simpler, and since they were analyzing it in software they had all the data they could ever ask for. But it turned out that the methods they were using couldn’t cut it, which raises the question of whether these methods can make progress on reverse-engineering actual brains. As the paper puts it, neuroscience thinks it’s data-limited, but it’s actually theory-limited.

The first takeaway from this is that even in the age of “big data” we still need theories, not just data. We still need people trying to guess Nature’s structure and figuring out what data to even gather. Relatedly, I would say that in our age of “Big Science” relatively few people are willing or able to be sufficiently bold to tackle these big questions. Academic promotions & grants don’t particularly reward risk-taking.

I also did a podcast with Adrian Nelson of Waking Cosmos where we talked about consciousness research, valence, ethics, and applying these concepts to non-biological objects (e.g. black holes).


A new theory of Open Individualism

September 1, 2018

My colleague Andrés recently wrote about various theories of personal identity, and how a lack of a clear consensus here poses a challenge to ethics. From his post:

Personal Identity: Closed, Empty, Open

In Ontological Qualia I discussed three core views about personal identity. For those who have not encountered these concepts, I recommend reading that article for an expanded discussion.

In brief:

1. Closed Individualism: You start existing when you are born, and stop when you die.

Read More

A Future for Neuroscience

August 13, 2018

I think all neuroscientists, all philosophers, all psychologists, and all psychiatrists should basically drop whatever they’re doing and learn Selen Atasoy’s “connectome-specific harmonic wave” (CSHW) framework. It’s going to be the backbone of how we understand the brain and mind in the future, and it’s basically where predictive coding was in 2011, or where blockchain was in 2009. Which is to say, it’s destined for great things and this is a really good time to get into it.

I described CSHW in my last post as:

Selen Atasoy’s Connectome-Specific Harmonic Waves (CSHW) is a new method for interpreting neuroimaging which (unlike conventional approaches) may plausibly measure things directly relevant to phenomenology. Essentially, it’s a method for combining fMRI/DTI/MRI to calculate a brain’s intrinsic ‘eigenvalues’, or the neural frequencies which naturally resonate in a given brain, as well as the way the brain is currently distributing energy (periodic neural activity) between these eigenvalues.
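The math at the heart of CSHW is a standard piece of spectral graph theory: the harmonics are the eigenvectors of the connectome’s graph Laplacian, ordered by eigenvalue from coarse, low-frequency modes to fine, high-frequency ones. Here’s a minimal sketch of that decomposition using a tiny random “connectome”; the matrix, its size, and the activity vector are purely illustrative (the real pipeline builds the graph from DTI tractography over roughly ten thousand cortical surface vertices):

```python
import numpy as np

# Toy connectome: a small symmetric weighted adjacency matrix standing in
# for the DTI-derived cortical graph used in the actual CSHW pipeline.
rng = np.random.default_rng(0)
A = rng.random((8, 8))
A = (A + A.T) / 2          # symmetrize connection weights
np.fill_diagonal(A, 0)     # no self-connections

# Graph Laplacian L = D - A, where D is the diagonal degree matrix.
D = np.diag(A.sum(axis=1))
L = D - A

# The "connectome harmonics" are the eigenvectors of L; eigh returns the
# eigenvalues in ascending order, i.e. from the coarsest mode upward.
eigvals, harmonics = np.linalg.eigh(L)

# A snapshot of brain activity (e.g. an fMRI volume mapped onto the same
# nodes) can then be projected onto the harmonics to see how energy is
# distributed across them -- the "power spectrum" CSHW analyses work with.
activity = rng.random(8)
coefficients = harmonics.T @ activity
```

Because the harmonics form an orthonormal basis, the projection is lossless: `harmonics @ coefficients` recovers the original activity vector exactly, and comparing the magnitude of `coefficients` across conditions is what lets the framework ask how a given brain state distributes energy between its natural modes.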

This post is going to talk a little more about how CSHW works, why it’s so powerful, and what sorts of things we could use it for.

Read More

Seed ontologies

June 3, 2018

Chatting with people at a recent conference on consciousness (TSC2018), I had the feeling of strolling through an alchemist’s convention: lots of optimistic energy & clever ideas, but also a strong sense that the field is pre-scientific. In short, there was a lot of overconfident hand-waving.

But there were also a handful of promising ideas that stood out, that seemed like they could form at least part of the seed for an actual science of qualia; something that could transform the study of mind from alchemy to chemistry. Today I want to list these ideas, and say a few things about their ecosystem.

Read More

Why are humans good?

March 19, 2018

Are humans worthy of colonizing the universe? Are we particularly awesome and benevolent, more so than a random mind sampled from mindspace?

The following isn’t a full argument, but I want to point toward two things humans seem to do:

  • First, our brains are set up in such a way that we stochastically seek out pleasant states of mind (I think this is a contingent fact about humans, not a universal for intelligent beings);
  • Second, we model other beings through empathy (i.e. we’re ‘qualia resonators’ and take on aspects of the phenomenology of those around us, insofar as we can perceive it).

These two things combine in a very interesting way: left to our individual devices, we tend to adapt our environment such that it embodies many beings with positive valence, and their positive valence becomes our positive valence. The domestication and adaptation of wolves is a good example: dogs are basically four-legged happiness machines that we keep around because their fantastic qualia rub off on us.

Now of course there are a million caveats: we’re often bad at these things, empathy-based behavior has a ton of failure modes, this doesn’t address hedonic set-points or game-theoretic traps, etc. But the way these two things interact is a big reason I like humanity, and want to preserve it.