Interview & podcast

November 15, 2018

Adam Ford recently posted some bits from an interview we did a while back. An excerpt from part 1:

Perhaps the clearest and most important ethical view I have is that [consequentialist] ethics must ultimately “compile” to physics. What we value and what we disvalue must ultimately cash out in terms of particle arrangements & dynamics, because these are the only things we can actually change. And so if people are doing ethics without caring about making their theories cash out in physical terms, they’re not actually doing ethics; they’re doing art, or social signaling, or something which can serve as the inspiration for a future ethics.

The analogy I’d offer here is that we can think about our universe as a computer, and ethics as choosing a program to run on this computer. Unfortunately, most ethicists aren’t writing machine-code, or even thinking about things in ways that could be easily translated to machine-code. Instead, they’re writing poetry about the sorts of programs that might be nice to run. But you can’t compile poetry to machine-code! So I hope the field of ethics becomes more physics-savvy and quantitative (although I’m not optimistic this will happen quickly).

Eliezer Yudkowsky refers to something similar with notions of “AI grade philosophy”, “compilable philosophy”, and “computable ethics”, though I don’t think he quite goes far enough (i.e., all the way to physics).

From part 2, on why some communities seem especially confused about consciousness:

First, people don’t realize how bad most existing models of qualia & valence are. Michael Graziano argues that most theories of consciousness are worse than wrong: they play to our intuitions but don’t actually explain anything. Computationalism, functionalism, fun theory, ‘hedonic brain regions’, ‘pleasure neurochemicals’, the reinforcement learning theory of valence, and so on all give the illusion of explanatory depth, but none of them explains things in a way which allows us to do anything useful.

Second, people don’t realize how important good understandings of qualia & valence are. They’re upstream of basically everything interesting and desirable.

Here’s what I think has happened, at least in the rationalist community: historically, consciousness research has been a black hole. Smart people go in, but nothing comes out. So communities (such as physicists and LessWrong) naturally have an interest in putting up a fence around the topic with a sign that says ‘Don’t go here!’ But over time, people forgot why the mystery was blocked off, and started to think that the mystery doesn’t exist. This leads to people actively avoiding thinking about these topics without being able to articulate why.

And from part 3, on the need for bold, testable theories (and institutions which can generate them):

I would agree with [Thomas] Bass that we’re swimming in neuroscience data, but it’s not magically turning into knowledge. There was a recent paper called “Could a neuroscientist understand a microprocessor?” which asked whether the standard suite of neuroscience methods could successfully reverse-engineer the 6502 microprocessor used in the Atari 2600 and NES. This should be easier than reverse-engineering a brain, since the chip is far smaller and simpler, and since they were analyzing it in software, they had all the data they could ever ask for. But it turned out that the methods they were using couldn’t cut it. This raises the question of whether these methods can make progress on reverse-engineering actual brains. As the paper puts it, neuroscience thinks it’s data-limited, but it’s actually theory-limited.

The first takeaway from this is that even in the age of “big data” we still need theories, not just data. We still need people trying to guess Nature’s structure and figuring out what data to even gather. Relatedly, I would say that in our age of “Big Science” relatively few people are willing or able to be sufficiently bold to tackle these big questions. Academic promotions & grants don’t particularly reward risk-taking.

I also did a podcast with Adrian Nelson of Waking Cosmos where we talked about consciousness research, valence, ethics, and applying these concepts to non-biological objects (e.g. black holes):
