When someone on Reddit says “ELI5”, it means “I’m having a hard time understanding this, could you explain it to me like I’m 5 years old?”
Here’s my attempt at an “ELI5” for Principia Qualia.
We can think of conscious experiences as represented by a special kind of mathematical shape. The feeling of snowboarding down a familiar mountain early in the morning with the air smelling of pine trees is one shape; the feeling of waking up to your new kitten jumping on your chest and digging her claws into your blankets is another shape. There are as many shapes as there are possible experiences.
Now, the interesting part: if we try to sort experiences by how good they feel, is there a pattern to which shapes represent more pleasant experiences? I think there is, and I think this depends on the symmetry of the shape.
There’s a lot of evidence for this, and if it’s true, it’s super-important! It could lead to way better painkillers, actual cures for things like depression, and it would also give us a starting point for turning consciousness research into a real science (just like how alchemy turned into chemistry). Basically, it would totally change the world.
But first things first: we need to figure out whether it’s true.
Put simply, Principia Qualia (click for full version) is a blueprint for building a new Science of Qualia.
PQ begins by considering a rather modest question: what is emotional valence? What makes some things feel better than others?
This sounds like the sort of clear-cut puzzle affective neuroscience should be able to solve, yet all existing answers to this question are incoherent or circular. Giulio Tononi’s Integrated Information Theory (IIT) is an example of the kind of quantitative theory that could address valence in a principled way, but unfortunately the current version of IIT is both flawed and incomplete. I offer a framework to resolve & generalize IIT by distilling the problem of consciousness into eight discrete & modular sub-problems (of which IIT directly addresses five).
Finally, I return to valence, and offer a crisp, falsifiable hypothesis as to what it is in terms of something like IIT’s output, and discuss novel implications for neuroscience.
The most important takeaways are:
- My “Eight Problems” framework is a blueprint for a “full-stack” science of consciousness & qualia. Addressing all eight problems is both necessary & sufficient for ‘solving’ consciousness.
- One of IIT’s core shortcomings is that it doesn’t give any guidance for what its output means. I offer some useful heuristics.
- Emotional valence could be the easiest quale to reverse-engineer– the “C. elegans of qualia.”
I. Why some things feel better than others: the view from neuroscience (2600 words)
Affective neuroscience knows a lot about valence… but its knowledge is very haphazard, disorganized, and often circular. The techniques it uses are good at assembling data, but not so good at finding clear patterns in the data, or knowing what data to gather in the first place.
II. Clarifying the Problem of Valence (900 words)
It could be that there are no clean answers to be found, and this is the best we can do. But I don’t buy that- I think we’re just looking at it from the wrong level of abstraction.
To really get traction on the problem of what makes some things feel better than others, we need to look for universal principles true in all conscious systems, not just things that are often true in the human brain.
This also implies that any attempt to solve valence which tries to avoid addressing the larger mystery of consciousness simply won’t work.
III. The Integrated Information Theory of consciousness (1900 words)
IIT is an attempt at a fully quantitative theory of consciousness. Think of it like a mathematical translation function: give it a system (like the brain), and it gives you a mathematical representation of what it feels like to be that system. It’s currently the best (and only) attempt at a full theory of consciousness we have.
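IIT’s actual Φ calculation is far more involved than anything worth showing here, but the core intuition (that the whole system carries information beyond its parts) can be made concrete with a toy measure. The sketch below uses plain mutual information between two halves of a system as a crude stand-in for ‘integration’; it is an illustrative assumption of mine, not IIT’s real math:

```python
import math
from collections import Counter

def mutual_information(joint):
    """Mutual information I(A;B) in bits, from a joint distribution
    {(a, b): probability}. A crude proxy for 'integration' -- how much
    the whole system carries beyond its two halves. NOT IIT's phi."""
    pa, pb = Counter(), Counter()
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Two perfectly coupled bits: one full bit of 'integration'
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))  # 1.0
# Two independent fair bits: zero 'integration'
print(mutual_information({(a, b): 0.25 for a in (0, 1) for b in (0, 1)}))  # 0.0
```

The point of the toy: a coupled system scores high, a system that is just two independent parts scores zero. IIT’s Φ is built to capture a similar (but much subtler) distinction.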
IV. Critiques of IIT (2600 words)
Unfortunately, the current iteration of IIT is probably wrong, and definitely incomplete. There are some problems with its math, it’s a little vague on what its input should be, and it says almost nothing about what its output means.
V. Alternative versions of IIT: Perceptronium and FIIH (1500 words)
Other people are trying to translate IIT’s math into the language of physics in order to fix these problems. Unfortunately, these attempts are mostly stuck at the ‘idea’ stage.
VI. Summary and synthesis: eight problems for a new science of consciousness (900 words)
(Sections I-V are a literature review; Sections VI onward are original work.)
Even if IIT is wrong, we can use it as a template for understanding what tasks a theory of consciousness should be able to do:
These categories then break down into well-defined sub-problems:
Solve all eight problems, and you’ve solved consciousness. (Easy, right??)
VII. Three principles for a mathematical derivation of valence (1000 words)
IIT’s general approach implies that the problem of valence is a problem of mathematical interpretation: i.e.,
“Given a mathematical representation of my qualia (e.g., IIT’s output), what mathematical property of this representation corresponds to how pleasant it feels to be me?”
Here’s a Venn Diagram of assumptions:
(I explain each of these further in the paper.)
VIII. Distinctions in qualia: charting the explanation space for valence (1000 words)
If valence is some mathematical property of IIT’s output (or some future version of IIT which addresses its flaws), then what kind of property should we look for?
Here’s the clever heuristic: if we assume consciousness has a mathematical representation, then for any distinction you can make about qualia, you get a corresponding distinction in the domain of mathematics ‘for free’. And vice-versa! We can use this to explore what kind of mathematical property valence could correspond with:
- Valence seems global rather than local: when you eat a candy bar, everything feels good, not just your mouth;
- Valence seems simple rather than complex: i.e., in terms of Kolmogorov complexity;
- Valence seems atomic rather than composite: it’s a building block for more complex emotions;
- Valence seems intuitively important rather than intuitively trivial: it’s hard to miss.
Why this matters: if valence is [global, simple, atomic, intuitively important], then its mathematical representation is too. This significantly narrows the search space.
IX. Summary of heuristics for reverse-engineering the pattern for valence (2000 words)
This is a “throw everything but the kitchen sink at the problem” section: let’s list facts that we know, and clever heuristics, and hope some patterns emerge. Friston’s Free Energy Principle, Seth’s Predictive Error Minimization, Smolensky’s Computational Harmony metric, and my own “Non-adaptedness Principle” all make an appearance here. Also, I have this neat triangle graphic describing the state space of valence (it’s not just a one-dimensional pain-pleasure axis):
X. A simple hypothesis about valence (2000 words)
There’s a lot of context in the paper that I can’t do justice to in this summary, but given a mathematical object isomorphic to a system’s qualia, I think I’ve identified the mathematical property which corresponds to its valence. This property is the object’s symmetry.
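As a toy illustration of what ‘measuring the symmetry of a mathematical object’ could even mean, here’s a sketch that scores a small connectivity matrix by its distance from its own transpose. This is my illustrative stand-in: the actual proposal concerns a much richer object and richer symmetries (e.g., its full automorphism group), but the flavor is the same:

```python
import math

def symmetry_score(m):
    """Toy symmetry metric for a square connectivity matrix: 1.0 when the
    matrix equals its transpose, dropping toward 0 as asymmetry grows.
    Purely illustrative -- a qualia-isomorphic object's symmetry would be
    something much richer than matrix transposition-symmetry."""
    n = len(m)
    asym = math.sqrt(sum((m[i][j] - m[j][i]) ** 2
                         for i in range(n) for j in range(n)))
    scale = 2 * math.sqrt(sum(v * v for row in m for v in row))
    return 1.0 - asym / scale if scale else 1.0

symmetric = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # undirected triangle
lopsided  = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]  # directed 3-cycle

print(symmetry_score(symmetric))  # 1.0
print(symmetry_score(lopsided))   # ≈ 0.29
```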
XI. Testing this hypothesis today (1700 words)
Obviously, a hypothesis is only as good as the predictions it makes. I propose three specific tests:
(1) More pleasant brain states should be more compressible (as measured by zipping EEG data, controlling for degree of consciousness);
(2) Low-power rhythmic TMS of consciousness centers such as the thalamus at consonant frequencies should feel dramatically better than such stimulation at subtly dissonant frequencies;
(3) Similarly, consonant stimulation of the vagus nerve should feel better than stimulation with dissonant patterns (upshot: possibly testable with consumer-grade gear).
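Prediction (1) is the easiest to mock up. The sketch below zips two synthetic signals (a clean harmonic one and a noise-swamped one) and compares how compressible they are. Real EEG preprocessing would be far messier, and the ‘valence’ labels here are pure assumption on my part; this only shows the shape of the analysis:

```python
import math
import random
import zlib

def compressibility(samples):
    """Fraction of size saved by zlib on a byte-quantized signal --
    a crude stand-in for the proposed 'zip the EEG data' test."""
    raw = bytes(max(0, min(255, int(s))) for s in samples)
    return 1.0 - len(zlib.compress(raw, 9)) / len(raw)

random.seed(0)
n = 4096
# A clean harmonic signal (hypothetically 'high valence'): 16 exact cycles
ordered = [128 + 100 * math.sin(2 * math.pi * 16 * t / n) for t in range(n)]
# The same signal swamped by noise (hypothetically 'low valence')
noisy = [s + random.gauss(0, 60) for s in ordered]

print(compressibility(ordered) > compressibility(noisy))  # True
```

Controlling for degree of consciousness, the hypothesis predicts the pleasant-state recordings should land on the ‘ordered’ side of this comparison.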
XII. Taking stock (1800 words)
If I’m right that symmetry in the mathematical object isomorphic to a conscious experience maps to valence, it allows us to recontextualize certain things in neuroscience in interesting ways. For instance:
On the anatomy & network topology of valence:
My hypothesis strongly implies that ‘hedonic’ brain regions influence mood by virtue of acting as ‘tuning knobs’ for symmetry/harmony in the brain’s consciousness centers. Likewise, nociceptors, and the brain regions which gate & interpret their signals, will be located at critical points in brain networks, able to cause large amounts of salience-inducing antisymmetry very efficiently. We should also expect rhythm to be a powerful tool for modeling brain dynamics involving valence- for instance, we should be able to extend (Safron 2016)’s model of rhythmic entrainment in orgasm to other sorts of pleasure.
On valence & neuropharmacology:
Non-opioid painkillers and anti-depressants are complex, but it may turn out that a core mechanism by which they act is by introducing noise into neural activity and connectivity, respectively. This would explain the odd findings that acetaminophen blunts acute pleasure (Durso, Luttrell, and Way 2015), and that anti-depressants can induce long-term affective flattening.
This would also predict that psychedelic substances, although often pleasurable, actually increase emotional variance by biasing the brain toward symmetrical structure, and could result in enhanced pain if this structure is then broken– i.e., they are in this sense the opposite of painkillers. Additionally, we may find that some uncomfortable sensations are caused by ‘competing symmetries’- patterns that are internally symmetrical but not symmetrical to each other, which would predict complex and sometimes destructive interactions between different normally-pleasurable activities and psychoactives.
Furthermore, I would anticipate that severe tinnitus could lead to affective flattening for similar interference-based reasons: insofar as the brain’s subconscious preprocessing can’t tune it out, the presence of a constant pattern in consciousness would likely make it more difficult to generate symmetries (valence) on-the-fly. This would also imply that the specific frequency pattern of the perceived tinnitus sensation may matter more than is commonly assumed.
On self-organization & deep learning:
My hypothesis implies that symmetry/harmony is a core component of the brain’s organizational & computational syntax: specifically, we should think of symmetry as one (of many) dynamic attractors in the brain.
This suggests that mammals got a bit lucky that we evolved to seek out pleasure! But not that lucky, since symmetry is a very functionally-relevant and useful property for systems to self-organize around, for at least two reasons:
First, self-organizing systems such as the brain must develop some way to perform error-correction, measure & maintain homeostasis, and guide & constrain morphological development. Symmetry-as-a-dynamic-attractor is a profoundly powerful solution to all of these which could evolve in incrementally-useful forms, and so symmetry-seeking seems like a common, perhaps nigh-universal evolutionary path to take. Indeed, it might be exceedingly difficult to develop a system with complex adaptive traits without heavy reliance upon principles of symmetry.
Second, the brain embodies principles of symmetry because it’s an efficient structure for modeling our world. (Lin and Tegmark 2016) note that physics and deep learning neural networks display cross-domain parallels such as “symmetry, locality, compositionality and polynomial log-probability”, and that deep learning can often avoid combinatorial explosion due to the fact that the physical world has lots of predictable symmetries, which enable unusually efficient neural network encoding schemes.
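As a minimal illustration of the first point (redundancy/symmetry enabling error-correction), here’s the classic repetition code: the codeword’s internal redundancy is exactly what lets a majority vote undo a corrupted bit. This is textbook coding theory, not anything specific to brains:

```python
def encode(bits, r=3):
    """Repetition code: each bit is transmitted r times. The codeword's
    internal redundancy (a simple kind of symmetry) is what makes
    error-correction possible at all."""
    return [b for b in bits for _ in range(r)]

def decode(code, r=3):
    """Majority vote over each block of r repeats."""
    return [1 if sum(code[i:i + r]) > r // 2 else 0
            for i in range(0, len(code), r)]

msg = [1, 0, 1, 1]
code = encode(msg)
code[4] ^= 1                # corrupt one transmitted bit
print(decode(code) == msg)  # True: the error is corrected
```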
Why do we find pure order & symmetry boring, and not particularly beautiful? I posit boredom is a very sophisticated “anti-wireheading” technology which prevents the symmetry/pleasure attractor basin from being too ‘sticky’, and may be activated by an especially low rate of Reward Prediction Errors (RPEs). Musical features which add mathematical variations or imperfections to the structure of music– e.g., syncopated rhythms (Witek et al. 2014), vocal burrs, etc– seem to make music more addictive and allow us to find long-term pleasure in listening to it, by hacking the mechanism(s) by which the brain implements boredom.
XIII. Closing thoughts (200 words)
I’ve used valence as a ‘pilot project’, but ultimately, the goal is to build a full Science of Qualia — something that can turn qualia research from alchemy into chemistry, and unify our different modes of knowing in neuroscience.
If this intrigues you, I suggest checking out the full paper.
I’m proud to announce the immediate availability of Principia Qualia (download link).
I started this project over six years ago, when trying to understand the problem of valence, or what makes some things feel better than others. Over time, I realized that I couldn’t solve valence without at least addressing consciousness, and the project grew into a “full-stack” approach to understanding qualia.
The way we speak about consciousness is akin to how we spoke about alchemy in the middle ages.
Over time, and with great effort, we turned alchemy into chemistry.
Principia Qualia attempts to start a similar process for consciousness- to offer a foundation for a new Science of Qualia.
The first core takeaway is that ‘the problem of consciousness’ is actually made up of several smaller problems:
A solution to all of these subproblems would be both necessary and sufficient to completely solve the problem of consciousness.
The second core takeaway is that if we assume that consciousness is quantifiable (“for any conscious experience, there exists a mathematical object isomorphic to it”), then we can think of valence as some mathematical property or pattern within consciousness. The answer to what makes some experiences feel better than others will be found in math, not magic.
The third core takeaway is that, based on this framework, we can offer a specific hypothesis for exactly what valence is. Here’s a picture to frame this debate:
… now, either my hypothesis (found in Section X) is correct, or it’s incorrect. But it’s testable.
This is just a taste- I cover a lot of ground in Principia Qualia, and I won’t be able to adequately summarize it here. If you want to understand consciousness better, go read it!
I’ll be holding office hours in Berkeley next week for discussion & clarification of these topics– feel free to message me if you’d like to schedule something.
The two topics I’ve been thinking the most about lately:
- What makes some patterns of consciousness feel better than others? I.e. can we crisply reverse-engineer what makes certain areas of mind-space pleasant, and other areas unpleasant?
- If we make a smarter-than-human Artificial Intelligence, how do we make sure it has a positive impact? I.e., how do we make sure future AIs want to help humanity instead of callously using our atoms for their own inscrutable purposes? (for a good overview on why this is hard and important, see Wait But Why on the topic and Nick Bostrom’s book Superintelligence)
I hope to have something concrete to offer on the first question Sometime Soon™. And while I don’t have any one-size-fits-all answer to the second question, I do think the two issues aren’t completely unrelated. The following outlines some possible ways that progress on the first question could help us with the second question.
This originally started as a write-up for my friend James, but since it may be of general interest I decided to blog it here.
Effective Altruism (EA) is a progressive social movement that says altruism is good, but provably effective altruism is way better. According to EA this is particularly important, because charities can vary in effectiveness by 10x, 100x, or more, so being a little picky in which charity you give your money to can lead to much better outcomes.
To paraphrase Yudkowsky and Hanson, a lot of philanthropy tends to be less about accomplishing good, and more about signaling to others (and yourself) that you are a Good Person Who Helps People. Public philanthropy in particular can be a very effective way to ‘purchase’ social status and warm fuzzies, and while any philanthropist or charitable organization you ask would swear up and down that they were only interested in doing good, often the actual good can get lost in the shuffle.
EA tries to turn this around: it seldom discourages philanthropy, but it’s trying to build a culture and community where people gain more status and more warm fuzzies if their philanthropy is provably effective. The community is larger and more dynamic than I can give it credit for here, but some notable sites include givewell.org, givingwhatwecan.org, 80000hours.org, eaglobal.org, and eaventures.org. Peter Singer also has several books on the topic.
I love this movement, I endorse this movement, and I wish it had been founded a long time ago. But I have a few nits to pick about its quantitative foundation.
Okay, so EA likes measurements. What do they use?
QALYs are calculated as the average number of additional years of life gained from an intervention, multiplied by a utility judgment of the quality of life in each of those years. For example, a person might be placed on hypertension therapy for 30 years, which prolongs his life by 10 years at a slightly reduced quality level of 0.9. In addition, the need for continued drug therapy reduces his quality of life by 0.03. Hence, the QALYs gained would be 10 x 0.9 – 30 x 0.03 = 8.1 years. The valuations of quality may be collected from surveys; a subjective weight is given to indicate the quality or utility of a year of life with that disability.
The idea of QALYs can also be applied to years of life lost due to sickness, injuries or disability. This can illustrate the societal impact of disease. For example, a year lived following a disabling stroke may be judged worth 0.8 normal years. Imagine a person aged 55 years who lives for 10 years after a stroke and dies at age 65. In the absence of the stroke he might be expected to live to 72 years of age, so he has lost 7 potential years. As his last 10 years were in poor health, they were quality-adjusted downward to an equivalent of 8 years, so the quality-adjusted years of life lost would be 7 + (10 – 8), or 9.
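The two worked examples above are just arithmetic, and can be written down directly (a sketch of the standard QALY bookkeeping, using the numbers from the text):

```python
def qalys_gained(years_gained, quality, years_on_therapy, therapy_burden):
    """QALYs from an intervention: the extra years weighted by their
    quality, minus the quality cost of the therapy over its duration."""
    return years_gained * quality - years_on_therapy * therapy_burden

def qalys_lost(potential_years_lost, impaired_years, impairment_quality):
    """Quality-adjusted years lost: potential years lost outright, plus
    the quality shortfall of the years lived in poor health."""
    return potential_years_lost + impaired_years * (1 - impairment_quality)

# Hypertension example: 10 extra years at quality 0.9, against 30 years
# of a therapy costing 0.03 quality per year
print(round(qalys_gained(10, 0.9, 30, 0.03), 2))  # 8.1
# Stroke example: 7 potential years lost, plus 10 years lived at 0.8
print(round(qalys_lost(7, 10, 0.8), 2))           # 9.0
```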
In short, if we’re giving money to charity, we should look for opportunities which give a high QALY-per-dollar ratio (e.g., Malaria nets in Africa) and shun those that don’t (e.g., invasive cancer surgery on bedridden 90-year-olds). Every so often, givewell.org updates their shortlist for trusted, validated charities that can produce lots of QALYs for your donation. There are also different ‘flavors’ of the QALY model that focus on different things: the Disability-Adjusted Life Year (DALY), the Wellbeing-Adjusted Life Year (WELBY), and so on.
The QALY, a foundation of wire, particle board, and spackle: way better than nothing, but not ideal.
It’s great we have these metrics: they make intuitive sense, they’re handy for quickly summarizing the expected benefit of various humanitarian interventions, and they’re a lot better than nothing. But jeez, they depend on a lot of complex, kludgy, top-down simplifications. For instance, most variants of the QALY tend to:
– treat utility, quality-of-life, absence of disease, and health as the same thing;
– assume health states are well-defined and have exact durations;
– have no accommodation for differences in how people deal with health burdens, or for different baselines;
– essentially ignore the effects of interventions that might do something other than reduce some burden, generally disease burden (may be arguable);
– have all the limitations and biases characteristic of subjective evaluations and self-reports;
… and so on. The QALY is a wonderfully useful tool, but it’s very far from “carving reality at the joints”, and it’s trivial to think up long lists of scenarios where it’d be a terrible metric.
To be fair, this problem- measuring well-being- is inherently difficult, and it’s not that people don’t want to ‘build a better QALY’, it’s that innovation here is so constrained because it’s very, very hard to design and validate something better, especially when you have something simple that already sorta works. (The fact that QALYs are so closely tied in with the medical establishment, never a paragon of ideological nimbleness, doesn’t help things either.)
“Okay,” you say, “you’re skeptical of using the QALY to evaluate the effect of charitable interventions on well-being. What should we use instead?”
In the short term: let’s augment/validate the QALY with some bottom-up, quantitative proxies for well-being.
What gives me hope that the QALY is a brief stop-over point toward a better metric is that there’s so much overlap between the EA community and the Quantified Self (QS) community, and the QS community is absolutely nuts about novel, clever, and useful ways to measure stuff. The QS meetups I’ve attended have had people present on things ranging from the relative effectiveness of 5+ dating sites for meeting women and the quality of follow-up interactions from each, to the effects of 30+ different foods on poop consistency. Quantifying and tracking well-being is well within the QS scope!
So how would a QSer measure well-being in a more bottom-up, data-driven way?
First, there’s a lot that a QALY-style factor analysis does right. It gives us an expected effect on well-being from the environment, and helps sort through causes of well-being or suffering, something that purely biological proxies can’t do. So we wouldn’t throw it out, but we should set our sights on augmenting and validating it with QS-style data collection.
Second, we’d look for some really good (and ideally, really really simple) bottom-up, biological proxies that track well-being. I suspect we could throw something together from stress hormone levels (e.g., cortisol) and their dynamic range, and possibly heart-rate variability.
Third, we’d crunch the numbers on possible biological proxies, pick the best ones, and validate the heck out of them. Easier said than done, but simple enough in principle.
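Here’s a sketch of what step two might look like in practice: z-score a couple of hypothetical biological measurements (cortisol and heart-rate variability) and combine them into one composite score. The variable names, units, and equal weighting are all illustrative assumptions of mine, not validated choices:

```python
import statistics

def zscores(xs):
    """Standardize a series to mean 0, sd 1."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def wellbeing_proxy(cortisol, hrv):
    """Hypothetical composite: lower cortisol and higher heart-rate
    variability both push the score up. Equal weights are an assumption
    that real validation work would have to revisit."""
    return [0.5 * h - 0.5 * c
            for c, h in zip(zscores(cortisol), zscores(hrv))]

# Made-up daily measurements for one subject
cortisol = [14.0, 18.5, 11.2, 22.0, 9.8]   # morning cortisol, ug/dL
hrv      = [62.0, 41.0, 70.0, 35.0, 78.0]  # RMSSD, ms

scores = wellbeing_proxy(cortisol, hrv)
print(scores.index(max(scores)))  # 4: the lowest-cortisol, highest-HRV day
```

The validation step would then be correlating something like these scores against QALY-style survey measures to see where the two agree and where they diverge.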
Why do we need to bother with biological proxies? Because they’re strong in the same ways that top-down metrics tend to be weak. E.g.,
– they don’t involve any subjective component… they may not tell the whole story, but they don’t lie;
– they can be frequently measured, and frequent measurements are important, because they let you have tight feedback loops;
– they can measure and compensate for the phenomenon of hedonic adaptation;
– there’s a significant amount of literature in animal models that explores and validates this approach, so we wouldn’t have to start from scratch.
Obviously we couldn’t do this sort of QS-style data collection on everybody all the time, but even if just a few people did it, it’d go pretty far toward improving the QALY for everybody else, by seeing where our predictions of environmental effects on well-being hold up and where they break down, at least compared to the window of biological data we can directly measure.
In the medium term: let’s figure out some common unit by which to measure human and non-human animal well-being.
One of the principles of EA is that all sentient beings matter- that humans don’t have a monopoly on ethical significance. I agree! But how do we compare different organisms?
First, those biological proxies from step (1) will definitely come in handy. Humans, dogs, cows, and chickens all share the same basic brain architecture, which implies that if we find something that’s a good proxy for well-being or suffering in humans, it should be at least a not-so-terrible proxy for the same in basically all non-human animals too.
But for numerical comparisons we’ll need more than just that. I suspect we’ll need some sort of a plausible method for adjusting for how sentient and/or capable of suffering something is. Most people, for instance, would agree that mice can suffer, and that mouse suffering is a bad thing. But can they suffer as much as humans can? Most people would say ‘no’, but we can’t put good numbers or confidence ranges on this. My intuition is that almost everything with a brain shares the basic emotional architecture, and so is technically capable of suffering, but various animals will differ significantly in their degree of consciousness, which acts as a multiplier of suffering/well-being for the purposes of ethics. E.g., ethical significance = [suffering]*[degree of consciousness]. The capacity to have a strong sense of self (i.e., the sense that there is a self that is being harmed) may also be important, which likely has a neuroanatomical basis. Call it the SQALY (Sentience and Quality-Adjusted Life-Year).
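The SQALY idea is just a multiplier on top of the QALY, which makes it trivial to sketch; the species multipliers below are invented placeholders to show the shape of the calculation, not empirical estimates:

```python
def sqaly(qaly_change, degree_of_consciousness):
    """The back-of-envelope formula from the text: ethical significance =
    [suffering or well-being] * [degree of consciousness]."""
    return qaly_change * degree_of_consciousness

# Invented placeholder multipliers (assumptions, not empirical values!)
consciousness = {"human": 1.0, "cow": 0.3, "chicken": 0.1, "mouse": 0.05}

# One quality-adjusted year of relieved suffering, weighted per species:
for species, m in consciousness.items():
    print(f"{species}: {sqaly(1.0, m)} SQALYs")
```

The hard scientific work, of course, is entirely hidden inside where those multipliers come from.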
The road forward here is murky, but important. I hope people are thinking about ways to quantify this, because one of EA’s core strengths is that it argues that human and animal well-being is in principle commensurable, and quantifiable. My off-the-cuff intuition is that a mash-up of Panksepp’s work on mapping structural commonalities in “emotion centers” across vertebrate brains, and a comparison of relative amounts of brain mass in areas thought most significant for consciousness, could bear fruit. But I dunno. It’ll be tough to put numbers on this.
In the long term: let’s build a rigorous mathematical foundation that explains what suffering *is*, what happiness *is*, and how we can *literally* do utilitarian calculus.
EA wants to maximize well-being and minimize suffering. Cool! But what are these things? These tend to be “I know it when I see it” phenomena, and usually that’s good enough. But eventually we’re gonna need a properly rigorous, crisp definition of what we’re after. To paraphrase Max Tegmark: ’Some arrangements of particles feel better than others. Why?’ — if pain and pleasure are patterns within conscious systems… what can we say about these patterns? Can we characterize them mathematically?
I’ve spent most of my creative output of the last couple years on this problem, and though I don’t have a complete answer yet, I think I know more than I did when I started. It’s not really ready to share publicly, but feel free to track me down in private and I can share what I think is reasonable to say, and what I think is plausible to speculate.
We don’t need an answer to this right away. But technology is taking us to strange places (see e.g., If a brain emulation has a headache, does it hurt?) where it’ll be handy to have some guide for detecting suffering other than our (very fallible) intuitions.
This is way too much to worry about! And isn’t it a better use of resources to actually help sentient creatures we *know* are in pain, rather than slog through all these abstract philosophical details that seem impossible to overcome?
Very possibly! But I don’t think this would be wasted effort, at all.
First: EA is all about metrics. True, these metrics can be very difficult to improve, and harder still to validate, but it’s the sort of thing that pays huge long-term dividends. And if we don’t do it, is there anybody else who will? Abe Lincoln has this (possibly apocryphal) quote, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” The QALY is EA’s axe, and it’s worth trying to sharpen.
Second, this all sounds very hard, but it may be easier than it seems. This stuff reminds me of what Paul Graham said about startups: the best time to work on something is when technology has made what you want to do possible, but before it becomes obvious. With all the consciousness research that’s been going on, I suspect we have many more good tools to work with than most people- even (or perhaps especially?) ethicists- realize. We may not have solid answers to these questions, but we’re increasingly getting a decent picture as to what good answers will look like.
Third, and I think just as important, I fear that EA is going to be vulnerable to mission drift and ideological hijacking. This is not a criticism of EA, but rather a comment that almost everybody would love to think they’re engaging in effective altruism! Every activist and culture warrior thinks they’re improving the world by their actions, and that their special brand of activism is the most provably effective way to do so. And so I think it’s very plausible that people with external agendas will be highly attracted to EA (whether consciously or not), especially as it starts to gain traction. EA is going to need a sturdy rudder to avoid diversions, and a strong (yet not overactive!) immune system to fend off ideological hijacking. I can’t help with the latter, but I do think improvements in the core metric EA uses could help with ‘organic’ long-term mission drift.
Concrete recommendations for EA:
There’s no one person ‘in charge’ of EA, so the best, most effective, and least annoying way to get something done is generally to organize it yourself. That said, here are a few things I think the community could (and should) do:
– Consider forming an “EA Foundational Research” team for the sort of inquiry that might not happen effectively on its own. It need not be super formal- even an ‘EA Foundational Issues Journal Club’ could be helpful and would be fun;
– Foster relationships with receptive scientific communities to help with each step (short/medium/long)… the way Randal Koene’s carboncopies coordinates research between labs might be an interesting model to emulate;
– Think deeply about how to shield EA from unwanted appropriation by culture warriors without being too insular, and/or how to pick the culture battles worth fighting;
– Keep being awesome.
I went to Spark Weekend SF last weekend, a ‘lifehacking conference’. I had never been to a lifehacking conference, and expected it to maybe be a little bit fun, but a little bit hokey. It wasn’t hokey at all: it was pretty darn great, with high-quality people and really interesting content. Here’s what we talked about:
Abe Gong, “Using Force Multipliers for Willpower”
Abe is a data scientist focusing on wearables, and his message was the following: willpower is a limited resource, and we don’t overcome our problems through sheer willpower, but rather by being smart about how we use it. And the smartest use of willpower is probably to spend it changing our environment, because the subconscious mind is shortsighted and ornery, but takes cues from the world around it. And very importantly: technology can help with all of this.
Abe’s vision of how tech can help: he talked about how he uses reminders to get things done, how we can manufacture new habits and even extend old ones (‘after brushing teeth, do X’), and more generally, how to integrate the question of ‘are you doing what you’re supposed to be doing?’ into our day.
Abe gave us all a task: change your environment in 3 ways to make yourself more productive.
I’m hardly doing his talk justice- he had lots of good foundational thoughts and actionable advice. My favorite part was how, since willpower fluctuates throughout the day, we should use our high-willpower peaks (“temporary ability to do hard things”) to make small changes that help all day long. He didn’t have a technological “willpower killer app”, but he’s working on launching a new company, Metta Smartware. So maybe he’s got something up his sleeve.
One question I find myself wondering: what *is* willpower? This is not anything like a settled issue, judging by SSC’s recent post on the topic. Abe said it was nigh-impossible to increase it, and this seems to be consistent with the literature, but if we had a better understanding exactly how the willpower dynamic worked, could we do anything cool with that knowledge?
Abe’s conception of willpower: temporary opportunity to do hard things, a quantity that varies considerably throughout the day.
Julia Galef, “Using Trigger Action Plans To Facilitate Change”
Julia (of CFAR fame) started out with the ‘Parable of the Sphex’, a story about a really rigidly OCD insect that can’t change its habits even when it’s obvious the habit isn’t working. And, she says, we all are more like the Sphex than we realize: often we have trigger-response patterns that fail, badly, and we fail to change them. But we have the *capacity* to change them, unlike the poor Sphex, and changing habits with poor outcomes is an absolutely critical skill.
Julia talked about this in the context of ‘rewriting our code’ with trigger-action plans. These are of the form, ‘if X, then Y’ — overwrite a bad one with a good one. Now, Julia was very careful to say it’s not necessarily easy to do this: you’ve gotta be smart about how you build the new trigger, put effort into following the plan, and be smart about debugging it when it breaks. Her bottom-line message: this stuff isn’t magic… but it does work, and is a critical skill for self-improvement.
Julia’s challenge: come up with one TAP (Trigger->Action Plan) that will help with some goal.
Julia had a lot of good things to say, and I found myself agreeing with her advice and her attitude. Really a great speaker too, explaining a lot of dense stuff quickly and clearly (I think she does a lot of this with her organization, CFAR). Thinking about whether these techniques would work for me got me wondering:
I feel like belief in self-improvement / lifehacking methods is often a self-fulfilling prophecy: if you don’t believe they can work, they won’t, whereas if you believe in them, they can seem pretty easy. And I think it’s natural to sorta feel like people who aren’t doing them are just silly/lazy, and should just buckle down and do them and improve their lives. But belief is not as plastic as this! E.g., look at how sticky religious beliefs are, or the feeling of victimhood. We often don’t choose our beliefs, and even if they’re non-adaptive, mere self-fulfilling prophecies, or demonstrably false, they can be very durable.
So how do you get a person to viscerally believe lifehacks would work for them? I used to think lifehacks wouldn’t work for me, and they didn’t. Now, I think they sometimes can, and they sometimes do. How did I make this transition? I was talking with Will Eden about this, and mentioned my change in attitude probably came from hanging out with people I really really respected who really bought the concept of lifehacking. Over time, I slowly came around to the idea that no, it wasn’t just empty promises, and yes, it could work for me too. Our peer group really has a surprising amount of influence over our views— Will mentioned the saying, “You’re the average of the five people you spend the most time with,” and I think this might be a ‘hidden variable’ in a lot of lifehacking successes and failures.
Julia’s definition of a ‘Trigger-Action Plan” (TAP).
Aubrey de Grey, “Attempting to Defeat Aging”
Aubrey, a noted anti-aging researcher (whom I’ve heard speak quite a number of times now), opened with a contrast of infectious diseases vs. age-related diseases: medicine has been great at fixing the former, but absolutely terrible at fixing the latter.
Why? Aubrey says it’s because the way geriatric medicine thinks of age-related ‘diseases’ is all wrong. Aging is about accumulation of damage, and it’s unavoidable: metabolism leads to damage, damage leads to pathology. So these age-related diseases are just a side-effect of being alive. You can’t “cure” Alzheimer’s or cancer the way you can malaria, because you can’t “cure” this genetic damage, only go in and fix it after it occurs.
He contrasts geriatric medicine with gerontology, which he says is better, but still bad, because it’s more focused on description than intervention, and at best could only ‘slow down’ aging. Instead, he argues, we need a way to fix the damage— we can have old cars that function perfectly well a century after they were made— why not human bodies if we can give them periodic ‘tune-ups’?
Aubrey identifies seven types of damage, with options for repairing each. The argument is that if we fix all these seven things, we’ve effectively cured aging.
He finished with an exasperated plea: everybody he talks with seems entirely concerned with the sociological considerations of what could go wrong if we cured aging. He thinks we’re largely ignoring the sociological considerations of what could go *right* if we cured aging. Because aging really, really sucks!
I think de Grey is absolutely right on most things, especially his most important message: aging sucks and isn’t a law of nature we can’t do anything about! But I think his understanding of damage is limited and he’s much too attached to his original list of seven types of damage (he was proud of not changing his original list after 14+ years of progress in the field, but to me this is a bit of a warning sign that he’s not open to conceptual or factual revisions). One way his and my understanding diverge: he would never say we’re born with damage, a.k.a. genetic load, whereas I would.
So I think his list of 7 types of damage is both overly narrow (he doesn’t directly address inherited *or* accumulated DNA damage) yet also too pessimistic: genetic engineering techniques could not only fix multiple kinds of damage at once, but also improve a fundamental driver of aging, inherited ‘genetic spelling errors’. Still, one has to admire his dedication- I think it’s been a long, hard road for him to get as far as he has, soldiering on through (very often misguided) criticism after criticism, and I think his work has really moved peoples’ attitudes.
Aubrey’s action item: buy his book and/or give him lots of money for his research, which seemed a bit not-super-practically-useful-as-a-lifehack.
Aubrey’s “seven types of damage” that he thinks add up to aging.
Mikey Siegel, “Transformative Technology: An Evolution of Medicine”
Mikey, an engineer-turned-consciousness-hacker, opened with an observation and a question: “Happiness depends on more than external circumstances. What does it mean to be truly content?”
He talked about his experience at a meditation retreat, where he had a fundamental change in perspective: after countless hours of meditation, something just ‘clicked’ and he felt “okay-ness with everything”. The pain of cramped muscles, hunger, and whatnot were still there, but somehow not bad. And he thought: “We all have the capacity to feel this, regardless of circumstances.” And now he wants to help people reach that state with technology.
He spoke about how Enlightenment isn’t some special Buddhist thing, but rather part of being human. And so is suffering. Meditation is a technology we invented, some 3000 years ago, to optimize our mind-states… and it works. The problem is that meditation more-or-less involves being ‘unplugged’ from technology, which is legitimately hard in many circumstances. But maybe it can be adapted to work with, not against, the trends and constraints of modern life.
So, his thesis: If you can quantify enlightenment, you can increase it. We can quantify it with current technology- so that means it’s ripe for hacking. He ended with various comments on fMRI studies and the tech he’s worked on.
Mikey’s topic is near and dear to my heart. What is human flourishing? Can we measure it and optimize it? How can technology help? These are the right questions to ask. I am super glad he’s asking them.
However, I wonder if fMRI studies, brain region activity, and HeartMath’s tech are as good as they seem for understanding ‘how to measure enlightenment’: they often work, but will never give you a crisp understanding, and there are a million ways they can mislead. They’re pretty “leaky” abstractions. So I dig the first part of the message: let’s hack consciousness! But I’m leery of the second, that the paradigms we have available are a satisfying technical or philosophical basis for doing so. (I hope to put my money where my mouth is and publish something on this general topic soonish.) That said, a good implementation of a workable paradigm beats a non-existent implementation of a hypothetical perfect paradigm any day, and Mikey was working on some pretty amazing things last time I talked with him… things that I can’t wait to get my grubby paws on.
Adam Bornstein, “Engineering The Alpha In You”
Adam’s a fitness/diet guy: he worked at Men’s Health, wrote some books, and is here to talk about diets.
First, diets are tough. Lots of people try them every year, and generally they fail. Mostly, they don’t fail because they’re weak: they fail because the common wisdom about how to think about losing weight is so completely wrong.
Actionable advice that’s not wrong:
– Cardio is 4x more popular for weight loss, but weight training is 2x more effective.
– All diets can work, but they differ in how sustainable they are! Don’t be a masochist. “If you love pasta, don’t go on the paleo diet.”
– “The pleasure from two cheeseburgers is equivalent to one orgasm” – your body needs that pleasure somehow, so have the orgasm instead of the cheeseburgers.
– Weight loss happens in chunks, not gradually. If your diet seems to plateau, keep going, give your body time to recalibrate its metabolism, and you’ll start losing weight again.
– “If you make it hard to fail, you will succeed.” Use positive reinforcement, one step at a time, form good habits (eat more of the following at each meal: protein or vegetables). And set realistic expectations.
– Carbs are probably not as evil as the low-carb/paleo crowd portrays them. Everything in moderation.
Adam was a great speaker, even if his slides were a little haphazard. He made some comment about ‘all diets work equally, assuming equal macronutrient (carb/fat/protein) proportions’ that seemed odd to me, as the premise of the low-carb/paleo diets is that we eat too many carbs and should eat fewer. Still, his large themes and specific recommendations seemed pretty spot-on, and a lot smarter and wiser than the conventional wisdom. If I were a billionaire I’d hire him as my diet coach.
James Norris, “Using Technology To Upgrade Yourself”
James had a short talk to cap the more formal part of the day. His basic message: The ‘Limitless pill’ doesn’t exist, but 750+ lifehacking tools do, and they can do a lot of the same things, with the bonus of actually existing.
He listed, rapid-fire-style, some lifehacking tools he recommends (judged on efficacy, ease, and evidence):
Coach.me – instant coaching for any goal.
Beeminder – they take your money if you don’t achieve your goals.
Mindbloom – they visualize your goals as a tree; if you fail, it dies.
Stickk – a commitment tracker that takes your money if you fail, and either gives it to your choice of charity, or ‘anti-charity’ (organization you hate).
Mealsquares, Soylent, DIY Soylent – zero-effort nutritionally-complete(?) food-replacements.
Meal Snap – take a picture and it’ll tell you about your food.
Pre-made Paleo – a fully paleo meal shipped to your door.
Smart Body Analyzer – it tells you your weight… and shares it on Facebook.
7 Minute Workout, Stronglifts 5×5 – apps that coach you through a workout.
Wearables – lots of options, find the one you like.
Eye masks – wear them! They really work. [Mike’s note: this is something I’m definitely going to try.]
Sleep Genius, other sleep trackers – check the quality of your sleep.
Various options for a bright blue light that will increase alertness.
F.lux – leech the blue from your screen late at night. [Mike’s note: I think this really works.]
Headspace – guided meditation app
Muse – neurofeedback/mindfulness/basic rhythms hardware+app combo.
Khan academy – offers a smart, step-by-step guide through learning complex topics.
Anki – digital flashcards.
Imperative – you enter what you like, it figures out what career might resonate with you.
80,000 hours – similar theme, but optimizes for social impact.
Mint – pay bills easily.
Betterment – investing made easy.
Levelmoney – it’ll watch your bank account and tell you when to stop spending.
Wealthminder – smart management for investments.
Rescue Time – an app that tells you exactly how much time you waste.
Maelstrom – inbox 0.
Tinder, OkCupid – relatively easy ways to meet people romantically.
Combatant Gentleman, Trunk Club – click-to-door shopping for people who hate shopping.
Spirituality: psilocybin, aka mushrooms.
And finally, James mentioned The Kit, a site for open research on lifehacks, and showed a short video from BJ Fogg (message: look at change as a design challenge, not a willpower challenge… and focus on successes, because small changes have ripple effects).
I got the impression that James was very, very invested in everybody’s success, and you could just feel his genuineness. It was very cool. Though now I’ll live in fear of letting him down if I fail on my goals (50% joke).
Afterwards, we broke up into smaller groups, tasked with making 3 goals for the next 30 days. Of course, instead of doing this, I typed up this review. I know— I’m totally busted.
Sandberg discusses the uncertainties inherent in whether emulations are ‘sentient’ (i.e., whether WBEs are conscious and can feel pain) and moral considerations and strategies thereof. I’m generally rather skeptical of ethics papers, but the signal-to-noise ratio here is very good.
The most striking passage for me was this quote from Thomas Metzinger:
What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development – we urgently need some funding for this important and innovative kind of research!” You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby – no representatives in any ethics committee.
Metzinger probably overstates his case– the notion that WBEs “could” feel such pain is by no means a settled matter, and it feels like Sandberg includes this as a ‘worst-case’ scenario. But perhaps we can approach this question from a Bayesian perspective… if we take
[Suffering experienced by a mind insane-in-the-way-imperfect-WBEs-would-be-insane] × [probability that WBEs can actually suffer] × [number of imperfect WBEs that would be run]
My intuition: the first quantity seems very large; the second quantity is probably very low, but may not be; the third quantity is almost certainly very large. If I give the second quantity even a 5% chance (or even 1%), that’s a lot of expected suffering.
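To make the arithmetic concrete, here’s a minimal sketch of that expected-value estimate. All the numbers below are hypothetical placeholders I’ve chosen for illustration, not figures from Sandberg’s paper or my own estimate:

```python
# Back-of-the-envelope expected-suffering estimate for imperfect WBEs.
# Every number here is a made-up placeholder, only the structure matters.

def expected_suffering(severity: float, p_can_suffer: float, n_runs: float) -> float:
    """Product of the three bracketed quantities:
    severity      - suffering experienced by one insane/imperfect WBE (large?)
    p_can_suffer  - probability that WBEs can suffer at all (low, but nonzero?)
    n_runs        - number of imperfect WBEs that would be run (large?)
    """
    return severity * p_can_suffer * n_runs

# Even a 5% chance on the middle factor leaves a large expected value
# when the other two factors are big:
estimate = expected_suffering(severity=1_000.0, p_can_suffer=0.05, n_runs=10_000)
print(estimate)  # → 500000.0
```

The point of the sketch is just that the product stays uncomfortably large unless the probability term is driven very close to zero, which is exactly why pinning down that second quantity matters.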
Obviously this estimate is very rough, but it poses two questions: (1) how could we improve this sort of probabilistic estimate? And (2), *if* WBEs can suffer, how can we mitigate this*? I’m not convinced this issue matters, nor am I convinced this doesn’t matter. It’s not keeping me up at night… but it is an interesting question.
*Sandberg suggests the following:
Counterparts to practices that reduce suffering such as analgesic practices should be developed for use on emulated systems. Many of the best practices discussed in Schofield and Williams (2002) can be readily implemented: brain emulation technology by definition allows parameters of the emulation that can be changed to produce the same functional effect as the drugs have in the real nervous system. In addition, pain systems can in theory be perfectly controlled in emulations (for example by inhibiting their output), producing ‘perfect painkillers’.
However, this is all based on the assumption that we understand what is involved in the experience of pain: if there are undiscovered systems of suffering, careless research can produce undetected distress.
Last week I had the opportunity (thanks Alton!) to attend the launch event for the Palo Alto Longevity Prize. In short, it’s a $1m prize for substantial progress on “hacking the code of aging”. The Washington Post has more.
$1m is not a lot of money for this. But the hope is that by explicitly saying: “We want people to hack aging, and we will give anyone who does it a prestigious prize” it may attract key scientists already working on this, and also help legitimize the background assumption that aging *is* a disease which *should* be cured.
I have a lot of sympathy with this (h/t Nick Bostrom’s The Fable of the Dragon-Tyrant). I also see how curing aging could cause deep social problems. Some remarks I made in a facebook discussion:
I’m actually a little skeptical about the social effects of indefinite lifespan. Max Planck’s notion that “science advances one funeral at a time” is witty, yes, but I think it also largely rings true.
I think we can extend this and say a primary driver of social change is that there’s a constant influx of young people, with newer ideas and more open minds– and that the older, higher-status people who have their hands on the levers of power slowly die off to make room for them.
Death, being the great equalizer, may also incentivize altruism. He who dies with the most toys still dies, after all– unless indefinite lifespan is possible, at which point why give as much to charity, since you might need it later?
This system is terribly inefficient and rather cruel in many ways. But it’s also a time-tested process that sustains society’s dynamism. I am (of course!) in favor of curing aging, but I think there are many potential hidden ‘gotchas’ involved, and I don’t think they’ll be trivial. (Note: brain plasticity-enhancing drugs or prosthetics may help with some of these problems, but I think they’ll have to work around fundamental limitations.)
Trying to solve a scientific mystery is like starting a startup: timing is everything. Try to solve a problem too soon, and your efforts are wasted. Try to solve a problem too late, and you can’t contribute anything new. But there’s a sweet spot between when a problem becomes solvable in principle and when it becomes obvious, where efforts have the most leverage. A time when the problem you’re trying to solve still looks dreadfully out of reach, but nothing except inertia and muddled thinking is actually standing in your way. I think the mystery of pain and pleasure (i.e., what’s the intrinsic factor that makes things feel good or bad?) is in this sweet spot right now.
After the fold is a partial excerpt from a paper I’m working on. If it piques your interest, please do contact me.
Every year, literary-agent-to-famous-intellectuals John Brockman emails his 150+ clients a philosophical question to publicly weigh in on. The question he asked this year is, What should we be worried about?
I can’t say this list did much for my peace-of-mind, but it was interestingly diverse. Most comments aren’t anything you couldn’t find in the New York Times, but some seemed unusually pithy, prescient, or fresh. Here’s what stood out to me this year:
Rolf Dobelli on how goods that convey high status will always be in short supply:
As mammals, we are status seekers. Non-status seeking animals don’t attract suitable mating partners and eventually exit the gene pool. Thus goods that convey high status remain extremely important, yet out of reach for most of us. Nothing technology brings about will change that. Yes, one day we might re-engineer our cognition to reduce or eliminate status competition. But until that point, most people will have to live with the frustrations of technology’s broken promise. That is, goods and services will be available to everybody at virtually no cost. But at the same time, status-conveying goods will inch even further out of reach. That’s a paradox of material progress.
Yes, luxury used to define things that made life easier: clean water, central heating, fridges, cars, TVs, smart phones. Today, luxury tends to make your life harder. Displaying and safeguarding a Rauschenberg, learning to play polo and maintaining an adequate stable of horses, or obtaining access to visit the Pope are arduous undertakings. That doesn’t matter. Their very unattainability, the fact that these things are almost impossible to multiply, is what matters.
As global wealth increases, non-reproducible goods will appreciate exponentially. Too much status-seeking wealth and talent is eyeing too few status-delivering goods.