Rescuing Philosophy

October 2, 2017

I.

Philosophy has lost much of its energy, focus, and glamor in our modern era. What happened?

I’d suggest that five things went wrong:

1. Historical illegibility. Historically, ‘philosophy’ is what you do when you don’t know what to do. This naturally involves a lot of error. Once you figure out a core ontology and methodology for your topic, you stop calling it ‘philosophy’ and start calling it ‘science’, ‘linguistics’, ‘modal logic’, and so on. This is a very important, generative process, but it also means that if you look back at the history of philosophy, you basically only see ideas that are, technically speaking, wrong. This gives philosophers trying to ‘carry on the tradition’ a skewed understanding of what philosophy is, and how to do it.

2. Evaporative cooling. The fact that the most successful people, ideas, ontologies/methodologies, and tools tend to leave philosophy to found their own disciplines leads to long-term problems with quality. We can think of this as an evaporative cooling effect, where philosophy is left with the fakest of problems and with the worst, most incoherent and confused framings of whatever real problems remain.

3. Lack of feedback loops. Good abstract philosophy is really hard to do right, it’s hard to distinguish good philosophy from bad, and the value of doing philosophy well isn’t as immediately apparent as in, say, chemistry. This leads to ‘monkey politics’ playing a large role in which ideas gain traction, which in turn drives a lot of top talent away.

4. Professionalization. Turning metaphysical confusion into something clear enough to build a new science on tends to be a very illegible process, full of false starts, recontextualizations, and unpredictable breakthroughs. This is really hard to systematically teach students how to do, and even harder to plan an academic budget around. As philosophy became regularized and professionalized (something you can have a career in), it was also pushed toward top-down legibility. This resulted in less focus on grappling with metaphysical uncertainty and more focus on institutionally legible things such as scholarship, incremental research, teaching, and so on. Today, the discipline is often taught and organized academically as a ‘history of ideas’, based on how past philosophers carved various problem-spaces.

5. Postmodernism. Philosophy got hit pretty hard by postmodernism — and insofar as philosophy was the traditional keeper of theories of meaning, and insofar as postmodernism attacked all sources of meaning, philosophy suffered more than other disciplines. Likewise, academic philosophy has inherited all the problems of modern academia, of which there are many.

I’m painting with a broad brush here, and I should note that there are pockets of brilliant academic philosophers out there doing good, and even heroic, work in spite of these structural conditions. #notallphilosophers. But I don’t think many of these would claim they’re happy with modern academic philosophy’s structural conditions or trajectory.

And this matters, since philosophy is still necessary. There’s a *lot* of value in having a solid philosophical toolset, and having a healthy intellectual tradition of being mindful about ontological questions, epistemology, and so on. As David Pearce often points out, there’s no clean way to abstain from thorny philosophical issues: “The penalty of not doing philosophy isn’t to transcend it, but simply to give bad philosophical arguments a free pass.”


II.

So philosophy is broken. What do we do about it?

My friend Sebastian Marshall describes the ‘evaporative cooling’ philosophy has undergone, and suggests that we should try to rescue and reclaim philosophy:

So, this bastardized divorced left-behind philosophy will be here to stay in some form or fashion. We can’t get rid of it… but it’s also not necessary to get rid of it.

Turning to better news, even in mainstream philosophy, there are still sane and sound branches doing good work, like logic (which is basically math) and philosophy of mind (which is rapidly becoming neuroscience but which hasn’t yet evaporatively cooled out of philosophy).

It wouldn’t take very many people reclaiming the word philosophy as a love of wisdom to begin to turn things around.

Genuinely good philosophy is happening all over the place – though it’s rarely from people in fields that don’t fight back at all. Indeed, you see computer programmers and financiers doing some of the best philosophy now – Paul Graham, Eliezer Yudkowsky, Ray Dalio, Charlie Munger, Nassim Taleb. When the computer scientist gets something wrong, their code doesn’t work. When the financier gets something wrong, they lose a lot of money. Excellent philosophers still come out of the military – John Boyd and Hyman Rickover to name two recent Americans – and they come out of industrial engineering, like Eli Goldratt.

That these people are currently not classified as philosophers is simply an error: let the people doing uselessness in the towers call themselves “theoretical screwaroundists” or whatever other more palatable name they might come up with for themselves; genuine philosophy is alive and well, even as the word points to decayed and failing institutions.

There would clearly be enormous benefits to reclaiming the word “philosophy” for serious generative work. But I worry it’s going to be really hard.

Words have a lifecycle: often they start out full of focus, wit, and life, able to vividly convey some key relationship. As time goes on, however, they lose this special something as they become normalized and regress toward the linguistic mean. Part of being a good writer is being in tune with which words & phrases still have life, and part of being a great writer (like Shakespeare) is minting new ones. My sense is that “philosophy” doesn’t have much sparkle left, and it may be preferable to coin a new word.

Unfortunately, I don’t know of any better words that would encapsulate everything we’d want, and it may be very difficult to rally people behind a new term unless it’s really good. Even though academic philosophy is in terrible shape, the term ‘philosophy’ is still an effective Schelling point; still prime memetic real estate. So, absent that better option, I think Sebastian’s right: we need to do what we can to rescue & reclaim philosophy.

III.

How do we rescue philosophy? I think we need to approach this in terms of both individual tactics and collective strategy.

Individual tactics: survival and value creation in an unfriendly environment

Essentially, those who wish to make a notable, real, and durable contribution to philosophy should understand that association with academia is a double-edged sword. On one hand, it can give people credibility, access, fellowship with other academics, apprenticeships with established thinkers, maybe a steady income, and a great excuse to engage deeply with philosophy. On the other hand, going into academic philosophy essentially grants an unhealthy, partially moribund system broad influence over one’s local incentives, memetic milieu, and aesthetic. That’s a really big deal.

A personal aside: I struggled with how to navigate this while writing Principia Qualia. Clearly a new philosophical work on consciousness should engage with other work in the space, and there’s a lot of good philosophy of mind out there, work I could probably use and build upon. At the same time, if philosophy’s established ways of framing the problem of consciousness could lead to a solution, it would’ve been solved by now, and by using someone else’s packaged ontology I’d risk importing their confusion into my foundation. With this in mind, I decided that being aware of key landmarks in philosophy was important, but that staying uncorrelated with philosophy’s past framings was equally important. So I took a minimalist, first-principles approach to building my framework, and was very careful about what I imported from philosophy and how I used it.

Collective strategy: Schelling points & positive-feedback loops

The machinery of modern academic philosophy is going to resist attempts at reformation, as all rudderless bureaucratic entities do, but it won’t be proactively hostile about it, and in fact a lot of philosophers desperately want change. This means people can engage in open coordination on this problem. I.e., if we can identify Schelling points and plant rallying flags to help coordinate with potential allies, we could probably make a collective push to fix certain problems or subfields (my sources say this sort of ‘benign takeover’ is already in motion in certain departments of bioethics).

Ultimately, though, fixing philosophy from within probably looks like a better option than it actually is, since (1) entryism is sneaky, always has a bad faith component, and is never as simple as it sounds (if nothing else, you have to fight off other entryists!), and (2) meme flow always goes both ways, and a plan to fix philosophy’s norms faster than its bad norms subvert us is inherently risky. Plenty of good people with magnificent intentions of fixing philosophy go into grad school, only to get lost in the noise, fail to catalyze a positive-feedback-loop, burn out, and give up years later. If you’re going into academic philosophy anyway, then definitely try to improve it, but don’t go into academic philosophy in order to improve it.

Instead, it may be better to build institutions that are separate from modern academic philosophy, and compete against it. Right now, academic philosophy looks “too big to fail”: a juggernaut that, for all its flaws, is still the go-to arbiter of success, authority, and truth in philosophy. And as long as academic philosophy can keep its people stably supplied with money and status, and people on the outside have to scramble for scraps, this isn’t going to change much. But nothing is forever, and there are hints of a shift; the world needs better alternatives, and now is a great time to start building them.

In short, I think the best way to fix philosophy may be to build new (or revive ancient) competing metaphors for what philosophy should be, to solve problems that modern philosophy can’t, to offer a viable refuge for people fleeing academia’s dysfunction, and to make academia come to us if it wants to stay relevant.


IV.

This is essentially what we’re working toward at the Qualia Research Institute: building something new, outside of academic philosophy in order to avoid its dysfunction, but still very much focused on a core problem of philosophy.

I see this happening elsewhere, too: LessWrong is essentially a “hard fork” of epistemology, with different problem-carvings, norms, and methods, which are collectively slowly maturing into the notion of executable philosophy. Likewise, Leverage Research may be crazy, but I’ve got to give them credit for being crazy in a novel and generative way, one which is uncorrelated with the more mundane, depressing ways modern academic philosophy & psychology are crazy. Honorable mentions include Exosphere, an intentional community I’m pretty sure Aristotle would have felt right at home in, and Alexandros Pagidas, a refugee from academic philosophy who’s trying to revive traditional Greek-style philosophical fight clubs (which, to be honest, sound kind of fun).

There are a lot of these little seeds around. Not all of them will sprout into something magnificent. But I think most are worth watering.

Why I think the Foundational Research Institute should rethink its approach

July 20, 2017

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

Read More

Taking ‘brain waves’ seriously: Neuroacoustics

June 14, 2017

Our research collective has been doing a lot of work touching on brain dynamics, resonance, and symmetry: see here and here (video). Increasingly, a new implicit working ontology I’m calling ‘Neuroacoustics’ is taking shape. This is a quick outline of that new ontology.

Read More

Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation

May 26, 2017

Why do we seek out pleasure (what Freud called the “pleasure principle”)?

More accurately: why do we seem to seek out pleasure most of the time, but occasionally seem indifferent to it or even averse to it, e.g. in conditions such as anhedonia & depression?

My answer in a nutshell:

  1. Our brain networks are calibrated to the environment such that their symmetry gradients are tied to survival requirements. This is the core algorithm by which our brains regulate homeostasis. (Argued below)
  2. Symmetry in the mathematical representation of phenomenology corresponds to pleasure. (Argued in Principia Qualia)
  3. In combination, then, pleasure-seeking is what it feels like when our brain follows its core (ancestral/default) algorithm for maintaining homeostasis. (A toy sketch of this dynamic follows below.)
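
To make this concrete, here’s a purely illustrative toy model (the symmetry measure, the update rule, and all numbers are my own stand-ins, not anything from the full post): treat a “brain network state” as a weighted connectivity matrix whose symmetry score rises as the system relaxes toward its homeostatic setpoint.

```python
import numpy as np

# Purely illustrative toy (my stand-in, not from the post): a "brain
# network state" as a weighted connectivity matrix W, with a symmetry
# score in [0, 1] measuring how close W is to its own transpose.
def symmetry_score(W: np.ndarray) -> float:
    return 1.0 - np.linalg.norm(W - W.T) / (2 * np.linalg.norm(W))

# Hypothetical homeostatic step: relax part of the way toward the
# nearest symmetric matrix, i.e. climb the symmetry gradient.
def homeostatic_step(W: np.ndarray, rate: float = 0.5) -> np.ndarray:
    symmetric_part = (W + W.T) / 2
    return W + rate * (symmetric_part - W)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))  # a "needy", asymmetric starting state
for t in range(5):
    print(f"t={t}  symmetry={symmetry_score(W):.3f}")
    W = homeostatic_step(W)  # "pleasure-seeking" = following this gradient
```

Running it shows the symmetry score climbing toward 1.0; on this toy reading, that climb is the “pleasure-seeking” of step 3.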

Read More

Symmetry Theory of Valence “Explain Like I’m 5” edition

April 15, 2017

When someone on Reddit says “ELI5”, it means “I’m having a hard time understanding this, could you explain it to me like I’m 5 years old?”

Here’s my attempt at an “ELI5” for the Symmetry Theory of Valence (Part II of Principia Qualia).


We can think of conscious experiences as represented by a special kind of mathematical shape. The feeling of snowboarding down a familiar mountain early in the morning with the air smelling of pine trees is one shape; the feeling of waking up to your new kitten jumping on your chest and digging her claws into your blankets is another shape. There are as many shapes as there are possible experiences. 

Now, the interesting part: if we try to sort experiences by how good they feel, is there a pattern to which shapes represent more pleasant experiences? I think there is, and I think this depends on the symmetry of the shape.
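
Here’s a toy illustration of the sorting idea (my own sketch, nothing from Principia Qualia): represent each “experience shape” as a small matrix and rank the shapes by a crude symmetry score.

```python
import numpy as np

# Toy sketch (mine, not from Principia Qualia): each "experience shape"
# is a small matrix; score 1.0 means identical to its own transpose.
def symmetry_score(shape: np.ndarray) -> float:
    return 1.0 - np.linalg.norm(shape - shape.T) / (2 * np.linalg.norm(shape))

rng = np.random.default_rng(42)
shapes = {
    "shape_A": rng.normal(size=(4, 4)),
    "shape_B": rng.normal(size=(4, 4)),
    "shape_C": np.ones((4, 4)),  # perfectly symmetric by construction
}
# Under the hypothesis, this ordering would track how pleasant each
# corresponding experience feels:
for name in sorted(shapes, key=lambda n: symmetry_score(shapes[n]), reverse=True):
    print(f"{name}: symmetry={symmetry_score(shapes[name]):.3f}")
```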

There’s a lot of evidence for this, and if this is true, it’s super-important! It could lead to way better painkillers, actual cures for things like Depression, and it would also give us a starting point for turning consciousness research into a real science (just like how alchemy turned into chemistry). Basically, it would totally change the world.

But first things first: we need to figure out if it’s true or not.

Principia Qualia: the executive summary

December 7, 2016

Put simply, Principia Qualia (click for full version) is a blueprint for building a new Science of Qualia.

 


PQ begins by considering a rather modest question: what is emotional valence? What makes some things feel better than others?

This sounds like the sort of clear-cut puzzle affective neuroscience should be able to solve, yet all existing answers to this question are incoherent or circular. Giulio Tononi’s Integrated Information Theory (IIT) is an example of the kind of quantitative theory which could, in theory, address valence in a principled way, but unfortunately the current version of IIT is both flawed and incomplete. I offer a framework to resolve and generalize IIT by distilling the problem of consciousness into eight discrete & modular sub-problems (of which IIT directly addresses five).

Finally, I return to valence, and offer a crisp, falsifiable hypothesis as to what it is in terms of something like IIT’s output, and discuss novel implications for neuroscience.

The most important takeaways are:

  1. My “Eight Problems” framework is a blueprint for a “full-stack” science of consciousness & qualia. Addressing all eight problems is both necessary & sufficient for ‘solving’ consciousness.
  2. One of IIT’s core shortcomings is that it doesn’t give any guidance for what its output means. I offer some useful heuristics.
  3. Emotional valence could be the easiest quale to reverse-engineer: the “C. elegans of qualia.”

Read More

Principia Qualia

November 16, 2016

I’m proud to announce the immediate availability of Principia Qualia (download link).

I started this project over six years ago, when trying to understand the problem of valence, or what makes some things feel better than others. Over time, I realized that I couldn’t solve valence without at least addressing consciousness, and the project grew into a “full-stack” approach to understanding qualia.

 

Context:

The way we speak about consciousness is akin to how we spoke about alchemy in the Middle Ages.

Over time, and with great effort, we turned alchemy into chemistry.

Principia Qualia attempts to start a similar process for consciousness: to offer a foundation for a new Science of Qualia.

 

Takeaways:

The first core takeaway is that ‘the problem of consciousness’ is actually made up of several smaller problems:

[Figure: the eight sub-problems of consciousness]

A solution to all of these subproblems would be both necessary and sufficient to completely solve the problem of consciousness.

 

The second core takeaway is that if we assume that consciousness is quantifiable (“for any conscious experience, there exists a mathematical object isomorphic to it”), then we can think of valence as some mathematical property or pattern within consciousness. The answer to what makes some experiences feel better than others will be found in math, not magic.

 

The third core takeaway is that, based on this framework, we can offer a specific hypothesis for exactly what valence is. Here’s a picture to frame this debate:

[Figure: Venn diagram framing the valence hypothesis]

… now, either my hypothesis (found in Section X) is correct, or it’s incorrect. But it’s testable.

 

This is just a taste: I cover a lot of ground in Principia Qualia, and I won’t be able to adequately summarize it here. If you want to understand consciousness better, go read it!

I’ll be holding office hours in Berkeley next week for discussion & clarification of these topics. Feel free to message me if you’d like to schedule something.

 

How understanding valence could help make future AIs safer

September 28, 2015

The two topics I’ve been thinking the most about lately:

  • What makes some patterns of consciousness feel better than others? I.e. can we crisply reverse-engineer what makes certain areas of mind-space pleasant, and other areas unpleasant?
  • If we make a smarter-than-human Artificial Intelligence, how do we make sure it has a positive impact? I.e., how do we make sure future AIs want to help humanity instead of callously using our atoms for their own inscrutable purposes? (for a good overview on why this is hard and important, see Wait But Why on the topic and Nick Bostrom’s book Superintelligence)

I hope to have something concrete to offer on the first question Sometime Soon™. And while I don’t have any one-size-fits-all answer to the second question, I do think the two issues aren’t completely unrelated. The following outlines some possible ways that progress on the first question could help us with the second question.

Read More

Effective Altruism, and building a better QALY

June 4, 2015

This originally started as a write-up for my friend James, but since it may be of general interest I decided to blog it here.

Effective Altruism (EA) is a progressive social movement that says altruism is good, but provably effective altruism is way better. According to EA this is particularly important because charities can vary in effectiveness by 10x, 100x, or more, so being a little picky about which charity you give your money to can lead to much better outcomes.

To paraphrase Yudkowsky and Hanson, a lot of philanthropy tends to be less about accomplishing good, and more about signaling to others (and yourself) that you are a Good Person Who Helps People. Public philanthropy in particular can be a very effective way to ‘purchase’ social status and warm fuzzies, and while any philanthropist or charitable organization you ask would swear up and down that they were only interested in doing good, often the actual good can get lost in the shuffle.

EA tries to turn this around: it seldom discourages philanthropy, but it’s trying to build a culture and community where people gain more status and more warm fuzzies if their philanthropy is provably effective. The community is larger and more dynamic than I can give it credit for here, but some notable sites include givewell.org, givingwhatwecan.org, 80000hours.org, eaglobal.org, and eaventures.org. Peter Singer also has several books on the topic.

I love this movement, I endorse this movement, and I wish it had been founded a long time ago. But I have a few nits to pick about its quantitative foundation.

Okay, so EA likes measurements. What do they use?

The current gold standard for measuring ‘utility’, which the EA community enthusiastically uses, is the Quality-Adjusted Life Year (QALY). Here’s the University of Ottawa’s explanation:

QALYs are calculated as the average number of additional years of life gained from an intervention, multiplied by a utility judgment of the quality of life in each of those years. For example, a person might be placed on hypertension therapy for 30 years, which prolongs his life by 10 years at a slightly reduced quality level of 0.9. In addition, the need for continued drug therapy reduces his quality of life by 0.03. Hence, the QALYs gained would be 10 x 0.9 – 30 x 0.03 = 8.1 years. The valuations of quality may be collected from surveys; a subjective weight is given to indicate the quality or utility of a year of life with that disability.

The idea of QALYs can also be applied to years of life lost due to sickness, injuries or disability. This can illustrate the societal impact of disease.  For example, a year lived following a disabling stroke may be judged worth 0.8 normal years.  Imagine a person aged 55 years who lives for 10 years after a stroke and dies at age 65. In the absence of the stroke he might be expected to live to 72 years of age, so he has lost 7 potential years.  As his last 10 years were in poor health, they were quality-adjusted downward to an equivalent of 8 years, so the quality-adjusted years of life lost would be 7 + (10 – 8), or 9.
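
To make the arithmetic concrete, here are both of the Ottawa examples as a short Python sketch (the function and variable names are mine):

```python
def qalys_gained(years_gained: float, quality: float,
                 therapy_years: float, therapy_burden: float) -> float:
    """QALYs from an intervention: extra life-years at a given quality,
    minus the quality cost of staying on therapy."""
    return years_gained * quality - therapy_years * therapy_burden

# Hypertension example: 10 extra years at quality 0.9, minus a 0.03
# quality reduction over 30 years of drug therapy.
print(qalys_gained(10, 0.9, 30, 0.03))  # -> 8.1

# Stroke example: quality-adjusted years of life lost.
years_lost = 72 - 65              # died at 65 instead of an expected 72
quality_loss = 10 - (10 * 0.8)    # 10 post-stroke years valued at 0.8
print(years_lost + quality_loss)  # -> 9.0
```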

In short, if we’re giving money to charity, we should look for opportunities which give a high QALY-per-dollar ratio (e.g., malaria nets in Africa) and shun those that don’t (e.g., invasive cancer surgery on bedridden 90-year-olds). Every so often, givewell.org updates its shortlist of trusted, validated charities that can produce lots of QALYs for your donation. There are also different ‘flavors’ of the QALY model that focus on different things: the Disability-Adjusted Life Year (DALY), the Wellbeing-Adjusted Life Year (WELBY), and so on.

The QALY, a foundation of wire, particle board, and spackle: way better than nothing, but not ideal.

It’s great we have these metrics: they make intuitive sense, they’re handy for quickly summarizing the expected benefit of various humanitarian interventions, and they’re a lot better than nothing. But jeez, they depend on a lot of complex, kludgy, top-down simplifications. For instance, most variants of the QALY tend to:
– treat utility, quality-of-life, absence of disease, and health as the same thing;
– assume health states are well-defined and have exact durations;
– have no accommodation for differences in how people deal with health burdens, or have different baselines;
– essentially ignore the effects of interventions that might do something other than reduce some burden, generally disease burden (may be arguable);
– have all the limitations and biases characteristic of subjective evaluations and self-reports;

… and so on. The QALY is a wonderfully useful tool, but it’s very far from “carving reality at the joints”, and it’s trivial to think up long lists of scenarios where it’d be a terrible metric.

To be fair, this problem (measuring well-being) is inherently difficult, and it’s not that people don’t want to ‘build a better QALY’; it’s that innovation here is so constrained, because it’s very, very hard to design and validate something better, especially when you have something simple that already sorta works. (The fact that QALYs are so closely tied in with the medical establishment, never a paragon of ideological nimbleness, doesn’t help things either.)

“Okay,” you say, “you’re skeptical of using the QALY to evaluate the effect of charitable interventions on well-being. What should we use instead?”

In the short term: let’s augment/validate the QALY with some bottom-up, quantitative proxies for well-being.

What gives me hope that the QALY is a brief stop-over point toward a better metric is that there’s so much overlap between the EA community and the Quantified Self (QS) community, and the QS community is absolutely nuts about novel, clever, and useful ways to measure stuff. The QS meetups I’ve attended have had people present on things ranging from the relative effectiveness of 5+ dating sites for meeting women and the quality of follow-up interactions from each, to the effects of 30+ different foods on poop consistency. Quantifying and tracking well-being is well within the QS scope!

So how would a QSer measure well-being in a more bottom-up, data-driven way?

First, there’s a lot that a QALY-style factor analysis does right. It gives us an expected effect on well-being from the environment, and helps sort through causes of well-being or suffering, something that purely biological proxies can’t do. So we wouldn’t throw it out, but we should set our sights on augmenting and validating it with QS-style data collection.

Second, we’d look for some really good (and ideally, really really simple) bottom-up, biological proxies that track well-being. I suspect we could throw something together from stress hormone levels (e.g., cortisol) and their dynamic range, and possibly heart-rate variability.

Third, we’d crunch the numbers on possible biological proxies, pick the best ones, and validate the heck out of them. Easier said than done, but simple enough in principle.
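
Here’s a minimal sketch of what such a composite proxy might look like, assuming RMSSD as the HRV measure and daily cortisol range as the stress measure; the reference values and the 50/50 weighting are placeholders, not validated choices:

```python
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """Heart-rate variability via RMSSD: root mean square of successive
    differences between beat-to-beat (RR) intervals, in milliseconds."""
    return float(np.sqrt(np.mean(np.diff(rr_intervals_ms) ** 2)))

def wellbeing_proxy(rr_intervals_ms: np.ndarray,
                    cortisol_samples: np.ndarray) -> float:
    """Toy composite: higher HRV reads as good, and a flattened daily
    cortisol range reads as bad. All constants are placeholders."""
    HRV_REF, CORTISOL_RANGE_REF = 40.0, 10.0
    hrv = rmssd(rr_intervals_ms)
    cortisol_range = float(np.ptp(cortisol_samples))  # daily peak-to-trough
    return 0.5 * (hrv / HRV_REF) + 0.5 * (cortisol_range / CORTISOL_RANGE_REF)

# One day of synthetic data for one person:
rr = np.random.default_rng(1).normal(800, 50, size=500)  # ms between beats
cortisol = np.array([14.0, 9.0, 6.0, 3.5])  # ug/dL, morning -> night
print(f"proxy score: {wellbeing_proxy(rr, cortisol):.2f}")
```

The point isn’t this particular formula; it’s that any candidate like it can be computed frequently and cheaply, which is what makes the validation step above feasible.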

Why do we need to bother with biological proxies? Because they’re strong in the same ways that top-down metrics tend to be weak. E.g.,
– they don’t involve any subjective component… they may not tell the whole story, but they don’t lie;
– they can be frequently measured, and frequent measurements are important, because they let you have tight feedback loops;
– they can measure and compensate for the phenomenon of hedonic adaptation;
– there’s a significant amount of literature in animal models that explores and validates this approach, so we wouldn’t have to start from scratch.

Obviously we couldn’t do this sort of QS-style data collection on everybody all the time, but even if just a few people did it, it’d go pretty far toward improving the QALY for everybody else, by seeing where our predictions of environmental effects on well-being hold up and where they break down, at least compared to the window of biological data we can directly measure.

In the medium term: let’s figure out some common unit by which to measure human and non-human animal well-being.

One of the principles of EA is that all sentient beings matter: that humans don’t have a monopoly on ethical significance. I agree! But how do we compare different organisms?

First, those biological proxies from step (1) will definitely come in handy. Humans, dogs, cows, and chickens all share the same basic brain architecture, which implies that if we find something that’s a good proxy for well-being or suffering in humans, it should be at least a not-so-terrible proxy for the same in basically all non-human animals too.

But for numerical comparisons we’ll need more than just that. I suspect we’ll need some sort of a plausible method for adjusting for how sentient and/or capable of suffering something is. Most people, for instance, would agree that mice can suffer, and that mouse suffering is a bad thing. But can they suffer as much as humans can? Most people would say ‘no’, but we can’t put good numbers or confidence ranges on this. My intuition is that almost everything with a brain shares the basic emotional architecture, and so is technically capable of suffering, but various animals will differ significantly in their degree of consciousness, which acts as a multiplier of suffering/well-being for the purposes of ethics. E.g., ethical significance = [suffering]*[degree of consciousness]. The capacity to have a strong sense of self (i.e., the sense that there is a self that is being harmed) may also be important, which likely has a neuroanatomical basis. Call it the SQALY (Sentience and Quality-Adjusted Life-Year).
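
As a toy sketch of the multiplier idea (every multiplier below is a placeholder I made up; estimating such numbers rigorously is exactly the open problem):

```python
# Toy SQALY sketch. The sentience multipliers are invented placeholders,
# not measured values; estimating them is the open problem discussed above.
SENTIENCE_MULTIPLIER = {"human": 1.0, "cow": 0.3, "chicken": 0.1, "mouse": 0.05}

def sqalys(species: str, quality_delta: float, years: float) -> float:
    # ethical significance = [suffering/well-being] * [degree of consciousness]
    return SENTIENCE_MULTIPLIER[species] * quality_delta * years

# Compare two hypothetical interventions on this toy scale:
print(sqalys("human", 0.2, 1.0))    # one human-year improved by 0.2
print(sqalys("chicken", 0.5, 4.0))  # four chicken-years improved by 0.5
```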

The road forward here is murky, but important. I hope people are thinking about ways to quantify this, because one of EA’s core strengths is that it argues that human and animal well-being is in principle commensurable, and quantifiable. My off-the-cuff intuition is that a mash-up of Panksepp’s work on mapping structural commonalities in “emotion centers” across vertebrate brains, and a comparison of relative amounts of brain mass in areas thought most significant for consciousness, could bear fruit. But I dunno. It’ll be tough to put numbers on this.

In the long term: let’s build a rigorous mathematical foundation that explains what suffering *is*, what happiness *is*, and how we can *literally* do utilitarian calculus.

EA wants to maximize well-being and minimize suffering. Cool! But what are these things? These tend to be “I know it when I see it” phenomena, and usually that’s good enough. But eventually we’re gonna need a properly rigorous, crisp definition of what we’re after. To paraphrase Max Tegmark: ‘Some arrangements of particles feel better than others. Why?’ If pain and pleasure are patterns within conscious systems, what can we say about these patterns? Can we characterize them mathematically?

I’ve spent most of my creative output of the last couple years on this problem, and though I don’t have a complete answer yet, I think I know more than I did when I started. It’s not really ready to share publicly, but feel free to track me down in private and I can share what I think is reasonable to say, and what I think is plausible to speculate.

We don’t need an answer to this right away. But technology is taking us to strange places (see e.g., If a brain emulation has a headache, does it hurt?) where it’ll be handy to have some guide for detecting suffering other than our (very fallible) intuitions.

This is way too much to worry about! And isn’t it a better use of resources to actually help sentient creatures we *know* are in pain, rather than slog through all these abstract philosophical details that seem impossible to overcome?

Very possibly! But I don’t think this would be wasted effort, at all.

First: EA is all about metrics. True, improving these metrics can be very difficult, and very difficult to validate, but it’s the sort of thing that pays huge long-term dividends. And if we don’t do it, is there anybody else who will? Abe Lincoln has this (possibly apocryphal) quote, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” The QALY is EA’s axe, and it’s worth trying to sharpen.

Second, this all sounds very hard, but it may be easier than it seems. This stuff reminds me of what Paul Graham said about startups: the best time to work on something is when technology has made what you want to do possible, but before it becomes obvious. With all the consciousness research that’s been going on, I suspect we have many more good tools to work with than most people (even, or perhaps especially, ethicists) realize. We may not have solid answers to these questions, but we’re increasingly getting a decent picture of what good answers will look like.

Third, and I think just as important, I fear that EA is going to be vulnerable to mission drift and ideological hijacking. This is not a criticism of EA, but rather a comment that almost everybody would love to think they’re engaging in effective altruism! Every activist and culture warrior thinks they’re improving the world by their actions, and that their special brand of activism is the most provably effective way to do so. And so I think it’s very plausible that people with external agendas will be highly attracted to EA (whether consciously or not), especially as it starts to gain traction. EA is going to need a sturdy rudder to avoid diversions, and a strong (yet not overactive!) immune system to fend off ideological hijacking. I can’t help with the latter, but I do think improvements in the core metric EA uses could help with ‘organic’ long-term mission drift.

Concrete recommendations for EA:
There’s no one person ‘in charge’ of EA, so the best, most effective, and least annoying way to get something done is generally to organize it yourself. That said, here are a few things I think the community could (and should) do:
– Consider forming an “EA Foundational Research” team for the sort of inquiry that might not happen effectively on its own. It need not be super formal: even an ‘EA Foundational Issues Journal Club’ could be helpful and would be fun;
– Foster relationships with receptive scientific communities to help with each step (short/medium/long)… the way Randal Koene’s carboncopies coordinates research between labs might be an interesting model to emulate;
– Think deeply about how to shield EA from unwanted appropriation by culture warriors without being too insular, and/or how to pick the culture battles worth fighting;
– Keep being awesome.

Spark Weekend SF Review

March 22, 2015

I went to Spark Weekend SF last weekend, a ‘lifehacking conference’. I had never been to a lifehacking conference before, and expected it to be maybe a little bit fun, but also a little bit hokey. It wasn’t hokey at all: it was pretty darn great, with high-quality people and really interesting content. Here’s what we talked about:

 

==============================
Abe Gong, “Using Force Multipliers for Willpower”
==============================

Abe is a data scientist focusing on wearables, and his message was the following: willpower is a limited resource, and we don’t overcome our problems through sheer willpower, but rather by being smart about how we use it. And the smartest use of willpower is probably to spend it changing our environment, because the subconscious mind is shortsighted and ornery, but takes cues from the world around it. And very importantly: technology can help with all of this.

Abe’s vision of how tech can help: he talked about how he uses reminders to get things done, how we can manufacture new habits and even extend old ones (‘after brushing teeth, do X’), and, more generally, how to integrate the question ‘are you doing what you’re supposed to be doing?’ into our day.

Abe gave us all a task: change your environment in 3 ways to make yourself more productive.

Comments:

I’m hardly doing his talk justice: he had lots of good foundational thoughts and actionable advice. My favorite part was how, since willpower fluctuates throughout the day, we should use our high-willpower peaks (“temporary ability to do hard things”) to make small changes that help all day long. He didn’t have a technological “willpower killer app”, but he’s working on launching a new company, Metta Smartware. So maybe he’s got something up his sleeve.

One question I find myself wondering: what *is* willpower? This is not anything like a settled issue, judging by SSC’s recent post on the topic. Abe said it was nigh-impossible to increase it, and this seems to be consistent with the literature, but if we had a better understanding of exactly how the willpower dynamic worked, could we do anything cool with that knowledge?


Abe’s conception of willpower: temporary opportunity to do hard things, a quantity that varies considerably throughout the day.

 

==============================
Julia Galef, “Using Trigger Action Plans To Facilitate Change”
==============================

Julia (of CFAR fame) started out with the ‘Parable of the Sphex’, a story about a really rigidly OCD insect that can’t change its habits even when it’s obvious the habit isn’t working. And, she says, we all are more like the Sphex than we realize: often we have trigger-response patterns that fail, badly, and we fail to change them. But we have the *capacity* to change them, unlike the poor Sphex, and changing habits with poor outcomes is an absolutely critical skill.

Julia talked about this in the context of ‘rewriting our code’ with trigger-action plans. These are of the form ‘if X, then Y’: overwrite a bad one with a good one. Now, Julia was very careful to say it’s not necessarily easy to do this: you’ve got to be smart about how to build a new trigger, put effort into following the plan, and be smart about debugging it when it breaks. Her bottom-line message: this stuff isn’t magic… but it does work, and it’s a critical skill for self-improvement.

Julia’s challenge: come up with one TAP (Trigger->Action Plan) that will help with some goal.

Comments:
Julia had a lot of good things to say, and I found myself agreeing with her advice and her attitude. Really a great speaker too, explaining a lot of dense stuff quickly and clearly (I think she does a lot of this with her organization, CFAR). Thinking about whether these techniques would work for me got me wondering:

I feel like belief in self-improvement / lifehacking methods is often a self-fulfilling prophecy: if you don’t believe they can work, they won’t, whereas if you believe in them, they can seem pretty easy, and I think it’s natural to sorta feel like people who aren’t doing them are just silly/lazy, and should just buckle down and do them and improve their lives. But belief is not as plastic as this! E.g., look at how sticky religious beliefs are, or the feeling of victimhood. We often don’t choose our beliefs, and even if they’re non-adaptive, mere self-fulfilling prophecies, or demonstrably false, they can be very durable.

So how do you get a person to viscerally believe lifehacks would work for them? I used to think lifehacks wouldn’t work for me, and they didn’t. Now, I think they sometimes can, and they sometimes do. How did I make this transition? I was talking with Will Eden about this, and mentioned my change in attitude probably came from hanging out with people I really really respected who really bought the concept of lifehacking. Over time, I slowly came around to the idea that no, it wasn’t just empty promises, and yes, it could work for me too. Our peer group really has a surprising amount of influence over our views— Will mentioned the saying, “You’re the average of the five people you spend the most time with,” and I think this might be a ‘hidden variable’ in a lot of lifehacking successes and failures.


Julia’s definition of a ‘Trigger-Action Plan’ (TAP).

 

==============================
Aubrey de Grey, “Attempting to Defeat Aging”
==============================

Aubrey, a noted anti-aging researcher (whom I’ve heard speak quite a number of times now), opened with a contrast of infectious diseases vs. age-related diseases: medicine has been great at fixing the former, but absolutely terrible at fixing the latter.

Why? Aubrey says it’s because the way geriatric medicine thinks of age-related ‘diseases’ is all wrong. Aging is about accumulation of damage, and it’s unavoidable: metabolism leads to damage, damage leads to pathology. So these age-related diseases are just a side-effect of being alive. You can’t “cure” Alzheimer’s or cancer the way you can malaria, because you can’t “cure” this genetic damage, only go in and fix it after it occurs.

He contrasts geriatric medicine with gerontology, which he says is better, but still bad, because it’s more focused on description than intervention, and at best could only ‘slow down’ aging. Instead, he argues, we need a way to fix the damage. We can keep old cars functioning perfectly well a century after they were made, so why not human bodies, if we give them periodic ‘tune-ups’?

Aubrey identifies seven types of damage, with options for repairing each. The argument is that if we fix all these seven things, we’ve effectively cured aging.

He finished with an exasperated plea: everybody he talks with seems entirely concerned with the sociological considerations of what could go wrong if we cured aging. He thinks we’re largely ignoring the sociological considerations of what could go *right* if we cured aging. Because aging really, really sucks!

Comments:

I think de Grey is absolutely right on most things, especially his most important message: aging sucks and isn’t a law of nature we can’t do anything about! But I think his understanding of damage is limited, and he’s much too attached to his original list of seven types of damage (he was proud of not changing his original list after 14+ years of progress in the field, but to me this is a bit of a warning sign that he’s not open to conceptual or factual revisions). One way his understanding and mine diverge: he would never say we’re born with damage, a.k.a. genetic load, whereas I would.

So I think his list of seven types of damage is both overly narrow (he doesn’t directly address inherited *or* accumulated DNA damage) yet also too pessimistic: genetic engineering techniques could not only fix multiple kinds of damage at once, but also improve a fundamental driver of aging, inherited ‘genetic spelling errors’. Still, one has to admire his dedication: I think it’s been a long, hard road for him to get as far as he has, soldiering on through (very often misguided) criticism after criticism, and I think his work has really moved people’s attitudes.

Aubrey’s action item was to buy his book and/or give him lots of money for his research, which seemed a bit not-super-practically-useful as a lifehack.


Aubrey’s “seven types of damage” that he thinks add up to aging.

 

==============================
Mikey Siegel, “Transformative Technology: An Evolution of Medicine”
==============================

Mikey, an engineer-turned-consciousness-hacker, opened with an observation and a question: “Happiness depends on more than external circumstances. What does it mean to be truly content?”

He talked about his experience at a meditation retreat, where he had a fundamental change in perspective: after countless hours of meditation, something just ‘clicked’ and he felt “okay-ness with everything”. The pain of cramped muscles, hunger, and whatnot was still there, but somehow not bad. And he thought: “We all have the capacity to feel this, regardless of circumstances.” And now he wants to help people reach that state with technology.

He spoke about how Enlightenment isn’t some special Buddhist thing, but rather part of being human. And so is suffering. Meditation is a technology we invented, some 3000 years ago, to optimize our mind-states… and it works. The problem is that meditation more-or-less involves being ‘unplugged’ from technology, which is legitimately hard in many circumstances. But maybe it can be adapted to work with, not against, the trends and constraints of modern life.

So, his thesis: if you can quantify enlightenment, you can increase it. We can quantify it with current technology, so that means it’s ripe for hacking. He ended with various comments on fMRI studies and the tech he’s worked on.

Comments:

Mikey’s topic is near and dear to my heart. What is human flourishing? Can we measure it and optimize it? How can technology help? These are the right questions to ask. I am super glad he’s asking them.

However, I wonder if fMRI studies, brain-region activity, and HeartMath’s tech are as good as they seem for understanding ‘how to measure enlightenment’: these approaches often work, but they’ll never give you a crisp understanding, and there are a million ways they can mislead. They’re pretty “leaky” abstractions. So I dig the first part of the message (let’s hack consciousness!), but I’m leery of the second: that the paradigms we have available are a satisfying technical or philosophical basis for doing so. (I hope to put my money where my mouth is and publish something on this general topic soonish.) That said, a good implementation of a workable paradigm beats a non-existent implementation of a hypothetical perfect paradigm any day, and Mikey was working on some pretty amazing things last time I talked with him… things that I can’t wait to get my grubby paws on.

 

==============================
Adam Bornstein, “Engineering The Alpha In You”
==============================

Adam’s a fitness/diet guy: he worked at Men’s Health, wrote some books, and is here to talk about diets.

First, diets are tough. Lots of people try them every year, and generally they fail. Mostly, people don’t fail because they’re weak: they fail because the common wisdom about how to think about losing weight is so completely wrong.

Actionable advice that’s not wrong:
– Cardio is 4x more popular for weight loss, but weight training is 2x more effective.
– All diets can work, but they differ in how sustainable they are! Don’t be a masochist. “If you love pasta, don’t go on the paleo diet.”
– “The pleasure from two cheeseburgers is equivalent to one orgasm” – your body needs that pleasure somehow, so have the orgasm instead of the cheeseburgers.
– Weight loss happens in chunks, not gradually. If your diet seems to plateau, keep going, give your body time to recalibrate its metabolism, and you’ll start losing weight again.
– “If you make it hard to fail, you will succeed.” Use positive reinforcement, one step at a time, and form good habits (eat more of the following at each meal: protein or vegetables). And set realistic expectations.
– Carbs are probably not as evil as the low-carb/paleo crowd portrays them. Everything in moderation.

Comments:
Adam was a great speaker, even if his slides were a little haphazard. He made some comment about ‘all diets work equally, assuming equal macronutrient (carb/fat/protein) proportions’ that seemed odd to me, as the premise of the low-carb/paleo diets is that we eat too many carbs and should eat fewer. Still, his large themes and specific recommendations seemed pretty spot-on, and a lot smarter and wiser than the conventional wisdom. If I were a billionaire, I’d hire him as my diet coach.

==============================
James Norris, “Using Technology To Upgrade Yourself”
==============================

James had a short talk to cap the more formal part of the day. His basic message: The ‘Limitless pill’ doesn’t exist, but 750+ lifehacking tools do, and they can do a lot of the same things, with the bonus of actually existing.

He listed, rapid-fire-style, some lifehacking tools he recommends (judged on efficacy, ease, and evidence):

Goals:
Coach.me – instant coaching for any goal.
Beeminder – they take your money if you don’t achieve your goals.
Mindbloom – they visualize your goals like a tree; if you fail, it dies.
Stickk – a commitment tracker that takes your money if you fail, and either gives it to your choice of charity, or ‘anti-charity’ (organization you hate).

Food:
Mealsquares, Soylent, DIY Soylent – zero-effort nutritionally-complete(?) food-replacements.
Meal Snap – take a picture and it’ll tell you about your food.
Pre-made Paleo – a fully paleo meal shipped to your door.
Smart Body Analyzer – it tells you your weight… and shares it on Facebook.

Exercise:
7 Minute Workout, Stronglifts 5×5 – apps that coach you through a workout.
Wearables – lots of options, find the one you like.

Sleep/energy:
Eye masks – wear them! They really work. [Mike’s note: this is something I’m definitely going to try.]
Sleep Genius, other sleep trackers – check the quality of your sleep.
Various options for a bright blue light that will increase alertness.
F.lux – leech the blue from your screen late at night. [Mike’s note: I think this really works.]

Mental well-being:
Headspace – guided meditation app
Muse – neurofeedback/mindfulness/basic rhythms hardware+app combo.

Cognitive:
Khan Academy – offers a smart, step-by-step guide through learning complex topics.
Anki – digital flashcards.
Imperative – you enter what you like, it figures out what career might resonate with you.
80,000 hours – similar theme, but optimizes for social impact.
Mint – pay bills easily.
Betterment – investing made easy.
Levelmoney – it’ll watch your bank account and tell you when to stop spending.
Wealthminder – smart management for investments.
Rescue Time – an app that tells you exactly how much time you waste.
Maelstrom – inbox 0.

Dating:
Tinder, OkCupid – relatively easy ways to meet people romantically.

Appearance:
Combatant Gentleman, Trunk Club – click-to-door shopping for people who hate shopping.

Spirituality: psilocybin, aka mushrooms.

And finally, James mentioned The Kit, a site for open research on lifehacks, and showed a short video from BJ Fogg (message: look at change as a design challenge, not a willpower challenge… and focus on successes, because small changes have ripple effects).

I got the impression that James was very, very invested in everybody’s success, and you could just feel his genuineness. It was very cool. Though now I’ll live in fear of letting him down if I fail on my goals (50% joke).

Afterwards, we broke up into smaller groups, tasked with making 3 goals for the next 30 days. Of course, instead of doing this, I typed up this review. I know, I’m totally busted.