Why are humans good?

March 19, 2018  |  No Comments

Are humans worthy of colonizing the universe? Are we particularly awesome and benevolent, more so than a random mind sampled from mindspace?

The following isn’t a full argument, but I want to point toward two things humans seem to do:

  • First, our brains are set up in such a way that we stochastically seek out pleasant states of mind (I think this is a contingent fact about humans, not a universal for intelligent beings);
  • Second, we model other beings through empathy (i.e. we’re ‘qualia resonators’ and take on aspects of the phenomenology of those around us, insofar as we can perceive it).

These two things combine in a very interesting way: left to our individual devices, we tend to adapt our environment so that it contains many beings with positive valence, and their positive valence becomes our positive valence. The domestication and adaptation of wolves into dogs is a good example: dogs are basically four-legged happiness machines that we keep around because their fantastic qualia rub off on us.

Now of course there are a million caveats: we’re often bad at these things, empathy-based behavior has a ton of failure-modes, this doesn’t address hedonic set-points or game-theoretic traps, etc. But the way these two things interact is a big reason I like humanity, and want to preserve it.

Rescuing Philosophy

October 2, 2017  |  No Comments

I.

Philosophy has lost much of its energy, focus, and glamor in our modern era. What happened?

I’d suggest that five things went wrong:

1. Historical illegibility. Historically, ‘philosophy’ is what you do when you don’t know what to do. This naturally involves a lot of error. Once you figure out a core ontology and methodology for your topic, you stop calling it ‘philosophy’ and start calling it ‘science’, ‘linguistics’, ‘modal logic’, and so on. This is a very important, generative process, but it also means that if you look back at the history of philosophy, you basically only see ideas that are, technically speaking, wrong. This gives philosophers trying to ‘carry on the tradition’ a skewed understanding of what philosophy is, and how to do it.

2. Evaporative cooling. The fact that the most successful people, ideas, ontologies/methodologies, and tools tend to leave philosophy to found their own disciplines leads to long-term problems with quality. We can think of this as an evaporative cooling effect, where philosophy is left with the fakest problems and the worst, most incoherent and confused framings of whatever real problems remain.

3. Lack of feedback loops. Good abstract philosophy is really hard to do right, it's hard to distinguish good philosophy from bad, and the value of doing philosophy well isn't as immediately apparent as that of, say, chemistry. This leads to 'monkey politics' playing a large role in which ideas gain traction, which in turn drives a lot of top talent away.

4. Professionalization. Turning metaphysical confusion into something clear enough to build a new science on tends to be a very illegible process, full of false starts, recontextualizations, and unpredictable breakthroughs. This is really hard to teach students systematically, and even harder to plan an academic budget around. As philosophy became regularized and professionalized (something you can have a career in), it was also pushed toward top-down legibility. This resulted in less focus on grappling with metaphysical uncertainty and more focus on institutionally legible things such as scholarship, incremental research, teaching, and so on. Today, the discipline is often taught and organized academically as a 'history of ideas', based on how past philosophers carved various problem-spaces.

5. Postmodernism. Philosophy got hit pretty hard by postmodernism — and insofar as philosophy was the traditional keeper of theories of meaning, and insofar as postmodernism attacked all sources of meaning, philosophy suffered more than other disciplines. Likewise, academic philosophy has inherited all the problems of modern academia, of which there are many.

I’m painting with a broad brush here, and I should note that there are pockets of brilliant academic philosophers out there doing good, and even heroic, work in spite of these structural conditions. #notallphilosophers. But I don’t think many of these would claim they’re happy with modern academic philosophy’s structural conditions or trajectory.

And this matters, since philosophy is still necessary. There’s a *lot* of value in having a solid philosophical toolset, and having a healthy intellectual tradition of being mindful about ontological questions, epistemology, and so on. As David Pearce often points out, there’s no clean way to abstain from thorny philosophical issues: “The penalty of not doing philosophy isn’t to transcend it, but simply to give bad philosophical arguments a free pass.”


II.

So philosophy is broken. What do we do about it?

My friend Sebastian Marshall describes the ‘evaporative cooling’ philosophy has undergone, and suggests that we should try to rescue and reclaim philosophy:

So, this bastardized divorced left-behind philosophy will be here to stay in some form or fashion. We can’t get rid of it… but it’s also not necessary to get rid of it.

Turning to better news, even in mainstream philosophy, there are still sane and sound branches doing good work, like logic (which is basically math) and philosophy of mind (which is rapidly becoming neuroscience but which hasn’t yet evaporatively cooled out of philosophy).

It wouldn’t take very many people reclaiming the word philosophy as a love of wisdom to begin to turn things around.

Genuinely good philosophy is happening all over the place – though it's rarely done by people in fields that don't fight back at all. Indeed, you see computer programmers and financiers doing some of the best philosophy now – Paul Graham, Eliezer Yudkowsky, Ray Dalio, Charlie Munger, Nassim Taleb. When the computer scientist gets something wrong, their code doesn't work. When the financier gets something wrong, they lose a lot of money. Excellent philosophers still come out of the military – John Boyd and Hyman Rickover to name two recent Americans – and they come out of industrial engineering, like Eli Goldratt.

That these people are currently not classified as philosophers is simply an error – let the people doing uselessness in the towers call themselves “theoretical screwaroundists” or whatever other more palatable name they might come up with for themselves; genuine philosophy is alive and well, even as the word points to decayed and failing institutions.

There would clearly be enormous benefits to reclaiming the word “philosophy” for serious generative work. But I worry it’s going to be really hard.

Words have a lifecycle: often, they start out full of focus, wit, and life, able to vividly convey some key relationship. As time goes on, however, they lose this special something as they become normalized and regress toward the linguistic mean. Part of being a good writer is being in tune with which words and phrases still have life, and part of being a great writer (like Shakespeare) is minting new ones. My sense is that “philosophy” doesn't have much sparkle left, and it may be preferable to coin a new word.

Unfortunately, I don’t know of any better words that would encapsulate everything we’d want, and it may be very difficult to rally people behind a new term unless it’s really good. Even though academic philosophy is in terrible shape, the term ‘philosophy’ is still an effective Schelling point; still prime memetic real-estate. So, pending that better option, I think Sebastian’s right and we do need to do what we can to rescue & reclaim philosophy.

III.

How do we rescue philosophy? I think we need to approach this in terms of both individual tactics and collective strategy.

Individual tactics: survival and value creation in an unfriendly environment

Essentially, those that wish to make a notable, real, and durable contribution to philosophy should understand that association with academia is a double-edged sword. On the plus side, it can give people credibility, access, and fellowship with other academics, apprenticeships with established thinkers, maybe a steady income, and a great excuse to engage deeply with philosophy. On the other hand, by going into academic philosophy someone is essentially granting an unhealthy, partially moribund system broad influence over their local incentives, memetic milieu, and aesthetic. That’s a really big deal.

A personal aside: I struggled with how to navigate this while writing Principia Qualia. Clearly a new philosophical work on consciousness should engage with other work in the space – and there's a lot of good philosophy of mind out there, work I could probably use and build upon. At the same time, if philosophy's established ways of framing the problem of consciousness could lead to a solution, it would've been solved by now, and by using someone else's packaged ontology, I'd be at risk of importing their confusion into my foundation. With this in mind, I decided that being aware of key landmarks in philosophy was important, but being uncorrelated with philosophy's past framing was equally important, so I took a minimalist first-principles approach to building my framework and was very careful about what I imported from philosophy and how I used it.

Collective strategy: Schelling points & positive-feedback loops

The machinery of modern academic philosophy is going to resist attempts at reformation, as all rudderless bureaucratic entities do, but it won’t be proactively hostile about it, and in fact a lot of philosophers desperately want change. This means people can engage in open coordination on this problem. I.e., if we can identify Schelling points and plant rallying flags which can help coordinate with potential allies, we could probably make a collective push to fix certain problems or subfields (my sources say this sort of ‘benign takeover’ is already in motion in certain departments of bioethics).

Ultimately, though, fixing philosophy from within probably looks like a better option than it actually is, since (1) entryism is sneaky, always has a bad faith component, and is never as simple as it sounds (if nothing else, you have to fight off other entryists!), and (2) meme flow always goes both ways, and a plan to fix philosophy’s norms faster than its bad norms subvert us is inherently risky. Plenty of good people with magnificent intentions of fixing philosophy go into grad school, only to get lost in the noise, fail to catalyze a positive-feedback-loop, burn out, and give up years later. If you’re going into academic philosophy anyway, then definitely try to improve it, but don’t go into academic philosophy in order to improve it.

Instead, it may be better to build institutions that are separate from modern academic philosophy, and compete against it. Right now, academic philosophy looks “too big to fail”: a juggernaut that, for all its flaws, is still the go-to arbiter of success, authority, and truth in philosophy. And as long as academic philosophy can keep its people stably supplied with money and status, and people on the outside have to scramble for scraps, this isn't going to change much. But nothing is forever, there are hints of a shift, the world needs better alternatives, and now is a great time to start building them.

In short, I think the best way to fix philosophy may be to build new (or revive ancient) competing metaphors for what philosophy should be, to solve problems that modern philosophy can't, to offer a viable refuge for people fleeing academia's dysfunction, and to make academia come to us if it wants to stay relevant.


IV.

This is essentially what we’re working toward at the Qualia Research Institute: building something new, outside of academic philosophy in order to avoid its dysfunction, but still very much focused on a core problem of philosophy.

I see this happening elsewhere, too: LessWrong is essentially a “hard fork” of epistemology, with different problem-carvings, norms, and methods, which are collectively slowly maturing into the notion of executable philosophy. Likewise, Leverage Research may be crazy, but I’ve got to give them credit for being crazy in a novel and generative way, one which is uncorrelated with the more mundane, depressing ways modern academic philosophy & psychology are crazy. Honorable mentions include Exosphere, an intentional community I’m pretty sure Aristotle would have felt right at home in, and Alexandros Pagidas, a refugee from academic philosophy who’s trying to revive traditional Greek-style philosophical fight clubs (which, to be honest, sound kind of fun).

There are a lot of these little seeds around. Not all of them will sprout into something magnificent. But I think most are worth watering.

Why I think the Foundational Research Institute should rethink its approach

July 20, 2017  |  No Comments

The following is my considered evaluation of the Foundational Research Institute, circa July 2017. I discuss its goal, where I foresee things going wrong with how it defines suffering, and what it could do to avoid these problems.

TL;DR version: functionalism (“consciousness is the sum-total of the functional properties of our brains”) sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements.

Read More

Taking ‘brain waves’ seriously: Neuroacoustics

June 14, 2017  |  No Comments

Our research collective has been doing a lot of work touching on brain dynamics, resonance, and symmetry: see here and here (video). Increasingly, a new implicit working ontology I’m calling ‘Neuroacoustics’ is taking shape. This is a quick outline of that new ontology.

Read More

Why we seek out pleasure: the Symmetry Theory of Homeostatic Regulation

May 26, 2017  |  No Comments

Why do we seek out pleasure (what Freud called the “pleasure principle”)?

More accurately: why do we seem to seek out pleasure most of the time, but occasionally seem indifferent to it or even averse to it, e.g. in conditions such as anhedonia & depression?

My answer in a nutshell:

  1. Our brain networks are calibrated to the environment such that their symmetry gradients are tied to survival requirements. This is the core algorithm by which our brains regulate homeostasis. (Argued below)
  2. Symmetry in the mathematical representation of phenomenology corresponds to pleasure. (Argued in Principia Qualia)
  3. In combination, then, pleasure-seeking is what it feels like when our brain follows its core (ancestral/default) algorithm for maintaining homeostasis.

Read More

Symmetry Theory of Valence “Explain Like I’m 5” edition

April 15, 2017  |  No Comments

When someone on Reddit says “ELI5”, it means “I’m having a hard time understanding this, could you explain it to me like I’m 5 years old?”

Here’s my attempt at an “ELI5” for the Symmetry Theory of Valence (Part II of Principia Qualia).


We can think of conscious experiences as represented by a special kind of mathematical shape. The feeling of snowboarding down a familiar mountain early in the morning with the air smelling of pine trees is one shape; the feeling of waking up to your new kitten jumping on your chest and digging her claws into your blankets is another shape. There are as many shapes as there are possible experiences. 

Now, the interesting part: if we try to sort experiences by how good they feel, is there a pattern to which shapes represent more pleasant experiences? I think there is, and I think this depends on the symmetry of the shape.

There’s a lot of evidence for this, and if this is true, it’s super-important! It could lead to way better painkillers, actual cures for things like Depression, and it would also give us a starting point for turning consciousness research into a real science (just like how alchemy turned into chemistry). Basically, it would totally change the world.

But first things first: we need to figure out if it's true or not.

Principia Qualia: the executive summary

December 7, 2016  |  No Comments

Put simply, Principia Qualia (click for full version) is a blueprint for building a new Science of Qualia.

 


PQ begins by considering a rather modest question: what is emotional valence? What makes some things feel better than others?

This sounds like the sort of clear-cut puzzle affective neuroscience should be able to solve, yet all existing answers to this question are incoherent or circular. Giulio Tononi's Integrated Information Theory (IIT) is an example of the kind of quantitative theory which could in theory address valence in a principled way, but unfortunately the current version of IIT is both flawed and incomplete. I offer a framework to resolve and generalize IIT by distilling the problem of consciousness into eight discrete & modular sub-problems (of which IIT directly addresses five).

Finally, I return to valence, and offer a crisp, falsifiable hypothesis as to what it is in terms of something like IIT’s output, and discuss novel implications for neuroscience.

The most important takeaways are:

  1. My “Eight Problems” framework is a blueprint for a “full-stack” science of consciousness & qualia. Addressing all eight problems is both necessary & sufficient for ‘solving’ consciousness.
  2. One of IIT’s core shortcomings is that it doesn’t give any guidance for what its output means. I offer some useful heuristics.
  3. Emotional valence could be the easiest quale to reverse-engineer – the “C. elegans of qualia.”

Read More

Principia Qualia

November 16, 2016  |  2 Comments

I’m proud to announce the immediate availability of Principia Qualia (download link).

I started this project over six years ago, when trying to understand the problem of valence, or what makes some things feel better than others. Over time, I realized that I couldn’t solve valence without at least addressing consciousness, and the project grew into a “full-stack” approach to understanding qualia.

 

Context:

The way we speak about consciousness is akin to how we spoke about alchemy in the Middle Ages.

Over time, and with great effort, we turned alchemy into chemistry.

Principia Qualia attempts to start a similar process for consciousness: to offer a foundation for a new Science of Qualia.

 

Takeaways:

The first core takeaway is that ‘the problem of consciousness’ is actually made up of several smaller problems:

[Figure: the eight sub-problems of consciousness]

A solution to all of these subproblems would be both necessary and sufficient to completely solve the problem of consciousness.

 

The second core takeaway is that if we assume that consciousness is quantifiable (“for any conscious experience, there exists a mathematical object isomorphic to it”), then we can think of valence as some mathematical property or pattern within consciousness. The answer to what makes some experiences feel better than others will be found in math, not magic.

 

The third core takeaway is that, based on this framework, we can offer a specific hypothesis for exactly what valence is. Here’s a picture to frame this debate:

[Figure: Venn diagram framing the valence hypothesis]

… now, either my hypothesis (found in Section X) is correct, or it’s incorrect. But it’s testable.

 

This is just a taste: I cover a lot of ground in Principia Qualia, and I won't be able to adequately summarize it here. If you want to understand consciousness better, go read it!

I’ll be holding office hours in Berkeley next week for discussion & clarification of these topics– feel free to message me if you’d like to schedule something.

 

How understanding valence could help make future AIs safer

September 28, 2015  |  6 Comments

The two topics I’ve been thinking the most about lately:

  • What makes some patterns of consciousness feel better than others? I.e. can we crisply reverse-engineer what makes certain areas of mind-space pleasant, and other areas unpleasant?
  • If we make a smarter-than-human Artificial Intelligence, how do we make sure it has a positive impact? I.e., how do we make sure future AIs want to help humanity instead of callously using our atoms for their own inscrutable purposes? (for a good overview on why this is hard and important, see Wait But Why on the topic and Nick Bostrom’s book Superintelligence)

I hope to have something concrete to offer on the first question Sometime Soon™. And while I don’t have any one-size-fits-all answer to the second question, I do think the two issues aren’t completely unrelated. The following outlines some possible ways that progress on the first question could help us with the second question.

Read More

Effective Altruism, and building a better QALY

June 4, 2015  |  2 Comments

This originally started as a write-up for my friend James, but since it may be of general interest I decided to blog it here.

Effective Altruism (EA) is a progressive social movement that says altruism is good, but provably effective altruism is way better. According to EA this is particularly important, because charities can vary in effectiveness by 10x, 100x, or more, so being a little picky in which charity you give your money to can lead to much better outcomes.

To paraphrase Yudkowsky and Hanson, a lot of philanthropy tends to be less about accomplishing good, and more about signaling to others (and yourself) that you are a Good Person Who Helps People. Public philanthropy in particular can be a very effective way to ‘purchase’ social status and warm fuzzies, and while any philanthropist or charitable organization you ask would swear up and down that they were only interested in doing good, often the actual good can get lost in the shuffle.

EA tries to turn this around: it seldom discourages philanthropy, but it’s trying to build a culture and community where people gain more status and more warm fuzzies if their philanthropy is provably effective. The community is larger and more dynamic than I can give it credit for here, but some notable sites include givewell.org, givingwhatwecan.org, 80000hours.org, eaglobal.org, and eaventures.org. Peter Singer also has several books on the topic.

I love this movement, I endorse this movement, and I wish it had been founded a long time ago. But I have a few nits to pick about its quantitative foundation.

Okay, so EA likes measurements. What do they use?

The current gold standard for measuring ‘utility’, which the EA community enthusiastically uses, is the Quality-Adjusted Life Year (QALY). Here’s the University of Ottawa’s explanation:

QALYs are calculated as the average number of additional years of life gained from an intervention, multiplied by a utility judgment of the quality of life in each of those years. For example, a person might be placed on hypertension therapy for 30 years, which prolongs his life by 10 years at a slightly reduced quality level of 0.9. In addition, the need for continued drug therapy reduces his quality of life by 0.03. Hence, the QALYs gained would be 10 x 0.9 – 30 x 0.03 = 8.1 years. The valuations of quality may be collected from surveys; a subjective weight is given to indicate the quality or utility of a year of life with that disability.

The idea of QALYs can also be applied to years of life lost due to sickness, injuries or disability. This can illustrate the societal impact of disease.  For example, a year lived following a disabling stroke may be judged worth 0.8 normal years.  Imagine a person aged 55 years who lives for 10 years after a stroke and dies at age 65. In the absence of the stroke he might be expected to live to 72 years of age, so he has lost 7 potential years.  As his last 10 years were in poor health, they were quality-adjusted downward to an equivalent of 8 years, so the quality-adjusted years of life lost would be 7 + (10 – 8), or 9.
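To make the arithmetic in the quoted examples concrete, here's a minimal Python sketch (the function names and structure are my own illustration, not part of any standard QALY toolkit):

```python
# A minimal sketch of the QALY arithmetic from the quoted examples.
# Illustrative only; real QALY models weight each health state separately.

def qalys_gained(years_gained, quality_of_gained_years,
                 years_on_therapy, therapy_quality_penalty):
    """QALYs gained from an intervention (the hypertension example)."""
    return (years_gained * quality_of_gained_years
            - years_on_therapy * therapy_quality_penalty)

def qalys_lost(years_of_life_lost, impaired_years, impaired_quality):
    """Quality-adjusted years of life lost to illness (the stroke example)."""
    return years_of_life_lost + impaired_years * (1 - impaired_quality)

print(qalys_gained(10, 0.9, 30, 0.03))  # 8.1 years, matching the hypertension example
print(qalys_lost(7, 10, 0.8))           # 9.0 years, matching the stroke example
```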

In short, if we're giving money to charity, we should look for opportunities which give a high QALY-per-dollar ratio (e.g., malaria nets in Africa) and shun those that don't (e.g., invasive cancer surgery on bedridden 90-year-olds). Every so often, givewell.org updates its shortlist of trusted, validated charities that can produce lots of QALYs for your donation. There are also different 'flavors' of the QALY model that focus on different things: the Disability-Adjusted Life Year (DALY), the Wellbeing-Adjusted Life Year (WELBY), and so on.

The QALY, a foundation of wire, particle board, and spackle: way better than nothing, but not ideal.

It’s great we have these metrics: they make intuitive sense, they’re handy for quickly summarizing the expected benefit of various humanitarian interventions, and they’re a lot better than nothing. But jeez, they depend on a lot of complex, kludgy, top-down simplifications. For instance, most variants of the QALY tend to:
– treat utility, quality-of-life, absence of disease, and health as the same thing;
– assume health states are well-defined and have exact durations;
– have no accommodation for differences in how people deal with health burdens, or have different baselines;
– essentially ignore the effects of interventions that might do something other than reduce some burden, generally disease burden (may be arguable);
– have all the limitations and biases characteristic of subjective evaluations and self-reports;

… and so on. The QALY is a wonderfully useful tool, but it’s very far from “carving reality at the joints”, and it’s trivial to think up long lists of scenarios where it’d be a terrible metric.

To be fair, this problem (measuring well-being) is inherently difficult. It's not that people don't want to 'build a better QALY'; it's that innovation here is so constrained because it's very, very hard to design and validate something better, especially when you have something simple that already sorta works. (The fact that QALYs are so closely tied in with the medical establishment, never a paragon of ideological nimbleness, doesn't help things either.)

“Okay,” you say, “you're skeptical of using the QALY to evaluate the effect of charitable interventions on well-being. What should we use instead?”

In the short term: let’s augment/validate the QALY with some bottom-up, quantitative proxies for well-being.

What gives me hope that the QALY is a brief stop-over point toward a better metric is that there’s so much overlap between the EA community and the Quantified Self (QS) community, and the QS community is absolutely nuts about novel, clever, and useful ways to measure stuff. The QS meetups I’ve attended have had people present on things ranging from the relative effectiveness of 5+ dating sites for meeting women and the quality of follow-up interactions from each, to the effects of 30+ different foods on poop consistency. Quantifying and tracking well-being is well within the QS scope!

So how would a QSer measure well-being in a more bottom-up, data-driven way?

First, there’s a lot that a QALY-style factor analysis does right. It gives us an expected effect on well-being from the environment, and helps sort through causes of well-being or suffering, something that purely biological proxies can’t do. So we wouldn’t throw it out, but we should set our sights on augmenting and validating it with QS-style data collection.

Second, we’d look for some really good (and ideally, really really simple) bottom-up, biological proxies that track well-being. I suspect we could throw something together from stress hormone levels (e.g., cortisol) and their dynamic range, and possibly heart-rate variability.

Third, we’d crunch the numbers on possible biological proxies, pick the best ones, and validate the heck out of them. Easier said than done, but simple enough in principle.

Why do we need to bother with biological proxies? Because they’re strong in the same ways that top-down metrics tend to be weak. E.g.,
– they don’t involve any subjective component… they may not tell the whole story, but they don’t lie;
– they can be frequently measured, and frequent measurements are important, because they let you have tight feedback loops;
– they can measure and compensate for the phenomenon of hedonic adaptation;
– there’s a significant amount of literature in animal models that explores and validates this approach, so we wouldn’t have to start from scratch.

Obviously we couldn't do this sort of QS-style data collection on everybody all the time, but even if just a few people did it, it'd go pretty far toward improving the QALY for everybody else, by letting us see where our predictions of environmental effects on well-being hold up and where they break down, at least within the window of biological data we can directly measure.
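As a toy illustration of what that 'crunch the numbers and validate' step might look like, here's a short Python sketch. Everything here is an assumption for illustration: the choice of proxies, the way they're combined, and all of the numbers are made up; this is not a proposed metric.

```python
# Toy sketch: combine a few biological proxies into a composite well-being score,
# then check how well that composite tracks top-down QALY-style quality weights.
import numpy as np

def composite_proxy(cortisol, cortisol_range, hrv):
    """Z-score each proxy, flip sign where lower is better, and average.
    The sign conventions and equal weighting are assumptions."""
    z = lambda x: (np.asarray(x, float) - np.mean(x)) / np.std(x)
    # Higher cortisol ~ more stress (worse); wider dynamic range and higher HRV ~ better.
    return (-z(cortisol) + z(cortisol_range) + z(hrv)) / 3.0

# Hypothetical measurements for five participants (made-up numbers).
cortisol       = [14.0, 22.5, 9.8, 18.2, 12.1]
cortisol_range = [6.0, 2.5, 7.2, 3.8, 5.5]
hrv            = [55.0, 30.0, 70.0, 40.0, 60.0]
qaly_weights   = [0.85, 0.55, 0.95, 0.65, 0.80]  # top-down quality-of-life weights

proxy = composite_proxy(cortisol, cortisol_range, hrv)
r = np.corrcoef(proxy, qaly_weights)[0, 1]
print(f"correlation between composite proxy and QALY weights: {r:.2f}")
```

Where the proxy and the top-down weights disagree is exactly where one of them needs a closer look.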

In the medium term: let’s figure out some common unit by which to measure human and non-human animal well-being.

One of the principles of EA is that all sentient beings matter – that humans don't have a monopoly on ethical significance. I agree! But how do we compare different organisms?

First, those biological proxies from step (1) will definitely come in handy. Humans, dogs, cows, and chickens all share the same basic brain architecture, which implies that if we find something that’s a good proxy for well-being or suffering in humans, it should be at least a not-so-terrible proxy for the same in basically all non-human animals too.

But for numerical comparisons we’ll need more than just that. I suspect we’ll need some sort of a plausible method for adjusting for how sentient and/or capable of suffering something is. Most people, for instance, would agree that mice can suffer, and that mouse suffering is a bad thing. But can they suffer as much as humans can? Most people would say ‘no’, but we can’t put good numbers or confidence ranges on this. My intuition is that almost everything with a brain shares the basic emotional architecture, and so is technically capable of suffering, but various animals will differ significantly in their degree of consciousness, which acts as a multiplier of suffering/well-being for the purposes of ethics. E.g., ethical significance = [suffering]*[degree of consciousness]. The capacity to have a strong sense of self (i.e., the sense that there is a self that is being harmed) may also be important, which likely has a neuroanatomical basis. Call it the SQALY (Sentience and Quality-Adjusted Life-Year).
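Here's the SQALY idea as a tiny Python sketch; the multipliers below are placeholders purely for illustration (nobody has defensible numbers for them yet, which is the whole problem):

```python
# Toy illustration of the SQALY:
# ethical significance = [well-being change] * [degree of consciousness].
# The multipliers are made-up placeholders, not empirical estimates.

consciousness_multiplier = {
    "human": 1.0,
    "cow": 0.3,
    "chicken": 0.1,
    "mouse": 0.05,
}

def sqaly(qalys, species):
    """Sentience and Quality-Adjusted Life-Years: QALYs scaled by an assumed
    degree-of-consciousness multiplier for the species in question."""
    return qalys * consciousness_multiplier[species]

print(sqaly(2.0, "human"))    # 2.0
print(sqaly(2.0, "chicken"))  # 0.2
```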

The road forward here is murky, but important. I hope people are thinking about ways to quantify this, because one of EA’s core strengths is that it argues that human and animal well-being is in principle commensurable, and quantifiable. My off-the-cuff intuition is that a mash-up of Panksepp’s work on mapping structural commonalities in “emotion centers” across vertebrate brains, and a comparison of relative amounts of brain mass in areas thought most significant for consciousness, could bear fruit. But I dunno. It’ll be tough to put numbers on this.

In the long term: let’s build a rigorous mathematical foundation that explains what suffering *is*, what happiness *is*, and how we can *literally* do utilitarian calculus.

EA wants to maximize well-being and minimize suffering. Cool! But what are these things? These tend to be “I know it when I see it” phenomena, and usually that’s good enough. But eventually we’re gonna need a properly rigorous, crisp definition of what we’re after. To paraphrase Max Tegmark: ’Some arrangements of particles feel better than others. Why?’ — if pain and pleasure are patterns within conscious systems… what can we say about these patterns? Can we characterize them mathematically?

I’ve spent most of my creative output of the last couple years on this problem, and though I don’t have a complete answer yet, I think I know more than I did when I started. It’s not really ready to share publicly, but feel free to track me down in private and I can share what I think is reasonable to say, and what I think is plausible to speculate.

We don’t need an answer to this right away. But technology is taking us to strange places (see e.g., If a brain emulation has a headache, does it hurt?) where it’ll be handy to have some guide for detecting suffering other than our (very fallible) intuitions.

This is way too much to worry about! And isn’t it a better use of resources to actually help sentient creatures we *know* are in pain, rather than slog through all these abstract philosophical details that seem impossible to overcome?

Very possibly! But I don’t think this would be wasted effort, at all.

First: EA is all about metrics. True, improving these metrics can be very difficult, and very difficult to validate, but it’s the sort of thing that pays huge long-term dividends. And if we don’t do it, is there anybody else who will? Abe Lincoln has this (possibly apocryphal) quote, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” The QALY is EA’s axe, and it’s worth trying to sharpen.

Second, this all sounds very hard, but it may be easier than it seems. This stuff reminds me of what Paul Graham said about startups: the best time to work on something is when technology has made what you want to do possible, but before it becomes obvious. With all the consciousness research that's been going on, I suspect we have many more good tools to work with than most people – even (or perhaps especially?) ethicists – realize. We may not have solid answers to these questions, but we're increasingly getting a decent picture as to what good answers will look like.

Third, and I think just as important, I fear that EA is going to be vulnerable to mission drift and ideological hijacking. This is not a criticism of EA, but rather a comment that almost everybody would love to think they're engaging in effective altruism! Every activist and culture warrior thinks they're improving the world by their actions, and that their special brand of activism is the most provably effective way to do so. And so I think it's very plausible that people with external agendas will be highly attracted to EA (whether consciously or not), especially as it starts to gain traction. EA is going to need a sturdy rudder to avoid diversions, and a strong (yet not overactive!) immune system to fend off ideological hijacking. I can't help with the latter, but I do think improvements in the core metric EA uses could help with 'organic' long-term mission drift.

Concrete recommendations for EA:
There’s no one person ‘in charge’ of EA, so the best, most effective, and least annoying way to get something done is generally to organize it yourself. That said, here are a few things I think the community could (and should) do:
– Consider forming an "EA Foundational Research" team for the sort of inquiry that might not happen effectively on its own. It need not be super formal: even an 'EA Foundational Issues Journal Club' could be helpful and would be fun;
– Foster relationships with receptive scientific communities to help with each step (short/medium/long)… the way Randal Koene’s carboncopies coordinates research between labs might be an interesting model to emulate;
– Think deeply about how to shield EA from unwanted appropriation by culture warriors without being too insular, and/or how to pick the culture battles worth fighting;
– Keep being awesome.