How understanding valence could help make future AIs safer

September 28, 2015  |  2 Comments

The two topics I’ve been thinking the most about lately:

  • What makes some patterns of consciousness feel better than others? I.e., can we crisply reverse-engineer what makes certain areas of mind-space pleasant, and other areas unpleasant?
  • If we make a smarter-than-human Artificial Intelligence, how do we make sure it has a positive impact? I.e., how do we make sure future AIs want to help humanity instead of callously using our atoms for their own inscrutable purposes? (for a good overview on why this is hard and important, see Wait But Why on the topic and Nick Bostrom’s book Superintelligence)

I hope to have something concrete to offer on the first question Sometime Soon™. And while I don’t have any one-size-fits-all answer to the second question, I do think the two issues aren’t completely unrelated. The following outlines some possible ways that progress on the first question could help us with the second question.

Read More

Effective Altruism, and building a better QALY

June 4, 2015  |  2 Comments

This originally started as a write-up for my friend James, but since it may be of general interest I decided to blog it here.

Effective Altruism (EA) is a progressive social movement that says altruism is good, but provably effective altruism is way better. According to EA this is particularly important, because charities can vary in effectiveness by 10x, 100x, or more, so being a little picky about which charity you give your money to can lead to much better outcomes.

To paraphrase Yudkowsky and Hanson, a lot of philanthropy tends to be less about accomplishing good, and more about signaling to others (and yourself) that you are a Good Person Who Helps People. Public philanthropy in particular can be a very effective way to ‘purchase’ social status and warm fuzzies, and while any philanthropist or charitable organization you ask would swear up and down that they were only interested in doing good, often the actual good can get lost in the shuffle.

EA tries to turn this around: it seldom discourages philanthropy, but it’s trying to build a culture and community where people gain more status and more warm fuzzies if their philanthropy is provably effective. The community is larger and more dynamic than I can give it credit for here, and Peter Singer has several books on the topic.

I love this movement, I endorse this movement, and I wish it had been founded a long time ago. But I have a few nits to pick about its quantitative foundation.

Okay, so EA likes measurements. What do they use?

The current gold standard for measuring ‘utility’, which the EA community enthusiastically uses, is the Quality-Adjusted Life Year (QALY). Here’s the University of Ottawa’s explanation:

QALYs are calculated as the average number of additional years of life gained from an intervention, multiplied by a utility judgment of the quality of life in each of those years. For example, a person might be placed on hypertension therapy for 30 years, which prolongs his life by 10 years at a slightly reduced quality level of 0.9. In addition, the need for continued drug therapy reduces his quality of life by 0.03. Hence, the QALYs gained would be 10 x 0.9 – 30 x 0.03 = 8.1 years. The valuations of quality may be collected from surveys; a subjective weight is given to indicate the quality or utility of a year of life with that disability.

The idea of QALYs can also be applied to years of life lost due to sickness, injuries or disability. This can illustrate the societal impact of disease.  For example, a year lived following a disabling stroke may be judged worth 0.8 normal years.  Imagine a person aged 55 years who lives for 10 years after a stroke and dies at age 65. In the absence of the stroke he might be expected to live to 72 years of age, so he has lost 7 potential years.  As his last 10 years were in poor health, they were quality-adjusted downward to an equivalent of 8 years, so the quality-adjusted years of life lost would be 7 + (10 – 8), or 9.
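To make the quoted arithmetic concrete, here’s a minimal sketch in Python of both calculations. The function names and parameters are my own framing; the numbers come straight from the two examples above.

```python
def qalys_gained(years_gained, quality, years_on_therapy=0.0, therapy_burden=0.0):
    """QALYs gained: extra life-years weighted by their quality, minus
    the quality cost of the therapy itself over the years it's taken."""
    return years_gained * quality - years_on_therapy * therapy_burden

def qalys_lost(potential_years_lost, years_lived_ill, quality_while_ill):
    """Quality-adjusted years of life lost: years lost outright, plus the
    quality discount on the years lived in poor health."""
    return potential_years_lost + years_lived_ill * (1 - quality_while_ill)

# Hypertension example: 10 extra years at quality 0.9, with 30 years of
# drug therapy at a 0.03 quality cost.
print(qalys_gained(10, 0.9, years_on_therapy=30, therapy_burden=0.03))  # ≈ 8.1

# Stroke example: 7 potential years lost, plus 10 years lived at quality 0.8.
print(qalys_lost(7, 10, 0.8))  # ≈ 9.0
```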

In short, if we’re giving money to charity, we should look for opportunities which give a high QALY-per-dollar ratio (e.g., malaria nets in Africa) and shun those that don’t (e.g., invasive cancer surgery on bedridden 90-year-olds). Every so often, GiveWell updates its shortlist of trusted, validated charities that can produce lots of QALYs for your donation. There are also different ‘flavors’ of the QALY model that focus on different things: the Disability-Adjusted Life Year (DALY), the Wellbeing-Adjusted Life Year (WELBY), and so on.

The QALY, a foundation of wire, particle board, and spackle: way better than nothing, but not ideal.

It’s great we have these metrics: they make intuitive sense, they’re handy for quickly summarizing the expected benefit of various humanitarian interventions, and they’re a lot better than nothing. But jeez, they depend on a lot of complex, kludgy, top-down simplifications. For instance, most variants of the QALY tend to:
– treat utility, quality-of-life, absence of disease, and health as the same thing;
– assume health states are well-defined and have exact durations;
– have no accommodation for differences in how people cope with health burdens, or for differences in baselines;
– essentially ignore the effects of interventions that do something other than reduce some burden, generally disease burden (this may be arguable);
– have all the limitations and biases characteristic of subjective evaluations and self-reports;

… and so on. The QALY is a wonderfully useful tool, but it’s very far from “carving reality at the joints”, and it’s trivial to think up long lists of scenarios where it’d be a terrible metric.

To be fair, this problem (measuring well-being) is inherently difficult, and it’s not that people don’t want to ‘build a better QALY’; it’s that innovation here is heavily constrained, because it’s very, very hard to design and validate something better, especially when you have something simple that already sorta works. (The fact that QALYs are so closely tied in with the medical establishment, never a paragon of ideological nimbleness, doesn’t help things either.)

“Okay,” you say, “you’re skeptical of using the QALY to evaluate the effect of charitable interventions on well-being. What should we use instead?”

In the short term: let’s augment/validate the QALY with some bottom-up, quantitative proxies for well-being.

What gives me hope that the QALY is a brief stop-over point toward a better metric is that there’s so much overlap between the EA community and the Quantified Self (QS) community, and the QS community is absolutely nuts about novel, clever, and useful ways to measure stuff. The QS meetups I’ve attended have had people present on things ranging from the relative effectiveness of 5+ dating sites for meeting women and the quality of follow-up interactions from each, to the effects of 30+ different foods on poop consistency. Quantifying and tracking well-being is well within the QS scope!

So how would a QSer measure well-being in a more bottom-up, data-driven way?

First, there’s a lot that a QALY-style factor analysis does right. It gives us an expected effect on well-being from the environment, and helps sort through causes of well-being or suffering, something that purely biological proxies can’t do. So we wouldn’t throw it out, but we should set our sights on augmenting and validating it with QS-style data collection.

Second, we’d look for some really good (and ideally, really really simple) bottom-up, biological proxies that track well-being. I suspect we could throw something together from stress hormone levels (e.g., cortisol) and their dynamic range, and possibly heart-rate variability.

Third, we’d crunch the numbers on possible biological proxies, pick the best ones, and validate the heck out of them. Easier said than done, but simple enough in principle.
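As a concrete (toy) version of steps two and three, the sketch below computes two standard candidate signals, RMSSD heart-rate variability and daily cortisol range, combines them with placeholder weights, and “validates” the composite against self-reports. Everything here (the weights, the fake data, the composite itself) is an illustrative assumption, not a validated metric.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeats,
    a standard heart-rate-variability statistic."""
    return float(np.sqrt(np.mean(np.diff(rr_intervals_ms) ** 2)))

def cortisol_range(samples):
    """Daily dynamic range of cortisol; a healthy rhythm peaks in the
    morning and falls at night, so a flattened range is a warning sign."""
    return float(np.max(samples) - np.min(samples))

def wellbeing_proxy(rr_intervals_ms, cortisol_samples, w_hrv=0.5, w_cort=0.5):
    """Toy composite proxy: a weighted sum of the two signals. The weights
    are placeholders that the validation step would actually have to fit."""
    return w_hrv * rmssd(rr_intervals_ms) + w_cort * cortisol_range(cortisol_samples)

# Step three in miniature: check how well the proxy tracks self-reported
# well-being across subjects (fake data for five subjects).
rng = np.random.default_rng(0)
proxies = [wellbeing_proxy(rng.normal(800, 50, 300), rng.normal(12, 3, 8))
           for _ in range(5)]
self_reports = [6.5, 7.0, 5.5, 8.0, 6.0]  # e.g., 0-10 life-satisfaction scores
print(np.corrcoef(proxies, self_reports)[0, 1])  # correlation with self-reports
```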

Why do we need to bother with biological proxies? Because they’re strong in the same ways that top-down metrics tend to be weak. E.g.,
– they don’t involve any subjective component… they may not tell the whole story, but they don’t lie;
– they can be frequently measured, and frequent measurements are important, because they let you have tight feedback loops;
– they can measure and compensate for the phenomenon of hedonic adaptation;
– there’s a significant amount of literature in animal models that explores and validates this approach, so we wouldn’t have to start from scratch.

Obviously we couldn’t do this sort of QS-style data collection on everybody all the time. But even if just a few people did it, it’d go pretty far toward improving the QALY for everybody else, by showing where our predictions of environmental effects on well-being hold up and where they break down, at least within the window of biological data we can directly measure.

In the medium term: let’s figure out some common unit by which to measure human and non-human animal well-being.

One of the principles of EA is that all sentient beings matter: humans don’t have a monopoly on ethical significance. I agree! But how do we compare different organisms?

First, those biological proxies from step (1) will definitely come in handy. Humans, dogs, cows, and chickens all share the same basic brain architecture, which implies that if we find something that’s a good proxy for well-being or suffering in humans, it should be at least a not-so-terrible proxy for the same in basically all non-human animals too.

But for numerical comparisons we’ll need more than just that. I suspect we’ll need some sort of a plausible method for adjusting for how sentient and/or capable of suffering something is. Most people, for instance, would agree that mice can suffer, and that mouse suffering is a bad thing. But can they suffer as much as humans can? Most people would say ‘no’, but we can’t put good numbers or confidence ranges on this. My intuition is that almost everything with a brain shares the basic emotional architecture, and so is technically capable of suffering, but various animals will differ significantly in their degree of consciousness, which acts as a multiplier of suffering/well-being for the purposes of ethics. E.g., ethical significance = [suffering]*[degree of consciousness]. The capacity to have a strong sense of self (i.e., the sense that there is a self that is being harmed) may also be important, which likely has a neuroanatomical basis. Call it the SQALY (Sentience and Quality-Adjusted Life-Year).
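As a toy version of that multiplier idea, in code. The multipliers below are pure guesses for illustration; putting defensible numbers and confidence ranges on them is exactly the unsolved problem.

```python
def sqaly(qalys, degree_of_consciousness):
    """Toy SQALY: QALYs scaled by a [0, 1] consciousness multiplier, per
    ethical significance = [suffering] * [degree of consciousness]."""
    return qalys * degree_of_consciousness

# Illustrative guesses only; these numbers carry no empirical weight.
guessed_multiplier = {"human": 1.0, "cow": 0.3, "chicken": 0.1, "mouse": 0.05}
for species, m in guessed_multiplier.items():
    print(species, sqaly(10.0, m))  # ten full-quality life-years, weighted
```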

The road forward here is murky, but important. I hope people are thinking about ways to quantify this, because one of EA’s core strengths is that it argues that human and animal well-being is in principle commensurable, and quantifiable. My off-the-cuff intuition is that a mash-up of Panksepp’s work on mapping structural commonalities in “emotion centers” across vertebrate brains, and a comparison of relative amounts of brain mass in areas thought most significant for consciousness, could bear fruit. But I dunno. It’ll be tough to put numbers on this.

In the long term: let’s build a rigorous mathematical foundation that explains what suffering *is*, what happiness *is*, and how we can *literally* do utilitarian calculus.

EA wants to maximize well-being and minimize suffering. Cool! But what are these things? These tend to be “I know it when I see it” phenomena, and usually that’s good enough. But eventually we’re gonna need a properly rigorous, crisp definition of what we’re after. To paraphrase Max Tegmark: ’Some arrangements of particles feel better than others. Why?’ — if pain and pleasure are patterns within conscious systems… what can we say about these patterns? Can we characterize them mathematically?

I’ve spent most of my creative output of the last couple years on this problem, and though I don’t have a complete answer yet, I think I know more than I did when I started. It’s not really ready to share publicly, but feel free to track me down in private and I can share what I think is reasonable to say, and what I think is plausible to speculate.

We don’t need an answer to this right away. But technology is taking us to strange places (see e.g., If a brain emulation has a headache, does it hurt?) where it’ll be handy to have some guide for detecting suffering other than our (very fallible) intuitions.

This is way too much to worry about! And isn’t it a better use of resources to actually help sentient creatures we *know* are in pain, rather than slog through all these abstract philosophical details that seem impossible to overcome?

Very possibly! But I don’t think this would be wasted effort, at all.

First: EA is all about metrics. True, improving these metrics can be very difficult, and improvements can be very difficult to validate, but it’s the sort of thing that pays huge long-term dividends. And if we don’t do it, is there anybody else who will? Abe Lincoln has this (possibly apocryphal) quote: “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.” The QALY is EA’s axe, and it’s worth trying to sharpen.

Second, this all sounds very hard, but it may be easier than it seems. This stuff reminds me of what Paul Graham said about startups: the best time to work on something is when technology has made what you want to do possible, but before it becomes obvious. With all the consciousness research that’s been going on, I suspect we have many more good tools to work with than most people, even (or perhaps especially?) ethicists, realize. We may not have solid answers to these questions, but we’re increasingly getting a decent picture as to what good answers will look like.

Third, and I think just as important, I fear that EA is going to be vulnerable to mission drift and ideological hijacking. This is not a criticism of EA, but rather a comment that almost everybody would love to think they’re engaging in effective altruism! Every activist and culture warrior thinks they’re improving the world by their actions, and that their special brand of activism is the most provably effective way to do so. And so I think it’s very plausible that people with external agendas will be highly attracted to EA (whether consciously or not), especially as it starts to gain traction. EA is going to need a sturdy rudder to avoid diversions, and a strong (yet not overactive!) immune system to fend off ideological hijacking. I can’t help with the latter, but I do think improvements in the core metric EA uses could help with ‘organic’ long-term mission drift.

Concrete recommendations for EA:
There’s no one person ‘in charge’ of EA, so the best, most effective, and least annoying way to get something done is generally to organize it yourself. That said, here are a few things I think the community could (and should) do:
– Consider forming an “EA Foundational Research” team for the sort of inquiry that might not happen effectively on its own. It need not be super formal; even an ‘EA Foundational Issues Journal Club’ could be helpful and would be fun;
– Foster relationships with receptive scientific communities to help with each step (short/medium/long)… the way Randal Koene’s carboncopies coordinates research between labs might be an interesting model to emulate;
– Think deeply about how to shield EA from unwanted appropriation by culture warriors without being too insular, and/or how to pick the culture battles worth fighting;
– Keep being awesome.

Spark Weekend SF Review

March 22, 2015  |  No Comments

I went to Spark Weekend SF last weekend, a ‘lifehacking conference’. I had never been to one before, and expected it to be maybe a little fun, but also a little hokey. It wasn’t hokey at all: it was pretty darn great, with high-quality people and really interesting content. Here’s what we talked about:


Abe Gong, “Using Force Multipliers for Willpower”

Abe is a data scientist focusing on wearables, and his message was the following: willpower is a limited resource, and we don’t overcome our problems through sheer willpower, but rather by being smart about how we use it. And the smartest use of willpower is probably to spend it changing our environment, because the subconscious mind is shortsighted and ornery, but takes cues from the world around it. And very importantly: technology can help with all of this.

Abe’s vision of how tech can help: he talked about how he uses reminders to do things, how we can manufacture new habits and even extend old ones (‘after brushing teeth, do X’), and, more generally, how to integrate the question of ‘are you doing what you’re supposed to be doing?’ into our day.

Abe gave us all a task: change your environment in 3 ways to make yourself more productive.


I’m hardly doing his talk justice; he had lots of good foundational thoughts and actionable advice. My favorite part was how, since willpower fluctuates throughout the day, we should use our high-willpower peaks (“temporary ability to do hard things”) to make small changes that help all day long. He didn’t have a technological “willpower killer app”, but he’s working on launching a new company, Metta Smartware. So maybe he’s got something up his sleeve.

One question I find myself wondering: what *is* willpower? This is not anything like a settled issue, judging by SSC’s recent post on the topic. Abe said it was nigh-impossible to increase it, and this seems to be consistent with the literature, but if we had a better understanding of exactly how the willpower dynamic worked, could we do anything cool with that knowledge?


Abe’s conception of willpower: temporary opportunity to do hard things, a quantity that varies considerably throughout the day.


Julia Galef, “Using Trigger Action Plans To Facilitate Change”

Julia (of CFAR fame) started out with the ‘Parable of the Sphex’, a story about a really rigidly OCD insect that can’t change its habits even when it’s obvious the habit isn’t working. And, she says, we all are more like the Sphex than we realize: often we have trigger-response patterns that fail, badly, and we fail to change them. But we have the *capacity* to change them, unlike the poor Sphex, and changing habits with poor outcomes is an absolutely critical skill.

Julia talked about this in the context of ‘rewriting our code’ with trigger-action plans. These are of the form ‘if X, then Y’ — overwrite a bad one with a good one. Now, Julia was very careful to say it’s not necessarily easy to do this: you’ve gotta be smart about how you build a new trigger, diligent about following the plan, and smart about debugging it when it breaks. Her bottom-line message: this stuff isn’t magic… but it does work, and is a critical skill for self-improvement.

Julia’s challenge: come up with one TAP (Trigger->Action Plan) that will help with some goal.

Julia had a lot of good things to say, and I found myself agreeing with her advice and her attitude. Really a great speaker too, explaining a lot of dense stuff quickly and clearly (I think she does a lot of this with her organization, CFAR). Thinking about whether these techniques would work for me got me wondering:

I feel like belief in self-improvement / lifehacking methods is often a self-fulfilling prophecy: if you don’t believe they can work, they won’t, whereas if you believe in them, they can seem pretty easy. And I think it’s natural for believers to sorta feel that people who aren’t doing them are just silly or lazy, and should just buckle down, do them, and improve their lives. But belief is not as plastic as this! E.g., look at how sticky religious beliefs are, or the feeling of victimhood. We often don’t choose our beliefs, and even if they’re non-adaptive, mere self-fulfilling prophecies, or demonstrably false, they can be very durable.

So how do you get a person to viscerally believe lifehacks would work for them? I used to think lifehacks wouldn’t work for me, and they didn’t. Now, I think they sometimes can, and they sometimes do. How did I make this transition? I was talking with Will Eden about this, and mentioned my change in attitude probably came from hanging out with people I really really respected who really bought the concept of lifehacking. Over time, I slowly came around to the idea that no, it wasn’t just empty promises, and yes, it could work for me too. Our peer group really has a surprising amount of influence over our views— Will mentioned the saying, “You’re the average of the five people you spend the most time with,” and I think this might be a ‘hidden variable’ in a lot of lifehacking successes and failures.


Julia’s definition of a ‘Trigger-Action Plan’ (TAP).


Aubrey de Grey, “Attempting to Defeat Aging”

Aubrey, a noted anti-aging researcher (whom I’ve heard speak quite a number of times now) opened with a contrast of infectious diseases vs. age-related diseases: medicine has been great at fixing the former, but absolutely terrible at fixing the latter.

Why? Aubrey says it’s because the way geriatric medicine thinks of age-related ‘diseases’ is all wrong. Aging is about the accumulation of damage, and it’s unavoidable: metabolism leads to damage, damage leads to pathology. So these age-related diseases are just a side-effect of being alive. You can’t “cure” Alzheimer’s or cancer the way you can malaria, because you can’t “cure” this damage: you can only go in and fix it after it occurs.

He contrasts geriatric medicine with gerontology, which he says is better, but still bad, because it’s more focused on description than intervention, and at best could only ‘slow down’ aging. Instead, he argues, we need a way to fix the damage— we can have old cars that function perfectly well a century after they were made— so why not human bodies, if we can give them periodic ‘tune-ups’?

Aubrey identifies seven types of damage, with options for repairing each. The argument is that if we fix all seven, we’ve effectively cured aging.

He finished with an exasperated plea: everybody he talks with seems entirely concerned with the sociological considerations of what could go wrong if we cured aging. He thinks we’re largely ignoring the sociological considerations of what could go *right* if we cured aging. Because aging really, really sucks!


I think de Grey is absolutely right on most things, especially his most important message: aging sucks and isn’t a law of nature we can’t do anything about! But I think his understanding of damage is limited, and he’s much too attached to his original list of seven types of damage (he was proud of not having changed the list after 14+ years of progress in the field, but to me this is a bit of a warning sign that he’s not open to conceptual or factual revisions). One way his understanding and mine diverge: he would never say we’re born with damage, a.k.a. genetic load, whereas I would.

So I think his list of seven types of damage is both overly narrow (he doesn’t directly address inherited *or* accumulated DNA damage) yet also too pessimistic: genetic engineering techniques could not only fix multiple kinds of damage at once, but also improve a fundamental driver of aging, inherited ‘genetic spelling errors’. Still, one has to admire his dedication: I think it’s been a long, hard road for him to get as far as he has, soldiering on through (very often misguided) criticism after criticism, and I think his work has really moved people’s attitudes.

Aubrey’s action item was to buy his book and/or give him lots of money for his research, which seemed a bit not-super-practically-useful as a lifehack.


Aubrey’s “seven types of damage” that he thinks add up to aging.


Mikey Siegel, “Transformative Technology: An Evolution of Medicine”

Mikey, an engineer-turned-consciousness-hacker, opened with an observation and a question: “Happiness depends on more than external circumstances. What does it mean to be truly content?”

He talked about his experience at a meditation retreat, where he had a fundamental change in perspective: after countless hours of meditation, something just ‘clicked’ and he felt “okay-ness with everything”. The pain of cramped muscles, hunger, and whatnot was still there, but somehow not bad. And he thought: “We all have the capacity to feel this, regardless of circumstances.” And now he wants to help people reach that state with technology.

He spoke about how Enlightenment isn’t some special Buddhist thing, but rather part of being human. And so is suffering. Meditation is a technology we invented, some 3,000 years ago, to optimize our mind-states… and it works. The problem is that meditation more-or-less involves being ‘unplugged’ from technology, which is legitimately hard in many circumstances. But maybe it can be adapted to work with, not against, the trends and constraints of modern life.

So, his thesis: if you can quantify enlightenment, you can increase it. We can quantify it with current technology, so that means it’s ripe for hacking. He ended with various comments on fMRI studies and the tech he’s worked on.


Mikey’s topic is near and dear to my heart. What is human flourishing? Can we measure it and optimize it? How can technology help? These are the right questions to ask. I am super glad he’s asking them.

However, I wonder if fMRI studies, brain-region activity, and HeartMath’s tech are as good as they seem for understanding ‘how to measure enlightenment’: they often work, but they’ll never give you a crisp understanding, and there are a million ways they can mislead. They’re pretty “leaky” abstractions. So I dig the first part of the message: let’s hack consciousness! But I’m leery of the second: that the paradigms we have available are a satisfying technical or philosophical basis for doing so. (I hope to put my money where my mouth is and publish something on this general topic soonish.) That said, a good implementation of a workable paradigm beats a non-existent implementation of a hypothetical perfect paradigm any day, and Mikey was working on some pretty amazing things last time I talked with him… things that I can’t wait to get my grubby paws on.


Adam Bornstein, “Engineering The Alpha In You”

Adam’s a fitness/diet guy: he worked at Men’s Health, wrote some books, and is here to talk about diets.

First, diets are tough. Lots of people try them every year, and generally they fail. Mostly, people don’t fail because they’re weak: they fail because the common wisdom about how to think about losing weight is so completely wrong.

Actionable advice that’s not wrong:
– Cardio is 4x more popular for weight loss, but weight training is 2x more effective.
– All diets can work, but they differ in how sustainable they are! Don’t be a masochist. “If you love pasta, don’t go on the paleo diet.”
– “The pleasure from two cheeseburgers is equivalent to one orgasm” – your body needs that pleasure somehow, so have the orgasm instead of the cheeseburgers.
– Weight loss happens in chunks, not gradually. If your diet seems to plateau, keep going, give your body time to recalibrate its metabolism, and you’ll start losing weight again.
– “If you make it hard to fail, you will succeed.” Use positive reinforcement, take one step at a time, and form good habits (eat more of the following at each meal: protein or vegetables). And set realistic expectations.
– Carbs are probably not as evil as the low-carb/paleo crowd portrays them. Everything in moderation.

Adam was a great speaker, even if his slides were a little haphazard. He made some comment about ‘all diets work equally, assuming equal macronutrient (carb/fat/protein) proportions’ that seemed odd to me, as the premise of the low-carb/paleo diets is that we eat too many carbs and should eat fewer. Still, his larger themes and specific recommendations seemed pretty spot-on, and a lot smarter and wiser than the conventional wisdom. If I were a billionaire I’d hire him as my diet coach.

James Norris, “Using Technology To Upgrade Yourself”

James had a short talk to cap the more formal part of the day. His basic message: The ‘Limitless pill’ doesn’t exist, but 750+ lifehacking tools do, and they can do a lot of the same things, with the bonus of actually existing.

He listed, rapid-fire-style, some lifehacking tools he recommends (judged on efficacy, ease, and evidence):

Goals: – instant coaching for any goal.
Beeminder – they take your money if you don’t achieve your goals.
Mindbloom – they visualize your goals as a tree; if you fail, it dies.
Stickk – a commitment tracker that takes your money if you fail, and gives it either to your choice of charity or to an ‘anti-charity’ (an organization you hate).

Mealsquares, Soylent, DIY Soylent – zero-effort nutritionally-complete(?) food-replacements.
Meal Snap – take a picture and it’ll tell you about your food.
Pre-made Paleo – a fully paleo meal shipped to your door.
Smart Body Analyzer – it tells you your weight… and shares it on Facebook.

7 Minute Workout, Stronglifts 5×5 – apps that coach you through a workout.
Wearables – lots of options, find the one you like.

Eye masks – wear them! They really work. [Mike’s note: this is something I’m definitely going to try.]
Sleep Genius, other sleep trackers – check the quality of your sleep.
Various options for a bright blue light that will increase alertness.
F.lux – leech the blue from your screen late at night. [Mike’s note: I think this really works.]

Mental well-being:
Headspace – guided meditation app
Muse – neurofeedback/mindfulness/basic rhythms hardware+app combo.

Khan Academy – offers a smart, step-by-step guide through learning complex topics.
Anki – digital flashcards.
Imperative – you enter what you like, it figures out what career might resonate with you.
80,000 hours – similar theme, but optimizes for social impact.
Mint – pay bills easily.
Betterment – investing made easy.
Levelmoney – it’ll watch your bank account and tell you when to stop spending.
Wealthminder – smart management for investments.
Rescue Time – an app that tells you exactly how much time you waste.
Maelstrom – inbox 0.

Tinder, OkCupid – relatively easy ways to meet people romantically.

Combatant Gentleman, Trunk Club – click-to-door shopping for people who hate shopping.

Spirituality: psilocybin, aka mushrooms.

And finally, James mentioned The Kit, a site for open research on lifehacks, and showed a short video from BJ Fogg (message: look at change as a design challenge, not a willpower challenge… and focus on successes, because small changes have ripple effects).

I got the impression that James was very, very invested in everybody’s success, and you could just feel his genuineness. It was very cool. Though now I’ll live in fear of letting him down if I fail on my goals (50% joke).

Afterwards, we broke up into smaller groups, tasked with making 3 goals for the next 30 days. Of course, instead of doing this, I typed up this review. I know— I’m totally busted.


If a brain emulation has a headache, does it hurt?

November 14, 2014  |  No Comments
Anders Sandberg has a new article out, “Ethics of brain emulations”.

He discusses the uncertainties inherent in whether emulations are ‘sentient’ (i.e., whether whole brain emulations (WBEs) are conscious and can feel pain), and moral considerations and strategies thereof. I’m generally rather skeptical of ethics papers, but the signal-to-noise ratio here is very good.

The most striking passage for me was this quote from Thomas Metzinger:

What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development – we urgently need some funding for this important and innovative kind of research!” You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby – no representatives in any ethics committee.

Metzinger probably overstates his case– the notion that WBEs “could” feel such pain is by no means a settled matter, and it feels like Sandberg includes this as a ‘worst-case’ scenario. But perhaps we can approach this question from a Bayesian perspective. If we take

[suffering experienced by a mind insane-in-the-way-imperfect-emulations-will-be-insane] * [probability that conventional software-based WBEs can suffer] * [aggregate number of hours of imperfect WBE necessary to ‘get it right’],

do we get a big quantity or a small quantity?

My intuition: the first quantity seems very large; the second quantity is probably very low, but may not be; the third quantity is almost certainly very large. If I give the second quantity even a 5% chance (or even 1%), that’s a lot of expected suffering.
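Plugging placeholder numbers into that three-factor product makes the sensitivity obvious. Only the 5% and 1% probabilities come from the paragraph above; the other two magnitudes are arbitrary stand-ins.

```python
def expected_wbe_suffering(suffering_per_hour, p_wbes_can_suffer, imperfect_wbe_hours):
    """Expected suffering as intensity * probability * exposure."""
    return suffering_per_hour * p_wbes_can_suffer * imperfect_wbe_hours

# Arbitrary units: 1,000 "suffering units" per hour and a million hours of
# imperfect emulation; the 5% and 1% probabilities are from the text.
for p in (0.05, 0.01):
    print(p, expected_wbe_suffering(1_000.0, p, 1_000_000.0))
```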

Obviously this estimate is very rough, but it poses two questions: (1) how could we improve this sort of probabilistic estimate? And (2), *if* WBEs can suffer, how can we mitigate this?* I’m not convinced this issue matters, nor am I convinced it doesn’t. It’s not keeping me up at night… but it is an interesting question.

*Sandberg suggests the following:

Counterparts to practices that reduce suffering such as analgesic practices should be developed for use on emulated systems. Many of the best practices discussed in Schofield and Williams (2002) can be readily implemented: brain emulation technology by definition allows parameters of the emulation that can be changed to produce the same functional effect as the drugs have in the real nervous system. In addition, pain systems can in theory be perfectly controlled in emulations (for example by inhibiting their output), producing ‘perfect painkillers’.

However, this is all based on the assumption that we understand what is involved in the experience of pain: if there are undiscovered systems of suffering, careless research can produce undetected distress.

Notes from the PA Longevity Prize

September 15, 2014  |  No Comments

Last week I had the opportunity (thanks Alton!) to attend the launch event for the Palo Alto Longevity Prize. In short, it’s a $1m prize for substantial progress on “hacking the code of aging”. The Washington Post has more.

$1m is not a lot of money for this. But the hope is that by explicitly saying, “We want people to hack aging, and we will give anyone who does it a prestigious prize,” it may attract key scientists already working on this, and also help legitimize the background assumption that aging *is* a disease which *should* be cured.

I have a lot of sympathy for this (h/t Nick Bostrom’s The Fable of the Dragon-Tyrant). I also see how curing aging could cause deep social problems. Some remarks I made in a facebook discussion:

I’m actually a little skeptical about the social effects of indefinite lifespan. Max Planck’s notion that “science advances one funeral at a time” is witty, yes, but I think it also largely rings true.

I think we can extend this and say a primary driver of social change is that there’s a constant influx of young people, with newer ideas and more open minds– and that the older, higher-status people who have their hands on the levers of power slowly die off to make room for them.

Death, being the great equalizer, may also incentivize altruism. He who dies with the most toys still dies, after all– unless indefinite lifespan is possible, at which point why give as much to charity, since you might need it later?

This system is terribly inefficient and rather cruel in many ways. But it’s also a time-tested process that sustains society’s dynamism. I am (of course!) in favor of curing aging, but I think there are many potential hidden ‘gotchas’ involved, and I don’t think they’ll be trivial. (Note: brain plasticity-enhancing drugs or prosthetics may help with some of these problems, but I think they’ll have to work around fundamental limitations.)

Read More

The mystery of pain and pleasure

March 2, 2014  |  No Comments

Trying to solve a scientific mystery is like starting a startup: timing is everything. Try to solve a problem too soon, and your efforts are wasted. Try to solve a problem too late, and you can’t contribute anything new. But there’s a sweet spot between when a problem becomes solvable in principle and when it becomes obvious, where efforts have the most leverage. A time when the problem you’re trying to solve still looks dreadfully out of reach, but nothing except inertia and muddled thinking is actually standing in your way. I think the mystery of pain and pleasure (i.e., what’s the intrinsic factor that makes things feel good or bad?) is in this sweet spot right now.

After the fold is a partial excerpt from a paper I’m working on. If it piques your interest, please do contact me.

Read More

What the trendy smart people are worrying about

February 2, 2013  |  2 Comments

Every year, literary-agent-to-famous-intellectuals John Brockman emails his 150+ clients a philosophical question to publicly weigh in on. The question he asked this year: “What should we be worried about?”

I can’t say this list did much for my peace-of-mind, but it was interestingly diverse. Most comments aren’t anything you couldn’t find in the New York Times, but some seemed unusually pithy, prescient, or fresh. Here’s what stood out to me this year:

Rolf Dobelli on how goods that convey high status will always be in short supply:

As mammals, we are status seekers. Non-status seeking animals don’t attract suitable mating partners and eventually exit the gene pool. Thus goods that convey high status remain extremely important, yet out of reach for most of us. Nothing technology brings about will change that. Yes, one day we might re-engineer our cognition to reduce or eliminate status competition. But until that point, most people will have to live with the frustrations of technology’s broken promise. That is, goods and services will be available to everybody at virtually no cost. But at the same time, status-conveying goods will inch even further out of reach. That’s a paradox of material progress.

Yes, luxury used to define things that made life easier: clean water, central heating, fridges, cars, TVs, smart phones. Today, luxury tends to make your life harder. Displaying and safeguarding a Rauschenberg, learning to play polo and maintaining an adequate stable of horses, or obtaining access to visit the Pope are arduous undertakings. That doesn’t matter. Their very unattainability, the fact that these things are almost impossible to multiply, is what matters.

As global wealth increases, non-reproducible goods will appreciate exponentially. Too much status-seeking wealth and talent is eyeing too few status-delivering goods.

Read More

Baumeister on Sexual Economics

November 9, 2012  |  Evolution  |  2 Comments

Many academic papers are dry. Baumeister’s latest is definitely not. From Sexual Economics, Culture, Men, and Modern Sexual Trends:

The fact that men became useful members of society as a result of their efforts to obtain sex is not trivial, and it may contain important clues as to the basic relationship between men and culture (see Baumeister 2010). Although this may be considered an unflattering characterization, and it cannot at present be considered a proven fact, we have found no evidence to contradict the basic general principle that men will do whatever is required in order to obtain sex, and perhaps not a great deal more. (One of us characterized this in a previous work as, “If women would stop sleeping with jerks, men would stop being jerks.”) If in order to obtain sex men must become pillars of the community, or lie, or amass riches by fair means or foul, or be romantic or funny, then many men will do precisely that. This puts the current sexual free-for-all on today’s college campuses in a somewhat less appealing light than it may at first seem. Giving young men easy access to abundant sexual satisfaction deprives society of one of its ways to motivate them to contribute valuable achievements to the culture.

How to Fix Politics: Celebrity Edition

October 23, 2012  |  5 Comments

The American political scene is in sorry shape. If you’re reading this blog– or indeed, if you have a pulse– you likely agree with this, so I won’t belabor the point.

The standard prescription is to get out and vote. While it’s important that people vote, the idea that ‘our problems would melt away if only everyone got out and voted’ is troubling: if you vote and feel you’ve done your duty, yet voting doesn’t actually do much, that’s ‘empty calories’ of a very dangerous sort.

As I get older and (I think) wiser, I find the choices voters get are hardly choices at all. We only get exposed to– let alone get to vote for– candidates that have passed through a huge gauntlet of vested interests. Candidates who won’t rock the boat too much, candidates who will “play ball”, candidates who have essentially sold out, beholden to and dependent on their party, media alliances, and funders. “Get out and vote” is hardly a viable prescription for change when we can choose to vote for Goldman Sachs or Goldman Sachs.

The Powers That Be have always been able to vet candidates to some extent, but in the past few elections it’s gotten particularly stark: before, a wildcard like Perot might’ve snuck in, but (love him or hate him) witness what happened to Ron Paul when he tried to circumvent the gatekeepers’ gauntlet.

It’s a hard, complex problem. But I see a way to short-circuit a lot of this gatekeeping. Convince more celebrities to run for office.

It sounds like a joke, but I’m entirely serious. Celebrities already have their own power base, their own media exposure. They don’t need to mortgage their ideals to get access to voters. They get a (mostly) free pass through the gatekeepers’ gauntlet, and many would stand a good chance at getting elected going head-to-head against the sorts of candidates the major parties field.

Clearly we wouldn’t want any old celebrity running for president, but there are celebrities who would genuinely be great candidates. Matt Damon, Jon Stewart, Bruce Willis, Meryl Streep, Leonardo DiCaprio– all potential candidates who would be more electable, likely more competent, freer to speak their minds, and much less likely to respect sacred cows on both the right and the left than anybody the standard party nomination process can produce. And perhaps we see the past with rose-tinted glasses, but it seems like the celebrities we’ve already elected have done pretty well by us.


I’m sure I missed a lot of celebrities who would make good candidates. Who are they?


Startup School 2012

October 19, 2012  |  No Comments

I’ll be at Y Combinator’s Startup School at Stanford this weekend. If you see me (Mike Johnson) there, feel free to say hello.