If a brain emulation has a headache, does it hurt?

November 14, 2014
Anders Sandberg has a new article out, “Ethics of brain emulations”.

http://www.tandfonline.com/doi/pdf/10.1080/0952813X.2014.895113

He discusses the uncertainties inherent in whether emulations are ‘sentient’ (i.e., whether WBEs are conscious and can feel pain), along with the moral considerations and mitigation strategies that follow. I’m generally rather skeptical of ethics papers, but the signal-to-noise ratio here is very good.

The most striking passage for me was this quote from Thomas Metzinger:

What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development – we urgently need some funding for this important and innovative kind of research!” You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby – no representatives in any ethics committee.

Metzinger probably overstates his case: the notion that WBEs “could” feel such pain is by no means a settled matter, and it reads as though Sandberg includes this as a ‘worst-case’ scenario. But perhaps we can approach this question from a Bayesian perspective. If we take

[suffering experienced by a mind that is insane in the way imperfect emulations will be insane] × [probability that conventional software-based WBEs can suffer] × [aggregate number of hours of imperfect WBE runs needed to ‘get it right’], do we get a big quantity or a small one?

My intuition: the first quantity seems very large; the second is probably very low, but may not be; the third is almost certainly very large. Even if I give the second quantity only a 5% (or even 1%) chance, that’s a lot of expected suffering.
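The back-of-envelope product above can be sketched in a few lines. All the numbers here are purely illustrative placeholders (the post deliberately doesn’t commit to any); the point is only that a small probability multiplied by two very large quantities still yields a large expectation.

```python
# Back-of-envelope expected-suffering estimate. All inputs are
# illustrative assumptions, not figures from Sandberg's paper.

def expected_suffering(suffering_per_hour, p_wbe_can_suffer, imperfect_wbe_hours):
    """Product of the three bracketed quantities from the post."""
    return suffering_per_hour * p_wbe_can_suffer * imperfect_wbe_hours

# Assumptions: severe suffering at 100 arbitrary units/hour, a 1% chance
# that software WBEs can suffer at all, and a billion emulation-hours of
# imperfect runs before emulation is 'gotten right'.
estimate = expected_suffering(100, 0.01, 1e9)
print(estimate)  # 1e9 arbitrary units: a low probability doesn't make this small
```

Even driving the probability down another order of magnitude (0.1%) only shrinks the result tenfold, which is the intuition behind treating this as a non-negligible expected cost.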

Obviously this estimate is very rough, but it raises two questions: (1) how could we improve this sort of probabilistic estimate? And (2), *if* WBEs can suffer, how could we mitigate that suffering?* I’m not convinced this issue matters, nor am I convinced it doesn’t. It’s not keeping me up at night… but it is an interesting question.

————————————-
*Sandberg suggests the following:

Counterparts to practices that reduce suffering such as analgesic practices should be developed for use on emulated systems. Many of the best practices discussed in Schofield and Williams (2002) can be readily implemented: brain emulation technology by definition allows parameters of the emulation that can be changed to produce the same functional effect as the drugs have in the real nervous system. In addition, pain systems can in theory be perfectly controlled in emulations (for example by inhibiting their output), producing ‘perfect painkillers’.

However, this is all based on the assumption that we understand what is involved in the experience of pain: if there are undiscovered systems of suffering, careless research can produce undetected distress.
