A friend asked me if I thought future AIs could be conscious; my answer was ‘kind of, but not in the way most people think.’
I. Computations don’t have objective existence:
Imagine you have a bag of popcorn. Now shake it. There will exist a certain ad-hoc interpretation of bag-of-popcorn-as-computational-system under which you just simulated someone getting tortured, and other interpretations that don’t imply that. Did you torture anyone? If you’re a computationalist, no clear answer exists: you both did, and did not, torture someone. This sounds like a ridiculous edge case that would never come up in real life, but in reality it comes up all the time, since there is no principled way to *objectively derive* what computation(s) any physical system is performing. (Against functionalism, 2017)
Commentary: there are essentially two ways to approach formalizing consciousness: sizing up a system by its bits or by its atoms. I believe the physicalist approach (atoms, electromagnetic fields, etc.) is the only method that could lead to something useful, because there’s no objective fact of the matter about which computations a system is performing. A computer program isn’t real (i.e. frame-invariant) in the way atoms are real.
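The interpretation problem above can be made concrete with a toy sketch (the state names and mappings here are my own illustration, not from the source): the same physical state sequence "implements" different computations depending entirely on which arbitrary mapping an observer chooses.

```python
# Toy illustration: a physical system is just a sequence of microstates
# over time; which computation it "performs" depends on the mapping.

# A made-up physical trace: four microstates visited in order.
physical_trace = ["s0", "s1", "s2", "s3"]

# Interpretation A: read the trace as a counter incrementing.
interp_a = {"s0": 0, "s1": 1, "s2": 2, "s3": 3}

# Interpretation B: read the *same* trace as states of a 2-bit machine
# running some entirely different program.
interp_b = {"s0": (0, 0), "s1": (1, 0), "s2": (0, 1), "s3": (1, 1)}

def computation_under(interpretation, trace):
    """The computational history the system performs under a given mapping."""
    return [interpretation[state] for state in trace]

print(computation_under(interp_a, physical_trace))
# [0, 1, 2, 3]
print(computation_under(interp_b, physical_trace))
# [(0, 0), (1, 0), (0, 1), (1, 1)]
# Nothing in the physics privileges one mapping over the other.
```

Both readings are internally consistent, and the trace itself cannot adjudicate between them; that is the sense in which "which computation is this?" has no frame-invariant answer.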
II. Computers might be conscious, but AIs are not:
Dual-aspect monism (aka ‘neutral monism’) essentially argues the physical and the phenomenal are ultimately different aspects of the same thing, similar to different shadows (mathematical projections) cast by the same object. … if the physical and the phenomenal really are mathematical projections from the same object, they’ll have an identical deep structure, and we can ‘port’ theories from one projection to the other. (Taking monism seriously, 2019)
Commentary: if consciousness is physical, then it inherits and requires certain properties from physics. Most relevant to AI consciousness: physical things (such as consciousness) have a location in spacetime. If something has no location in spacetime, it’s a pointer to a level of description at which phenomenal consciousness isn’t well-defined. And so instead of “is this AI conscious?” we should ask questions like “what does it feel like to be this specific datacenter server?” — where the server can be defined as a specific 4D chunk of spacetime.
III. We should expect computer consciousness to be really weird:
IVa: Qualia Fragments, aka ‘qualia fraggers’ – technological artifacts created for some instrumental functional purpose, e.g. digital computers. A key lens I would offer is that the functional boundary of our brain and the phenomenological boundary of our mind overlap fairly tightly, and this may not be the case with artificial technological artifacts. And so artifacts created for functional purposes seem likely to result in unstable phenomenological boundaries, unpredictable qualia dynamics and likely no intentional content or phenomenology of agency, but also ‘flashes’ or ‘peaks’ of high order, unlike primordial qualia. We might think of these as producing ‘qualia gravel’ of very uneven size (mostly small, sometimes large, [with] odd contents very unlike human qualia). (What’s out there? 2019)
Commentary: panpsychist approaches to consciousness say “everything is conscious.” But most consciousness is likely very simple: “consciousness fuzz” that blips into existence and back out. Humans are special in that we bind these tiny blips together into something heftier and more interesting. I’m generally a fan of EM theories of consciousness, and think that whatever binding happens at human scales happens via the electromagnetic field (Barrett 2014). Computers also make a lot of interesting patterns in the EM field. But the ways humans and computers store, connect, and process information haven’t been shaped by the same evolutionary pressures. Very likely, computer consciousness would seem deeply otherworldly to us: missing standard human qualia such as the sense of free will, and following substantially different tacit dynamical rules.
TL;DR: AIs aren’t conscious, but computers are (because everything is!). Computer consciousness, however, is probably very weird, in ways it will take a formal theory of consciousness to really comprehend.