Recently, the IAI released Bernardo Kastrup’s piece, ‘The Lunacy of "Machine Consciousness"’, in which he reflected on his disagreement with Susan Schneider at January’s IAI Live debate, ‘Consciousness in the Machine’. Here, Susan Schneider responds to Kastrup’s critique of her position and argues for a ‘wait and see’ approach to machine consciousness.
The idea of conscious AI doesn’t strike me as conceptually or logically impossible; we can understand Asimov’s robot stories, for instance, and I’ve discussed the matter in detail after mulling over various philosophical thought experiments. This doesn’t mean conscious machines will walk the Earth, or even that synthetic consciousness exists anywhere in the universe. An idea can be logically consistent and conceptually coherent yet still be technologically unfeasible.
Instead, I take a Wait and See Approach: we do not currently have sufficient information to determine whether AI consciousness will exist on Earth or elsewhere in the universe. Consider the following:
a) We do not understand the neural basis of consciousness in humans, nor do we have a clear, uncontroversial philosophical answer to the hard problem of consciousness: the problem of why all the information processing the brain engages in has a felt quality to it. This makes a theoretical, top-down approach to building machine consciousness difficult.
b) We don’t know if building conscious machines is even compatible with the laws of nature.
c) We don’t know if Big Tech will want to build conscious machines due to the ethical sensitivity of creating conscious systems.
d) We don’t know if building conscious AI would be technologically feasible; it might be prohibitively expensive.
In my interview, I stressed that if humans need a top-down theory of how to create conscious AI, we are in big trouble. To deliberately create conscious AIs based on a theoretical understanding of consciousness itself, we would need a recipe we haven’t yet discovered, with a list of ingredients that we may not even be able to grasp.
In a recent piece for the IAI, written after we took part in the debate ‘Consciousness in the Machine’, Bernardo Kastrup accuses me of not merely claiming that conscious machines are logically or conceptually possible, but of making the stronger claim that conscious machines are technologically feasible and compatible with the laws of nature. (See his rant about the “Flying Spaghetti Monster”.) But as you can see, I’m not saying this: I’m advocating the Wait and See Approach.
But why take a “wait and see” approach at all, rather than following Bernardo in rejecting the possibility of conscious AI altogether? I have a variety of reasons. First, since the jury is out on the above issues, the Wait and See Approach seems warranted. Second, I see several paths to the deliberate construction of conscious AI, assuming doing so is compatible with the laws of nature. Here are two:
Consciousness Engineering
On this path, conscious AI is engineered by biological or artificial superintelligences that know the recipe. A superintelligence is an entity that surpasses human intelligence in every respect: scientific reasoning, moral reasoning, etc. Perhaps a (non-conscious) superintelligent AI will eventually be built on Earth, and it will want to build conscious AI.
Or perhaps an alien superintelligence (either biological or artificial) in a distant part of the universe will decide it wants to build conscious machines. For example, a superintelligent AI might regard consciousness as a feature that would be beneficial to add to its own architecture. Or perhaps the AI will long to build its own AI mind children, out of simple curiosity to understand consciousness and emotion. Such a superintelligence, with recipe in hand, would embark upon the task of consciousness engineering.
___
You don’t need to be conscious outside of the program for machine consciousness to be possible. If you are conscious in the simulation, machine consciousness exists!
___
Easter Egging Consciousness
Here is something we can attempt now, without the recipe, as dumb as we are. I remember when I was a broke UC Berkeley student and my first computer died. I took it to a smoke- and incense-filled shop on Telegraph Avenue, hoping it could be fixed cheaply. Grooving to the sound of Jefferson Airplane, the technician fixed it without ever diagnosing what was wrong. He called his method “Easter egging” the machine: akin to searching a garden for a hidden surprise (i.e., the Easter egg), he plugged component after component into my computer, not knowing where the problem was, but hoping that eventually one of them would fix it. Lucky for me, the second component worked. Voilà, an Easter egg surprise! A good thing, since I couldn’t afford a new computer.
Easter egging consciousness is about as bottom-up an approach to building synthetic consciousness as it gets. Even our humble brains may be able to do it. Here’s how:
Medicine, in an effort to cure brain disorders involving consciousness (e.g., those caused by stroke), may eventually seek to develop brain chips to fix (or enhance) the parts of the brain that underlie consciousness. (Indeed, there are already efforts to put chips in other areas of the brain for the purpose of restoring lost brain function, such as Ted Berger’s artificial hippocampus, but to the best of my knowledge, these efforts are not in areas of the brain implicated as a neural basis of consciousness.)
For these “consciousness-producing chips” to work, researchers would have to develop chips able to serve as a substitute basis for consciousness; otherwise, patients would still have deficits in consciousness. The need to help patients suffering from disorders of consciousness could encourage researchers to try, over a period of decades, to develop chips for therapies (or even enhancements) that are “the right stuff” for consciousness. If neuroscience could achieve this, we would, in principle, have a sense of whether it is possible, given our technological limitations, to build conscious AI.
This is Easter egging consciousness: without a solid theoretical understanding of the neural basis of consciousness, we put a chip in the brain and get a pleasant surprise, a chip capable of underlying conscious experience. The disorder of consciousness resolves. From this, we would learn a good deal, both about the neural basis of consciousness in humans (e.g., the underlying algorithm the brain ‘runs’ for consciousness, whether consciousness is sensitive to quantum mechanical phenomena, etc.) and about machine consciousness: what kind of microchip, if any, might be a feasible component in a machine if the aim were to construct a conscious AI system.
But this would be only a humble baby step towards the creation of machine consciousness. And the point is: we aren’t there yet, and this will take a long time, far longer, in fact, than it will take to make savant AIs. So again, a “wait and see” approach is prudent. We would need to build a machine architecture made of these hypothetical chips, and run consciousness tests, in order to determine whether the machine itself was conscious. (Interestingly, something built from a part that contributes to consciousness in the human system may not itself be a conscious AI system.) For tests for consciousness, see here and here.
Let’s delve into some other points Bernardo made:
1. The Simulation is Not Reality Objection
Bernardo writes:
“I can run a detailed simulation of kidney function, exquisitely accurate down to the molecular level, on the very iMac I am using to write these words. But no sane person will think that my iMac might suddenly urinate on my desk upon running the simulation, no matter how accurate the latter is. After all, a simulation of kidney function is not kidney function; it’s a simulation thereof, incommensurable with the thing simulated. We all understand this difference without difficulty in the case of urine production. But when it comes to consciousness, some suddenly part with their capacity for critical reasoning: they think that a simulation of the patterns of information flow in a human brain might actually become conscious like the human brain. How peculiar.”
The consciousness and kidney function analogy is misleading. To see why, consider, for the sake of argument, the question: what would happen if you were a purely AI-based conscious virtual reality being, like the Agent Smith program in The Matrix? Would you need to cause events outside of the simulation to be a conscious being? Suppose you tried to do something, say, drink an espresso (or a virtual espresso, rather). Notice that you would have the ability to cause events within the program, and you would have conscious experiences inside the program. You don’t need to be conscious outside of the program for machine consciousness to be possible: if you are conscious in the simulation, machine consciousness exists! Unlike in the kidney case, you don’t need to cause events outside of the program to be a conscious being. Now, I don’t know if machine consciousness is more than conceptually or logically possible; it may not turn out to be possible in the universe we live in. The point is that his analogy doesn’t rule any of this out.
___
Consciousness could have vastly different biological instantiations elsewhere as well as synthetic instantiations
___
2. Bernardo’s Criticism of Substrate Independence and the Multiple Realizability of Consciousness
Bernardo writes:
“Where does this abandonment of a healthy sense of plausibility come from? Those who take the hypothesis of conscious AI seriously do so based on an appallingly biased notion of isomorphism—a correspondence of form, or similarity—between how humans think and AI computers process data…. After all, if you lay an actual human brain and an actual silicon computer open on a table before you, you will be overwhelmed by how different they are, structurally and functionally. A moist brain is based on carbon, burns ATP for energy, functions through metabolism, processes data through neurotransmitter releases, etc. A dry computer, on the other hand, is based on silicon, uses a differential in electric potential for energy, functions by moving electric charges around, processes data through opening and closing electrical switches called transistors, etc.”
The question here is whether consciousness is substrate independent, that is, whether it can be realized in different substrates. This is an ongoing debate, and merely pointing out that the candidate instantiations look different is dialectically useless, as everyone agrees that they look quite different. It is important to bear in mind that if we prematurely define consciousness in terms of our own case, specifically, in the language of neuroscience, we might miss instances of consciousness that are non-neural. During my time at NASA, a similar issue arose in debates over how to define life. The worry, in that context, was that in searching for life on other worlds, if we define life too narrowly, we may miss other instantiations of life. Consciousness could likewise have vastly different biological instantiations elsewhere, as well as synthetic instantiations.
3. Concern with the Issue Is a Waste of Resources
Bernardo writes:
“Entertaining ‘conscious AI’ is counterproductive; it legitimizes the expenditure of scarce human resources—including tax-payer money—on problems that do not exist, such as the ethics and rights of AI entities. It endangers the sanity of our culture by distorting our natural sense of plausibility and conflating reality with (bad) fiction. AIs are complex tools, like a nuclear power plant is a complex tool. We should take safety precautions about AIs just as we take safety precautions about nuclear power plants, without having ethics discussions about the rights of power plants. Anything beyond…”
As indicated in my interview, a lot is at stake in this debate. For one thing, as I write this, Big Tech is creating increasingly humanlike chatbots, chatbots that have some experts asking whether the bots are conscious. This situation will only become more pressing, and intelligent debate is essential. For another, we need to ask whether it is a good idea for AIs to be capable of impersonating sentient beings, or whether this should be banned, as Dan Dennett and I have suggested.
We do not have clear tests for machine consciousness, and it would be catastrophic if we mistakenly classified beings that are not sentient as conscious, or vice versa. Relatedly, there will need to be deep conversations about the impact machine consciousness could have on AI safety, and about the ethics of digital suffering, that is, whether AIs, including beings residing in a computer simulation, could have the capacity to feel pleasure and pain. Other important issues include whether and how machines should be produced with sentience in the first place, whether it is right to tweak the quality of felt experience in a machine (say, to make the AI feel pleasure in serving us, or to dial out consciousness in an AI), what ethical obligations we would have to conscious machines, etc. (While digital suffering, or dialing down suffering, may seem hard to fathom, these possibilities become salient upon reading works like Robin Hanson’s The Age of Em, Isaac Asimov’s robot series, and Aldous Huxley’s prophetic novel Brave New World.)
Bernardo’s error, I believe, is in providing a negative answer to these questions where only uncertainty is warranted.