The question of Artificial Consciousness goes beyond the debate over whether Superintelligent AI will exist in the near future. Many in the field agree that Superintelligence is on the horizon, but there is no consensus on whether such a being would possess consciousness as humans do. What consciousness is and what causes it have long been debated, yet we still lack an all-encompassing theory of it. With the rise of AI, one must consider the possibility of Artificial Consciousness, along with its philosophical and societal implications. Here are some views on it:
Pentti Haikonen considers classical rule-based computing inadequate for achieving Artificial Consciousness. He states:
“The brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers."
Rather than trying to achieve consciousness by identifying and implementing its underlying computational rules, Haikonen proposes a form of ‘Cognitive Architecture’ to reproduce the processes of perception and emotion, and the cognitive functions behind them. This Cognitive Architecture operates through artificial neurons, without the algorithms or programs of a traditional computational device. When implemented with sufficient complexity, he argues, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterised by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."
One of the most explicit arguments for the plausibility of Artificial Consciousness comes from David Chalmers. His proposal is, roughly, that the right kinds of computations are sufficient for the possession of a conscious mind. He states:
“Computers perform computations. Computations can capture other systems' abstract causal organisation.”
By extension, if consciousness is just a complex web of causal interactions, then AI should be able to develop it at some point. However, others argue that this view begs the question by assuming from the outset that consciousness is nothing more than abstract causal organisation.
Neuroscientist Michael Graziano explains that most theories of consciousness ‘rely on magic’.
“They point to a feature of the brain — vibrating neurons for instance — and claim that feature to be the source of consciousness. The story ends there. The magician points to his hat — vibrating neurons — and pulls out a rabbit—consciousness.”
But how does the hat produce the rabbit? By what mechanism would neural vibrations lead a brain to become aware of itself? This points to the importance of questioning initial assumptions and reasoning from first principles: we do not understand consciousness nearly as well as we would like to think.
Graziano himself holds that the kind of consciousness found in the brain is clear: it is part of the brain's style of information processing. He says:
“There are things that I think are coming if you look into the future. If consciousness is buildable, which I think it is, if the human brain is just giant, massive information processor, which I think it is, if the technology for scanning the brain improves, which it obviously will, you reach this kind of conclusion that at some point we will be scanning the pattern of functional connectivity in a brain and collecting the data and simulating it or duplicating it in other formats, artificial computer formats.”
Ultimately, this would mean not just building conscious machines, but copying existing conscious minds. That leads to some unprecedented philosophical issues. The idea of individuality is undermined if multiple copies of your consciousness can be created or emulated. And what happens to the sanctity of life when consciousness can be replicated to any degree, and a Superintelligent AI could, say, run conscious simulations to attain its goals?
However, some argue to the contrary, holding that consciousness likely cannot be replicated in AI. ‘Type-Identity’ theorists and other skeptics maintain that consciousness and qualia can only be realised, to different degrees, in particular physical systems (such as animals), because consciousness has properties that depend on that particular physical constitution. Academics like Giorgio Buttazzo have argued, for example, that ‘a computer, like a washing machine, is a slave operated by its components’. Along this line of reasoning, a Superintelligent AI could well exist without ever possessing Artificial Consciousness. Others invoke thought experiments such as ‘The Chinese Room’ and the ‘Symbol Manipulator’ to argue that while our minds understand and deal with concepts, machines deal only with symbol sequences. The mind is thus not a machine, and neither a machine nor a machine simulation could ever be a mind. On this view, machines do not learn; they pattern match, and only pattern match. No conscious experience is required to associate one thing with another and build up recognition patterns that merely look like conscious experience.
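The Chinese Room intuition can be made concrete with a toy sketch. The rulebook below is an invented example, not taken from Searle or any cited author: the responder follows purely syntactic rules, pairing input symbols with output symbols, and produces plausible replies without any grasp of what the symbols mean.

```python
# Toy illustration of the Chinese Room: purely syntactic rule-following.
# The rulebook entries are invented for illustration only.

RULEBOOK = {
    "你好": "你好！",          # each rule pairs one symbol string with another
    "你会说中文吗": "会。",     # the "room" never knows what either side means
}

def chinese_room(symbols: str) -> str:
    """Return the scripted response for an input, or a stock fallback symbol."""
    return RULEBOOK.get(symbols, "请再说一遍。")

print(chinese_room("你好"))        # a fluent-looking reply, produced by lookup alone
print(chinese_room("天气怎么样"))   # unknown input falls through to the fallback
```

To an outside observer the room appears to converse, yet nothing in the program understands Chinese; skeptics take this gap between symbol manipulation and understanding as their core objection.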
It may be agreed that Superintelligent AI is on the horizon, and also that such a Superintelligence may exist without the consciousness of currently living, sentient beings. What is clear is that our understanding of consciousness still has a long way to go. The question is not simply black and white, as some portray it, but neither is it beyond human understanding. We should deepen that understanding and continue research so that we are sufficiently prepared for the eventual rise of AI. Knowledge is power.