With growing interest in AI and rapid developments in consciousness science, tackling the question of machine consciousness is now unavoidable.
Author

Federico D’Agostino

Published

Thursday, the 6th of May, 2021

The issue of machine consciousness (MC) could be considered the culmination of all the philosophical problems that have troubled the field of Artificial Intelligence (AI) since its inception. It forces us to consider the hard problem and its (for some, putative) implications, and to stretch the ideas of functionalism to their theoretical and practical limits.

Thanks to the renewed interest in consciousness studies in the past twenty years, we may finally have the concepts and frameworks needed to tackle long-standing elusive questions. The sub-field of machine consciousness is playing a valuable role in this effort, continuously challenging intuitions and forcing us to put theories to the empirical test.

Interestingly, according to the current consensus, the short answer to the question “Could machines ever be conscious?” turns out to be a resounding “Yes.” Even John Searle, well known for his critiques of strong AI, said: “[…] of course some machines can think and be conscious. Your brain and mine, for example” (Searle, 1997, p. 202).

However, the real problems for machine consciousness lie in the wildly different reasons theorists give for how and why a machine could ever be conscious.

In this essay, we are going to survey some of the most popular and theoretically relevant approaches to Machine Consciousness, with a critical eye for the problems that burden the field and the inevitable ethical issues associated with it.

Against Machine Consciousness

Before delving into our discussion of the possibilities of MC – and the wide range of debates internal to the field regarding how this possibility could be realised – it is worth considering some arguments against MC in toto. Some of these positions have been criticised and dismissed in the past, as early as Turing’s seminal analysis of machine intelligence (Turing, 1950).

First, we have old-fashioned dualism, the notion that consciousness is somehow a peculiar property of a nonphysical mind. Especially when religiously motivated, this position is a priori excluded from scientific discourse of any kind, yet unsurprisingly it arises frequently in discussions of consciousness. Viewed by some as a desire to protect the mind from science (Dennett et al., 1994), claims of this kind can be rejected by asking why, of the many complex physical objects in the universe, the brain should be the only one capable of interfacing with another realm of being. All the usual problems of dualism then apply.

Second, we have arguments about the importance of biological, organic brains in supporting consciousness. While it is possible that the computational efficiency achieved by biochemical processes is unreproducible in other physical systems, there is no in-principle reason to believe this claim. If it is just a matter of efficiency, it can conceivably be overcome by technological progress or by the adoption of neuromorphic engineering. If it is instead a matter of the supposed primacy of some organisations of atoms over others, then it is just a dogmatic claim (Blackmore & Troscianko, 2018, p. 322; Dennett et al., 1994).

Third, there is the more popular notion that some processes are simply too complex to be implemented in machines. Even if “scientifically boring” (Dennett et al., 1994), this may well be a real possibility. Nonetheless, there is no in-principle reason to back it, and most of the tasks once thought impossible for machines to carry out have been solved in the past twenty years.

Overall, even beyond those surveyed here, no argument convincingly proves the impossibility of building conscious machines. This may be the reason underlying the excitement and diversity of work in the field.

On strong and weak MC

A separate theoretical challenge for MC, superficially far more serious than claims of its total impossibility, stems from Searle’s distinction between strong and weak AI. Here we will not tackle Searle’s original arguments, especially not his infamous Chinese room (Searle, 1980). Not only is a discussion of the Chinese room far beyond the scope of this essay, but it is also not a very pertinent argument against MC. In fact, even though it is often used in this context, the argument was initially designed to deal with intentionality (in the philosophical sense of aboutness), not consciousness. Before the argument can be reused in this context, the relationship between the two must be made explicit, and that is a delicate matter in its own right.

Leaving aside Chinese rooms and their controversies, we adopt the strong/weak distinction only to illustrate two kinds of approaches in MC (Seth, 2009). Weak MC – like weak AI – aims only to model some of the putative mechanisms underlying consciousness (derived from theories), in order to reveal explanatory links and advance our understanding of the phenomenon. The models so built are not claimed to be conscious, much as simulated rainstorms are not claimed to be wet. The explicit aim of strong MC, on the other hand, is to create phenomenally conscious systems. This pursuit is much more problematic, but it is also the one that interests us most in this discussion.

However, it must be noted that weak MC, not strong MC, is perhaps better placed to advance our scientific understanding of consciousness and its possible reproducibility in other media (Seth, 2009). The reason is an inherent circularity in most strong MC proposals: researchers set out to create an instantiation of consciousness that would reveal general principles, but the principles that would validate the interpretation of such models are either absent to begin with or built into the model from the start. This is a complicated and pivotal issue, to which we shall return later.

As a last note on this matter, it has been argued that we may one day reach strong MC by way of weak MC (Gamez, 2008): on what grounds would we base our distinction at that point? This is where theory-driven approaches to MC come in.

Top-down approaches to MC

Most proponents of strong MC begin their search from solid theoretical ground. They either seek the key “ingredient” of consciousness in order to reproduce it in machines, or try to replicate some putative neural architecture associated with consciousness.

A first paradigmatic example of this approach is Aleksander’s set of axioms (Aleksander & Dunmall, 2003). These are based on introspectively derived features of consciousness – volition, emotion, imagination, attention, presence – which are claimed to be necessary for producing MC in the strong sense. Far from being self-evident, Aleksander’s axioms are high-level targets for explanation in their own right (Clowes & Seth, 2008), and indeed they have not been very helpful in engineering MC.

Another famous example of an axiomatic approach comes from “integrated information theory” (IIT; Oizumi et al., 2014; Tononi et al., 2016), which equates consciousness with the intrinsic, irreducible cause-effect power of a complex physical system upon itself (i.e., its integrated information; Koch, 2019, chapter 8). In this view, the physical, architectural cause-effect relationships within a system are what matter for consciousness. While challenged on many grounds, IIT has at least the merit of providing precise and testable predictions, and a working measure of consciousness applicable to any physical system (i.e., the notorious Φ).
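To give a concrete flavour of what such a measure computes, consider the following deliberately minimal toy in Python. This is not IIT 3.0’s Φ, only a simplified stand-in of my own: it compares the past–future mutual information of a whole two-node system against the sum over its single-node parts, so that a system whose nodes copy each other (integrated) scores higher than one whose nodes each copy themselves (reducible). All function names and the simplified measure are illustrative assumptions.

```python
# A deliberately minimal toy (not IIT 3.0's Phi): "integration" here is the
# whole system's past->future mutual information minus the sum over its two
# single-node parts. The measure and names are my own simplifications.
from itertools import product
from math import log2

def mutual_information(pairs):
    """Mutual information (bits) between the coordinates of equiprobable (x, y) pairs."""
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

def phi_toy(update):
    """Whole-system predictive information minus that of its single-node parts."""
    states = list(product([0, 1], repeat=2))        # uniform prior over joint states
    whole = mutual_information([(s, update(s)) for s in states])
    parts = sum(mutual_information([(s[i], update(s)[i]) for s in states])
                for i in range(2))
    return whole - parts

print(phi_toy(lambda s: (s[1], s[0])))  # nodes copy each other: integrated, 2.0 bits
print(phi_toy(lambda s: (s[0], s[1])))  # nodes copy themselves: reducible, 0.0 bits
```

The real Φ is far more demanding: it perturbs the system into all possible states and searches over all partitions for the one that makes the least difference, which is why exact computations are intractable beyond very small systems.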

Directly opposed to IIT in many ways, but also strongly inspired by neuroscience, Global Workspace Theory (GWT; Baars, 1995) is the last of the top-down approaches we will consider. While IIT stresses the importance of organisation at the level of the physical substrate (thus predicting that consciousness cannot be simulated in von Neumann machines; Koch, 2019, chapter 12), GWT is more strongly functionalist, holding that consciousness is nothing more than global availability and self-monitoring in information-processing systems (Dehaene et al., 2017; Dehaene & Naccache, 2001). On this view, consciousness is fully computational and could thus be instantiated even in transistor-based von Neumann architectures.
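To make the architectural claim concrete, here is a minimal caricature in Python – my own sketch, not a model from Baars or Dehaene, with invented module names and random “salience” scores standing in for real competition dynamics. Specialist processes compete for access to a limited workspace, and the winner’s content is broadcast back to all of them; on GWT, it is this global availability that constitutes conscious access.

```python
# A minimal caricature of the global-workspace idea. The class, the random
# salience scores, and the cycle function are illustrative assumptions only.
import random
from dataclasses import dataclass

@dataclass
class Specialist:
    """An unconscious specialist process competing for workspace access."""
    name: str

    def propose(self, stimulus):
        # A stand-in for bottom-up relevance; real models compute salience.
        return random.random(), f"{self.name}: {stimulus}"

    def receive(self, content):
        # Global broadcast: every specialist sees the winning content.
        pass  # a fuller model would let this bias future processing

def ignition_cycle(specialists, stimulus):
    """One workspace cycle: competition, then global broadcast of the winner."""
    _, winner = max(s.propose(stimulus) for s in specialists)
    for s in specialists:
        s.receive(winner)  # "global availability" of the conscious content
    return winner

modules = [Specialist(n) for n in ("vision", "audition", "memory")]
print(ignition_cycle(modules, "red light"))
```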

Are there already any attempts at such conscious machines out there, then?

Bottom-up approaches

A complementary approach to theoretical analyses of MC is simply to try to build MC from the bottom up. Of course, such a pursuit must be theory-laden, but practitioners who commit to engineering first usually put theoretical debates on hold in order to focus on empirical attempts. Additionally, in approaches of this kind there is no ambition to reproduce human-level consciousness, but rather an inclination towards building a new type of consciousness altogether. Most, but not all, of these approaches strongly focus on embodiment.

An early example of this approach is MIT’s Cog (Dennett et al., 1994), a humanoid robot capable of learning through an artificial infancy, loosely inspired by human cognition and designed to interact with humans. Though the project has since been discontinued, it inspired many other attempts at social robots (Olaronke & Ikono, 2017).

CRONOS (Holland, 2007) is another anthropomimetic robot taking a strongly embodied approach to machine consciousness, one that also includes internal models of the world in its design (Marques & Holland, 2009). It differs from other attempts in that it does not interact with people, nor was it built to display emotions or to communicate through language.

Attempts of this kind are usually based on subsumption architectures and on strong versions of functionalism, such as the one proposed by Sloman and Chrisley (2003) with their CogAff architecture.
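For readers unfamiliar with the term, the following is a toy illustration of the subsumption idea: behaviour emerges from layered sensor-to-action rules, with higher-priority layers suppressing lower ones. The layers and sensor fields are invented for this sketch, and simple priority arbitration stands in for Brooks’s actual suppression and inhibition wiring.

```python
# A toy subsumption-style controller. Layers, sensor fields, and the priority
# ordering are invented examples; arbitration by priority list is a stand-in
# for the suppression/inhibition links of a real subsumption architecture.

def avoid(sensors):        # safety reflex: react to nearby obstacles
    return "turn away" if sensors.get("obstacle") else None

def seek_person(sensors):  # social layer: approach a detected face
    return "approach person" if sensors.get("face") else None

def wander(sensors):       # default layer: always produces an action
    return "move forward"

LAYERS = [avoid, seek_person, wander]  # highest priority first

def act(sensors):
    """Return the action of the highest-priority layer that fires."""
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"obstacle": True}))  # -> turn away
print(act({"face": True}))      # -> approach person
print(act({}))                  # -> move forward
```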

More recently, the rise of Deep Learning has brought a return to disembodied, function-specific AI systems that largely disregard philosophical discourse and the quest for MC, although this seems to be changing (Krauss & Maier, 2020).

In any case, the merit of practice-first, bottom-up approaches is that they continuously challenge our intuitions about attributions of intentionality, even when the models were not designed for this purpose in the first place. Here we are bound to ask: are robots like Cog, CRONOS, and Kismet, or complex neural networks like GPT-3 (Brown et al., 2020), conscious? How could we find out?

An epistemological obstacle

Without sufficiently convincing theories of consciousness, MC is destined to reach an impasse. How could we decide, for example, whether a robot is conscious without a good understanding of consciousness in the first place?

This is precisely why Seth (2009) argues for the pursuit of weak MC over strong MC: only by studying and modelling consciousness in the physical systems we are certain possess it can we hope to generalise to machines and escape circularity.

Nonetheless, for the time being, the debates remain open. Some think there is no difference between as-if consciousness and real consciousness; others think there is. Certain theorists argue for the importance of physical implementation (holding that transistors in a von Neumann architecture will never produce consciousness, no matter how intricate the simulations they instantiate, because simulation is fundamentally different from implementation; Koch, 2019, chapter 13), while others are thoroughgoing computationalists. Again, some think conscious machines still belong to a far future, while others think they are already here.

An additional intriguing possibility is that our intuitions about the nature of our own consciousness may be so wrong that we are asking the wrong questions about MC to begin with.

Overall, one thing is clear: even if we were able to build a strikingly anthropomorphic robot that achieves Artificial General Intelligence, and thinks and behaves like a human, there would still be massive disagreement about whether it is really conscious. Would it matter at that point?

Ethical implications

There is growing legal and ethical interest in MC issues (Bryson et al., 2017; Calverley, 2005; Solaiman, 2017), and for good reason. We are already tempted to take the intentional stance when faced with anthropomorphic machines, and studies of consciousness seem to lag behind developments in Artificial Intelligence, at least in popularity. Should we grant ethical consideration to machines that exhibit conscious-like behaviour?

Some argue that we should ban research on MC until we can be sure that machines cannot suffer (Metzinger, 2021). Given that there are other beings surely capable of suffering – namely animals – that are still not given sufficient ethical consideration despite solid arguments in their favour (Singer, 1995), granting rights to machines might be, if not premature, at least hypocritical.

Conclusion

The field of Machine Consciousness is as variegated as it gets in philosophy of mind and consciousness studies. This essay has given a brief overview of some of the most popular ideas and historically relevant frameworks, but it barely scratches the surface.

With all its controversies, the field of MC can be very beneficial to the study of consciousness. On the one hand, we have neuroscientific studies; on the other, attempts at strong MC and weak MC simulations: “uphill analysis versus downhill synthesis” (Braitenberg, 1984), all carefully held together by philosophical appraisal.

References

Aleksander, I., & Dunmall, B. (2003). Axioms and tests for the presence of minimal consciousness in agents I: Preamble. Journal of Consciousness Studies, 10(4–5), 7–18.

Baars, B. J. (1995). A cognitive theory of consciousness (Reprinted). Cambridge University Press.

Blackmore, S. J., & Troscianko, E. (2018). Consciousness (3rd edition). Routledge.

Braitenberg, V. (1984). Vehicles: Experiments in synthetic psychology. MIT Press.

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., … Amodei, D. (2020). Language models are few-shot learners. arXiv:2005.14165 [cs]. http://arxiv.org/abs/2005.14165

Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: The legal lacuna of synthetic persons. Artificial Intelligence and Law, 25(3), 273–291. https://doi.org/10.1007/s10506-017-9214-9

Clowes, R. W., & Seth, A. K. (2008). Axioms, properties and criteria: Roles for synthesis in the science of consciousness. Artificial Intelligence in Medicine, 44(2), 91–104. https://doi.org/10.1016/j.artmed.2008.07.009

Calverley, D. J. (2005). Toward a method for determining the legal status of a conscious machine. 75–84.

Dehaene, S., Lau, H., & Kouider, S. (2017). What is consciousness, and could machines have it? Science, 358(6362), 486–492. https://doi.org/10.1126/science.aan8871

Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79(1), 1–37. https://doi.org/10.1016/S0010-0277(00)00123-2

Dennett, D. C., Dretske, F., Shurville, S., Clark, A., Aleksander, I., & Cornwell, J. (1994). The Practical Requirements for Making a Conscious Robot [and Discussion]. Philosophical Transactions: Physical Sciences and Engineering, 349(1689), 133–146.

Gamez, D. (2008). Progress in machine consciousness. Consciousness and Cognition, 17(3), 887–910. https://doi.org/10.1016/j.concog.2007.04.005

Holland, O. (2007). A Strongly Embodied Approach to Machine Consciousness. Journal of Consciousness Studies, 14(7), 97–110.

Koch, C. (2019). The feeling of life itself: Why consciousness Is widespread but can’t be computed. MIT Press.

Krauss, P., & Maier, A. (2020). Will We Ever Have Conscious Machines? Frontiers in Computational Neuroscience, 14. https://doi.org/10.3389/fncom.2020.556544

Marques, H. G., & Holland, O. (2009). Architectures for functional imagination. Neurocomputing, 72(4), 743–759. https://doi.org/10.1016/j.neucom.2008.06.016

Metzinger, T. (2021). Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology. Journal of Artificial Intelligence and Consciousness, 08(01), 43–66. https://doi.org/10.1142/S270507852150003X

Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLOS Computational Biology, 10(5), e1003588. https://doi.org/10.1371/journal.pcbi.1003588

Olaronke, I., & Ikono, R. (2017, October 30). A systematic review of emotional intelligence in social robots.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/S0140525X00005756

Searle, J. R. (1997). The mystery of consciousness (1st ed.). New York Review of Books.

Seth, A. (2009). The strength of weak artificial consciousness. International Journal of Machine Consciousness, 01(01), 71–82. https://doi.org/10.1142/S1793843009000086

Singer, P. (1995). Animal liberation (2nd ed., with a new preface by the author). Pimlico.

Sloman, A., & Chrisley, R. (2003). Virtual machines and consciousness. Journal of Consciousness Studies, 10(4–5), 133–172.

Solaiman, S. M. (2017). Legal personality of robots, corporations, idols and chimpanzees: A quest for legitimacy. Artificial Intelligence and Law, 25(2), 155–179. https://doi.org/10.1007/s10506-016-9192-3

Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated information theory: From consciousness to its physical substrate. Nature Reviews Neuroscience, 17(7), 450–461. https://doi.org/10.1038/nrn.2016.44

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433