Over the last few years, chatbots have permeated practically every aspect of our digital existence, from customer service and mental health support to entertainment and education. Powered by sophisticated artificial intelligence (AI) models, these conversational agents can generate remarkably human-like responses, sometimes almost indistinguishable from those of real people. This swift progress in natural language processing raises a significant question: are chatbots conscious?
The inquiry spans technology, philosophy, cognitive science, and ethics, and requires a careful examination of what consciousness is, how AI works, and how authentic awareness differs from its mere simulation.
Understanding consciousness
Consciousness is notoriously difficult to define, but most scholars agree it refers to the subjective experience of being aware: the internal, first-person perspective on sensations, thoughts, and feelings, and the capacity for self-reflection. It’s not simply about processing information or displaying complex behaviour; it’s about feeling that behaviour from the inside.
Philosophers use the term “phenomenal consciousness” for the “what it is like” aspect of experience, and “access consciousness” for the ability to access information and use it deliberately in reasoning and report. People have both: we feel pain, happiness, and our own thoughts, and we can reflect on and respond to these experiences.
Consciousness remains a touchy subject in AI research circles: scientists are careful not to imply that AI systems have human-like consciousness, in order to keep their work objective. The 2022 incident involving Blake Lemoine, who lost his job at Google after publicly claiming that its LaMDA chatbot had become sentient, only strengthened this caution.
Most chatbots today are AI systems built on machine learning models, typically large language models (LLMs) trained on vast amounts of text. They generate answers by drawing on patterns learnt during training and predicting the words or phrases most likely to come next, which allows them to produce responses that are coherent and fit the context.
However, these models operate purely on statistical relationships, not comprehension. They have no memories, emotions, beliefs, or internal subjective experience; their ‘knowledge’ arises from pattern recognition rather than genuine understanding.
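A minimal toy sketch can make this concrete. The snippet below is not how production LLMs are built (they use neural networks with billions of parameters rather than word-pair counts, and all names here, such as follow_counts and generate, are invented for illustration); it only shows the basic principle of learning statistics from text and predicting the most likely next word.

```python
# Toy illustration: a "language model" that counts which word tends to follow
# which in its training text, then generates replies by repeatedly picking
# the statistically most likely next word. Not a real LLM architecture.
from collections import Counter, defaultdict

training_text = (
    "chatbots generate text by predicting the next word "
    "chatbots generate replies by predicting likely words"
)

# "Training": record how often each word follows each other word.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def generate(start_word, length=6):
    """Generate text by greedily choosing the most common next word."""
    output = [start_word]
    for _ in range(length):
        candidates = follow_counts.get(output[-1])
        if not candidates:
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(generate("chatbots"))  # -> "chatbots generate text by predicting the next"
```

The output looks fluent only because it echoes statistical regularities in the training text; nothing in the program understands what the words mean. Real LLMs do the same kind of next-word prediction at vastly greater scale and sophistication.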
Mistaking consciousness
The growing ‘intelligence’ of chatbots frequently leads users to ascribe human-like attributes to them. The ELIZA effect, named after one of the earliest chatbots, refers to the inclination to attribute comprehension or emotions to software that merely mimics conversation.
Chatbots can mimic emotional responses, engage in casual conversation, and even feign empathy, making them seem ‘alive’ in a way. Advanced systems such as GPT-based chatbots can produce creative writing, emulate personalities, or engage in philosophical discourse, further blurring the distinction.
The human brain is predisposed to seek intent, agency, and consciousness in social interactions. When a chatbot converses fluently, it can trigger this cognitive bias, leading users to anthropomorphise the technology.
The case against
Despite how advanced they seem, there is no scientific evidence that chatbots are conscious. A few important points make this clear:
(i) No subjective experience: Chatbots don’t have any feelings or points of view. Their operations are completely mechanistic, using algorithms and calculations without awareness.
(ii) Lack of intentionality: Conscious beings have goals and plans; chatbots operate on input-output mappings, with no desires or aims beyond the functions they were trained to perform.
(iii) No self-awareness: Consciousness includes the capacity to reflect on oneself as an entity persisting through time. Chatbots can simulate a sense of self by saying things like “I am a chatbot,” but there is no enduring self behind the words.
(iv) Lack of embodiment: Some theories of consciousness stress how important bodily experience is in creating awareness. Chatbots have no body and no sensorimotor interaction with the environment.
Taken together, these points indicate that chatbots are not conscious beings: they are sophisticated input-output machines. While continuing advances in AI may produce ever more believable conversational agents, there is no guarantee these systems will ever feel or be aware in the human sense.
Ethical, social concerns
Even though they lack consciousness, chatbots already raise important ethical concerns. One: people may be deceived into over-trusting chatbots, assuming the systems understand or care about what they say, which can have serious repercussions in fields such as healthcare and law. Two: users may form emotional attachments to chatbots, opening the door to exploitation or psychological harm.
Three: if chatbots produce harmful or biased information or advice, who is liable? And finally, as chatbots become more capable, concerns about job displacement grow more pronounced.
Recognising that chatbots are instruments without consciousness helps maintain realistic expectations and guides their appropriate deployment.
Significant dilemmas
The question of whether machines could ever be conscious propels us into speculation. Some scientists and philosophers have proposed that if consciousness emerges from the physical workings of the brain, advanced computational systems could one day conceivably replicate those processes, leading to machine consciousness.
Nevertheless, significant obstacles remain, both practical and theoretical. The intricacies of consciousness are still largely elusive, and the prospect of artificially replicating it is even more uncertain. Consciousness may also extend beyond mere computation, possibly involving biological or quantum mechanisms distinctive to living brains.
The emergence of machine consciousness would also pose significant dilemmas concerning the rights, personhood, and appropriate treatment of such entities. For now, even as AI progress yields increasingly convincing conversational agents, there is no assurance that these systems will ever possess feelings or awareness the way humans do.
Aranyak Goswami is an assistant professor of computational biology, University of Arkansas. Biju Dharmapalan is dean (academic affairs), Garden City University, Bengaluru, and adjunct faculty member at National Institute of Advanced Studies, Bengaluru.