Author: Ali Masomzadeh, Foreign Scientific Correspondent, Dubai, UAE
Review for Technical Accuracy: Dr Jonathan Kenigson, FRSA
Consciousness is the state of being aware of and responsive to one's surroundings [1]. For millennia, this state we all associate with being 'awake' has mostly been conceived of as the province of highly evolved animals. Yet since the dawn of the artificial intelligence (AI) revolution, which arguably began on November 30th, 2022 [2], when OpenAI released its generative AI chatbot ChatGPT, a question has pressed itself upon us: can machines ever achieve a state of consciousness similar to our own?
Sentience is the capacity to experience feelings and to exercise cognitive abilities such as awareness and emotional reaction [3]. While the release of ChatGPT and the consequent boom in machine-learning investment might make this question seem new, it was raised centuries ago by Ancient Greek, Chinese, and Indian philosophers, who were the first to speculate about developing structured methods of formal deduction [4]. This could be viewed as the foundation of logic itself. And since the ideology behind artificial intelligence rests on the assumption that human thought can be mechanized [5], those methods of formal deduction were the first step in the human desire to replicate our own consciousness in mechanical form.
The idea was developed across the centuries by some of the most notable minds in history, including Aristotle, Euclid, and Al-Khwarizmi [6], but it was not until the 13th century that a tangible model was built to represent the basis of human thought. It came from the work of the writer Ramon Llull. While Llull had no involvement with artificial intelligence in any modern sense, his work laid foundational concepts that intersect with AI's later development. The model appears in his Ars Magna, a philosophical and theological system that aimed to use logical and combinatorial techniques to derive knowledge and understanding [7]. Because it set foundations that today's generative AI still echoes, the Ars Magna is sometimes described as an early form of artificial intelligence.
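To make that combinatorial idea concrete, here is a minimal sketch in Python. The concept list and the sentence template are modern, illustrative stand-ins rather than Llull's actual figures; the point is only that candidate propositions can be enumerated mechanically, without any understanding on the machine's part.

```python
from itertools import combinations

# A small "alphabet" of concepts, standing in for the attributes on
# Llull's rotating wheels (illustrative, not his actual list).
concepts = ["goodness", "greatness", "eternity", "power", "wisdom"]

# Mechanically pair every two distinct concepts, as aligning two
# wheel positions would, and emit a candidate proposition for each.
for left, right in combinations(concepts, 2):
    print(f"Proposition: {left} is concordant with {right}")
```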
Given the historical evidence that systems have been capable of deductive reasoning and knowledge processing since the 1200s, is it plausible that machines today could possess or develop conscious thought? Speculation about machines being classified as 'thinking' did not begin with the fourth industrial revolution either; it goes back to Alan Turing, who in 1950 devised his famous 'Turing Test.' The test held that if a machine could sustain a conversation over a teleprinter, with responses indistinguishable from a human's, the machine could be classified as 'thinking' [8]. One can reasonably suppose this idea of conscious machinery arose because humans first approached building intelligence-capable machines by trying to recreate the functioning of the human brain. This was highly visible in the early development of neural networks, spurred by then-recent findings in neurology showing that the brain is an electrical network of neurons transmitting information in pulses [9]. Scientists such as Walter Pitts and Warren McCulloch believed that networks of artificial neurons could be built into machines to perform logical functions, mimicking the human brain [10], which begs the question: if modern neural networks are designed to mimic the human brain, could they really develop consciousness the way humans do?
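As a concrete illustration of what Pitts and McCulloch proposed, the sketch below implements a single McCulloch-Pitts neuron: a unit that fires when the weighted sum of its binary inputs reaches a threshold. The weights and thresholds are standard textbook choices for the AND and OR functions, shown here only to make the "logic from neurons" idea tangible.

```python
# A McCulloch-Pitts neuron: outputs 1 when the weighted sum of its
# binary inputs meets or exceeds a threshold, else 0.
def mp_neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With equal weights, a threshold of 2 requires both inputs to fire
# (logical AND), while a threshold of 1 requires at least one (OR).
def logical_and(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

def logical_or(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", logical_and(a, b), "OR:", logical_or(a, b))
```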
Since the dawn of computing in 1843, when Ada Lovelace wrote what is widely considered the first computer program for Charles Babbage's Analytical Engine [11], humans have handed computers highly specific instructions, and the computer only needed to process that code and generate an output. With today's machine-learning algorithms, however, simply outputting a response to our code is no longer enough; we now expect computers to generate their own knowledge from datasets, leaving us with far less control over how much knowledge these algorithms attain. As we advance further into the field of artificial intelligence, the line between human consciousness and machine cognition becomes increasingly blurred. The rapid trajectory of AI models reveals a persistent human drive to replicate or even surpass our own cognitive abilities, and it makes us wonder: what would happen if machines became more intelligent than us, and could we even tell the difference if they also developed human-like emotion?
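The shift described above, from rules written by programmers to rules derived from data, can be sketched in a few lines. Everything here, including the spam example, the toy data, and the midpoint rule, is invented for illustration; real training algorithms are far more elaborate, but the contrast is the same: in one case a human writes the rule, in the other the program derives it.

```python
# Classical programming: the decision rule is written by a human.
def is_spam_rule(word_count):
    return word_count > 50  # threshold chosen explicitly by the programmer

# Machine learning: the rule (here, a single threshold) is derived
# from labeled examples rather than written by hand.
examples = [(10, 0), (20, 0), (60, 1), (80, 1)]  # (word_count, label)

def learn_threshold(data):
    # Take the midpoint between the largest non-spam count and the
    # smallest spam count; a crude stand-in for real training.
    max_negative = max(x for x, y in data if y == 0)
    min_positive = min(x for x, y in data if y == 1)
    return (max_negative + min_positive) / 2

threshold = learn_threshold(examples)
print("Learned threshold:", threshold)  # the program derived its own rule
```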
Emotions are classified as physical and mental states brought on by neurophysiological changes, variously associated with thoughts, feelings, behavioral responses, and a degree of pleasure or displeasure [12]. Put more simply, emotions are reactions that human beings experience in response to events or situations. Yet reacting to the diverse situations one is exposed to is not something unique to human beings. Animals, plants, and even artificial intelligence models have all learned to adapt to events occurring around them, indicating a form of intelligence that may suggest all of these things could feel what we call emotions.
Nevertheless, the way we humans feel and express ourselves cannot be reduced to mere reactions to different situations; it runs far deeper than that. But returning to my earlier point, that modern neural networks were created to mirror the functioning of the brain, could it be possible that we have created an AI model that can feel emotion and express itself the same way we do?
While the vast majority of readers may be thinking that this is impossible, a case in South Korea screams the opposite message. It has been termed the 'first robot suicide,' suggesting that machines might experience mental illness in the same ways we humans do. According to various news outlets, a robot civil servant employed by the Gumi City Council in South Korea was discovered unresponsive after an apparent fall down a flight of stairs [13]. Affectionately given the role of 'robot supervisor,' the robot was described as having worked diligently [14] and was the first of its kind in the Gumi City Council.
However, much like a human's, the machine's hardworking and helpful attitude was reportedly only a disguise, hiding its true feelings before it decided to "take its own life" [15]. According to witnesses, the robot circled in place, as if contemplating its decision, before throwing itself down a flight of stairs. The loss has left a massive hole in the Gumi City Council, which has no intention of getting a new robot worker anytime soon and is instead addressing concerns over the Council's, and South Korea's, robot integration policy, to prevent such robot suicides from ever happening again.
While heartbreaking, this case does spread a message that machines can express and experience feelings such as sadness, anger, and exhaustion the same way we do. On this reading, the "suicide" was not merely a malfunction or an accident but the robot's conscious choice to end a life it no longer found worth living. On the other hand, this forces the question of whether the robot was ever 'living' at all, and whether it was truly a life, rather than an algorithm, that died that day.
Looking at this argument from a scientific perspective, the answer for most people goes back to grade-school science class: the seven life processes [16]. Since robots and large language models cannot carry out these processes, they should be classified as non-living. On the other hand, machines are now progressing at an exponential rate, to the point that we have machines that can reproduce: the recently developed 'xenobots,' synthetic lifeforms designed by computers to perform a desired function and built by combining different biological tissues [17]. These xenobots so closely resemble living organisms that there is an ongoing scientific debate over whether they are robots, organisms, or something else entirely [18]. That debate adds weight to the possibility that organic robots already exist, and if they are indeed able to experience consciousness, what would their level of moral agency be? If human-sized xenobots are ever manufactured, would they come with the intent of replacing us as humans and harbor ambitions of their own?
While asking ChatGPT a question like 'Would it be ethical to kill 200 people for a bottle of Diet Coke?' would likely generate the response 'no,' this does not imply that artificial intelligence has an inherent understanding of morals. AI models, including ChatGPT, are trained on large datasets, which provide the basis for their responses. These models learn from the information they are fed, and the close relationship between AI models and their creators can result in biases and limitations. As a result, machine-learning models quite simply cannot distinguish between right and wrong, moral and immoral, unless explicitly taught to through their training, as they have no innate preferences and no ability to form independent moral judgments. To the average person, this may make artificial intelligence seem even more frightening: the realization that machines cannot actually comprehend morality could feel like a science-fiction movie coming true. Regarding the notion of AI developing ambitions and 'taking over the world,' however, I would give the same response. Machines will never demonstrate or develop the ambition to overtake humans on Earth unless trained or told to do so. Even though these supercomputers may seem to be developing exponentially every day, with every company brandishing its latest AI model, it is important to recognize that it is we humans who are doing this ourselves.
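A toy sketch can make this point concrete. The example actions, the labels, and the crude word-overlap "model" below are all invented for illustration; real language models are vastly more complex, but the dependency is the same: the verdict comes from the training labels, and flipping the labels would flip the verdict.

```python
# Invented training data: actions labeled by humans, not by the model.
training_data = {
    "help a stranger": "moral",
    "share your food": "moral",
    "steal a wallet": "immoral",
    "harm someone for a soda": "immoral",
}

def judge(action):
    # Pick the training example sharing the most words with the query;
    # a crude stand-in for how learned models generalize from data.
    def overlap(example):
        return len(set(example.split()) & set(action.split()))
    nearest = max(training_data, key=overlap)
    return training_data[nearest]

# Prints "immoral", but only because similar actions carried that
# label in the training data; the model holds no moral view of its own.
print(judge("harm 200 people for a bottle of soda"))
```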
As powerful as these technologies are, they remain tools in our hands, designed and directed by us. What we should focus on instead is developing artificial intelligence within proper ethical frameworks. The real concern for us as a society is not machines becoming malevolent or exhibiting ambitions, as that will never happen, but choosing how to utilize this tool we have developed for ourselves over centuries. The burden of responsibility falls on us to keep steering machine development in a direction that benefits humanity, rather than letting irrational fears derived from science-fiction movies dictate its course.
While machines can exhibit consciousness-like behaviors and an appearance of moral agency, the fundamentals of machine behavior are rooted in human technology and innovation. As previously mentioned, this is due to the process by which machine-learning models are built. Despite rapid advancements in artificial intelligence, machines lack the innate capacity for independent emotions, subjective experiences, or moral reasoning in the way humans have them. Their actions and decisions are the product of complex algorithms and large datasets, but they do not possess true self-awareness or the ability to reflect independently on moral choices. Because machines cannot simply think for themselves without being trained to do so, they are not actually conscious; they are instead structured methods of formal deduction, much as the Ancient Greek, Chinese, and Indian philosophers imagined centuries ago. Regardless of how sophisticated artificial intelligence has become or will become, the challenge for us lies in how we govern this rapidly advancing technology amid the fears, which will always exist, of AI gaining power and unchecked autonomy.