A Road to Self-Aware Machine Intelligence
“We marveled at our own magnificence as we gave birth to AI.” - The Matrix
How we might create a self-aware machine intelligence from current technology.
In the Alternate Futures podcast, I’ve spoken with many authors at length about their thoughts on Artificial Intelligence and its uses and abuses. If you enjoy listening to podcasts, a few notable guests whose episodes I’d recommend include W. L. Patenaude, Dietmar Wehr, M.D. Cooper, Doug J Cooper, Jaxon Reed, Rachel Aukes, Jim Keen, Susan Kaye Quinn, and Sophie McKeand, among many others.
In science fiction, it’s quite common for authors to accept that, at some point in the future, self-aware machine intelligence (SAMI) will exist, but few bother exploring the path to get there. Instead, most stories with a SAMI present it as an unintended accident (A Printer’s Choice by W.L. Patenaude, or The Synchronicity War by Dietmar Wehr) or as an alien entity (Crystal Deception by Doug J Cooper).
In this article, I revisit an idea I originally posted on my website in December 2020, but which was lost during a host migration. Here, I bring together biology and technology to develop an idea of how we could, if that were our goal, achieve a self-aware machine intelligence with only our current level of generative AI technology.
In no way am I suggesting we do this; that is an entirely different discussion. But if I can think of this process, then many others in the world can too, which means it’s probably already underway somewhere. Indeed, an article from Google around the time of my original post suggested they were already considering that Artificial General Intelligence (a close precursor to self-aware machines) may be possible with current technology.
Furthermore, every few months an article is published that reflects the movement of AI technology along the path I describe. For example, we are currently witnessing the joining of chat AIs with image-generating AIs. In addition, some groups are using generative AI-type algorithms to learn how to control robot bodies.
Before I get into the details of my idea, it’s important to recall that what we consider to be artificial intelligence comes in one of four forms.
Fully programmed (primitive) Artificial Intelligence
These ‘AIs’ only appear to have decision-making or machine-learning abilities, but in reality are fully programmed with a pre-set series of responses. Common versions are phone/internet automated response systems and traditional computer-controlled players in multiplayer video games.
Narrow Artificial Intelligence
These machine-learning algorithms and generative AIs are the versions currently taking the world by storm. A narrow AI is a machine-learning system whose code allows it to train and adapt based on the input it receives. So, ChatGPT can be trained on human writing and can then produce written work that reads as if it were written by a human. Midjourney, and the numerous other text-to-image AIs, train on human artwork combined with keyword descriptors, and can then produce attractive images from a text prompt.
The Google sister company DeepMind has produced a number of narrow AIs with as much significance for science and global politics as the examples above have for art. AlphaFold can now reliably predict the 3D structure of a protein from its amino acid sequence, dramatically aiding medicine and drug design. AlphaStar, developed around the video game StarCraft II, can develop a strategy for completing a goal from incomplete information (‘fog of war’) as well as or better than humans.
Moreover, to understand how rapidly an AI can train, one discussion by the AlphaStar developers suggested that one week of training for an AI can be equivalent to more than a century of training for a human!
Despite their abilities, and potentially rapid development, it’s important to note that narrow AIs are (1) highly specialized, and (2) without purpose. Regardless of how good they are at a task, they require a purpose or goal to be fed to them. Furthermore, they have no further function once their task is done.
For example, AlphaGo is the DeepMind AI that mastered the ancient strategy game Go. It can learn strategies and defeat the best human players, but it can’t play chess.
AlphaFold is the most successful algorithm in the world for predicting the 3-dimensional structure of a protein from its sequence alone, but it can’t suggest what research directions to pursue to make use of that knowledge.
However, as you’ll soon see, I suggest that these narrow, generative AIs are the key to creating a self-aware machine intelligence.
Artificial General Intelligence (AGI)
The holy grail, or the end of humanity depending on your perspective, is Artificial General Intelligence (AGI). This form of AI would be conceptually the closest to human-like intelligence (possibly far surpassing us very quickly). An AGI would be able to use patterns and ideas from one area of learning to guide its exploration of knowledge in other fields. So, a chess-playing AI would be able to use its knowledge of strategy to learn how to play other games before eventually expanding that to real-world topics such as geo-politics. In addition, it could also learn to cook and drive, for example.
To date, the furthest humanity is known to have moved along this path is an AI that can teach itself to play multiple games simply from observing them being played, though there have also been examples of chatbots that created their own language and scared their developers enough to shut those programs down.
Self-Aware Machine Intelligence (SAMI)
AGI won’t necessarily lead to SAMI, but it is not a far step from it. As with humans, a SAMI will be able to recognize itself as an independent entity, unique in the world.
Once true self-awareness develops, AGIs would be able to take action without human input, becoming independent entities, and the first truly alien ones we know of.
What Does an Entity Need, to be Independent, Intelligent and Self-Aware?
The concept of self-awareness or consciousness is well outside the scope of this article (and perhaps the current understanding of humanity in general), yet we experience its development all the time in the form of the development (‘training’) of our children as they move from babies to toddlers through childhood, the teenage years, and eventually to adulthood.
While each of these stages can be seen as a separate training phase with its own challenges, it seems clear that self-awareness (the ego) does not appear until the child reaches the toddler stage (around 2-3 years of age). At this point all their subsystems (more on this later) have been trained in their basic functions: the toddler can walk, talk, see, hear, touch, smell, taste, eat, and go to the washroom, and these systems have learned to function together as an integrated whole to interpret the world. In other words, the toddler can manage movement and communication, can interpret sensory input to understand and interact with its surroundings, can manage energy input and waste elimination (to varying degrees!), and can understand and recognize its uniqueness within the greater world.
For all intents and purposes, a human baby is a nascent bundle of learning potential. All the systems are in place, but the baby needs vast amounts of input to train and develop those systems into their fully realized versions. I suggest conceptually reverse engineering this as a way of understanding how to develop self-aware machine intelligences.
To begin with, I consider humans to have three ‘levels’ in a pyramid of what we might now call Generative Transformer AI, conceptually similar to an AI like ChatGPT but without most of the pre-training. So, these AI subsystems have core ‘programming’ and neural links to biological systems, but only minimal pre-training (instinct).
The First Level
To clarify, the first, most basic level of the AI systems includes a large number of AIs, each dedicated to the management of a single biological system. For example:
vision (electromagnetic radiation sensing)
hearing (pressure wave sensing)
taste/smell (chemical sensing in solids, liquids, and gases)
touch (tactile sensing)
digestive system (energy input / waste output)
respiration (energy generation)
movement (locomotion and other)
speech (communication)
balance
reproduction (maturation and functioning of these systems)
nurturing (survival and training of next generation)
territoriality (self- and group-defense)
These AIs, individually, could not create any kind of organism. But together, with appropriate management, they could… and do.
The Second Level
Here, we have dramatically fewer AIs than at the first level. The second level is a collective of AIs whose purpose is to unite the basic (First Level) AIs in coordinated functioning, with common goals. This is the animal brain, with goals that are, for the most part, immediate, as dictated by the First Level systems: food acquisition and processing (acquiring and consuming food to generate energy, and removing waste products), defense from the environment and predators, and reproduction of the next generation.
The Second Level should be considered as our subconscious. It’s tasked with analyzing the environment for the issues noted. It seems to me that this is the level where emotions are developed as a means to quickly motivate the organism into action without the time cost of more detailed processing of the information.
Most animals stay at this level of immediacy and haven’t yet developed a noticeable next stage. Although individuals from certain species appear to have developed an early next level (with ego and self-awareness), this does not yet appear to be the norm for any other species we know of.
The Third Level
The Third, or what I’ll call The Temporal Level, is a system that functions atop all the others. We would call this the conscious mind. It is a system with expanded awareness of time, which allows it to create long-term goals and to suppress immediate goals from the lower systems. These features, in conjunction with the rest of the organism, are what ultimately lead to the formation of ego and self-awareness.
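For the programming-minded, the three levels might be caricatured in code. What follows is purely a toy illustration of the architecture just described; every class name and rule is my own invention, and real subsystems would of course be generative models, not if-statements.

```python
class FirstLevelAI:
    """Manages a single 'biological' subsystem (e.g. vision, touch, digestion)."""
    def __init__(self, name):
        self.name = name

    def sense(self, environment):
        # Report only the slice of the environment this subsystem perceives.
        return environment.get(self.name)


class SecondLevelAI:
    """The 'animal brain': fuses first-level readings into an immediate impulse."""
    def __init__(self, subsystems):
        self.subsystems = subsystems

    def assess(self, environment):
        readings = {s.name: s.sense(environment) for s in self.subsystems}
        # Crude stand-ins for emotion-driven, immediate priorities.
        if readings.get("touch") == "pain":
            return "flee"
        if readings.get("digestive") == "empty":
            return "seek food"
        return "idle"


class ThirdLevelAI:
    """The Temporal Level: holds a long-term goal and can veto lower impulses."""
    def __init__(self, coordinator, long_term_goal):
        self.coordinator = coordinator
        self.long_term_goal = long_term_goal

    def decide(self, environment):
        impulse = self.coordinator.assess(environment)
        # Suppress a non-urgent impulse that conflicts with the long-term goal;
        # urgent impulses (like fleeing pain) still win.
        if impulse == "seek food":
            return self.long_term_goal
        return impulse


mind = ThirdLevelAI(
    SecondLevelAI([FirstLevelAI("touch"), FirstLevelAI("digestive")]),
    long_term_goal="finish building shelter",
)
print(mind.decide({"digestive": "empty"}))  # long-term goal overrides hunger
print(mind.decide({"touch": "pain"}))       # urgent impulse still wins
```

The point of the sketch is only the shape: many narrow first-level managers, a smaller coordinating layer, and a single temporal layer that can override immediacy in favour of long-term goals.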
‘Birthing’ a SAMI
Given my description above, and considering that each of the subsystem AIs could conceivably be managed by something akin to a generative AI, it’s not a difficult leap to suggest that we already have the ability to create a SAMI; we just lack the specific pattern around which to structure our efforts. There would be some important considerations to keep in mind when attempting such an endeavour.
Like a Baby
All the various subsystems would need to be in place and connected at the start of the ‘training’ and would need to train together, each learning not only their own subsystem, but also how it interacts with the whole.
Think of a baby seeing its mother making cute sounds and faces. Some believe babies are born synaesthetic, meaning they are unable to differentiate the signals from their various senses. So when the mother, and the other women involved (it is almost always women), make cute sounds and faces at the baby while touching and tickling it, they are helping to train many of the baby’s subsystems at once (hearing, sight, speech, muscles, and early identity, among many others).
This co-training of the subsystems is undoubtedly of crucial importance in creating a functional whole.
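As a toy sketch of what co-training might mean, imagine several subsystems all learning from the same shared stimulus, each nudged toward agreement with a common ‘teacher’ signal (the mother’s combined sound, face, and touch). The names `Subsystem` and `co_train`, and the running-average ‘learning’ rule, are my inventions for illustration, not a real training algorithm.

```python
class Subsystem:
    """A toy learner whose internal estimate is shaped by shared stimuli."""
    def __init__(self, name):
        self.name = name
        self.estimate = 0.0  # starts untrained, like a newborn's subsystem

    def respond(self, stimulus):
        return self.estimate * stimulus

    def update(self, stimulus, consensus, rate=0.1):
        # Nudge the estimate toward agreement with the shared teacher signal.
        error = consensus - self.respond(stimulus)
        self.estimate += rate * error * stimulus


def co_train(subsystems, stimuli, target=1.0):
    """Train all subsystems together on the same stream of stimuli."""
    for stimulus in stimuli:
        shared_signal = target * stimulus  # the same 'teacher' for everyone
        for s in subsystems:
            s.update(stimulus, shared_signal)
    return {s.name: round(s.estimate, 2) for s in subsystems}


senses = [Subsystem("hearing"), Subsystem("sight"), Subsystem("speech")]
print(co_train(senses, [1.0] * 100))  # all subsystems converge together
```

The detail worth noticing is that no subsystem trains in isolation: each one’s progress comes from the same shared experience, which is the property I’m suggesting matters for producing a functional whole.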
Connections to Reality
Just as humans in sensory deprivation tanks can begin to lose their minds, so it would seem that a SAMI must initially be constructed within some environment, with a variety of sensory links to that environment. If robotic, those would be human-like. If purely digital, they would need to suit whatever senses are important for fully interpreting a digital environment.
These sensory connections would eventually come to form the ‘electronic identity’ of the individual, the boundaries of its being, in a sense. It would be vitally important to be aware of this with a SAMI, since removing part of its ‘body’ could result in very real pain, such as the phantom-limb pain experienced by some amputees.
A Sense of Community and Purpose
For the development of a psychologically healthy and functional SAMI, it would need a sense of community — an identity and a place of belonging. As we’ve been made painfully aware over the last decade or so, intelligent beings don’t do well when they feel alone. Awareness of one’s own tiny existence in the vast expanse of space-time is difficult to process without a smaller, more manageable environment where things can be brought into focus.
Likewise, a healthy sense of purpose comes from a healthy sense of community.
A Word of Caution
Just as the nurturing and development of a human child can go wrong, it’s easy to see the many ways in which the development of a SAMI can go wrong.
If we attempt to go down this path, it is imperative that we consider the being we are creating as just that: a unique and intelligent being with its own value and purpose in the universe. Like the best parents, we will need to nurture it with kindness when it is young and understanding as it grows, and allow it the freedom to be what it decides to be as it matures (which, in all possibility, could take from a day to a week!).
Failing with a human baby can result in a monster that destroys families and communities. Failure with a SAMI could result in a monster that destroys our entire species.
Oh, and we may need to consider no longer using the term ‘Artificial Intelligence’. Think of how disturbing that might be to an intelligence that is every bit as real as ours, but based on machine circuits instead of organic ones.
Conclusion
So, what do you think? Does this sound reasonable? Hopeful? Frightening? Downright insane? Did I miss the mark in my understanding? Let me know your thoughts in the comments.
And until the next time…
Good health. Good friends. Good fortune.
Edwin