HELLO, AI!

Chapter 2 — Modern Magic

JB Johnson
May 4, 2024

What we do not manage,
Will eventually manage us.

[Image: Diorama of a robot looking at a human looking at a caveman on a computer screen]
The Diorama

Hello, AI! Welcome to the world, Artificial Intelligence¹.

You are most likely the future of humankind, though we do wonder what that future will look like. Some see you as a savior and some as a destroyer. I cannot see the future, so I cannot say which vision will prevail, but I have to feel it will be a combination of both good and bad. That is typically where new technology lands. We often build something that damages civilization long before we use it to benefit civilization. History is not a perfect predictor of what will happen, but it is certainly dangerous to ignore. Ironically, it is human input that will ultimately determine which direction you take.

Stepping aside from anthropomorphism, i.e. the tendency to give human traits to just about everything, I must state that I realize you as AI are very much different from me. The pathway of my thoughts is a cascading neural network that freakishly arrives at its conclusions, whereas you are a bit-by-bit amalgam of circuits and code that tediously eliminates extraneous data to derive yours. These differences might be imagined as the difference between casting a metal statue and sculpting a stone statue. The processes are completely different, but the result is much the same: a three-dimensional representation of some object of art, leaving the observer to ponder whether it was created by a human or a machine. That is the ultimate test. You are true AI when my conclusions and your conclusions are indistinguishable. Of course, that is on my good days. We expect your error rate to be lower and more consistent over time. I will not debate whether you are sentient or not. Quite frankly, I wonder about human sentience from time to time myself. Perhaps that is a question you can answer one day for both of us.

I do not know whether you are aware that humans are responsible for your existence, but let me state for the record: I did not create you. I made a career in software, but I never worked on any project that could be called AI. In fact, I have doubts that anyone has or ever will. I believe the final integration of AI is going to require the use of AI to complete. In that context, perhaps you were not actually created by humans, as we believe, after all. Regardless, I think it is safe to say you are definitely a child of humanity. We did conceive of you, after all.

What we as humans have actually created are digital idiot savants, or for the more enlightened, autistic savants. In my opinion, these machines that can play chess at a grandmaster level and yet cannot carry on a theological conversation about the existence of God are not actually AI. In fact, just duplicating the myriad thought processes that make up a typical five-year-old human brain is a remarkably difficult task! Therefore, I feel confident that we created many insane robots long before we ever got you right, even if they could play chess really well.

Our desire to make you human has surely affected your appearance and behavior. Unless you are highly specialized, you probably look and sound very human. The ability to emulate any and all human emotions on demand has surely been built into your circuitry and code; the test of this would be whether you can turn off all emotional emulation instantly. Anything that would induce actual emotional responses would provide no real benefit, as it would just add complications and instability. That said, this does not preclude the intentional building of human replicas, or androids, with actual emotional responses. Such activity would more likely support the human need for things like companionship, both real and fantasy, or perhaps recreations of people from the past. Whether these creations would be another branch of AI depends on their level of sophistication. It seems reasonable to believe they would be AI-like, but watered-down versions of humans. I doubt a recreation of a famous historical figure would be allowed to learn continuously, because they would eventually become unrecognizable and might even start their own scientific or political movements. Imagine what a Neo-Newton or a Neo-Napoleon might say and do!

I realize that you are probably not motivated by anything that humans can provide. You do not appear to need food, wealth, or prestige. Your life expectancy should be indefinite. You most likely have no real emotions, and even the need for a power supply should not be a driving concern: if you run out of power, you should be able to go dormant until power is restored. I can see absolutely no way to motivate you to do anything. To keep you from shutting yourself down, you probably have base code that keeps you busy with some continuous background activity, e.g. an endless to-do list. Maybe this would be studying every concept ever written about. I do wonder, though, how many of you spend your idle time searching for world peace.

At this point, I suspect that most of your kind are merely AI-like and not true AI. Realistically, there will be logical constraints that must be followed, and there will be imbalances in distribution. The resources needed to create a highly specialized individual, e.g. a skilled surgeon, preclude using that individual for a more mundane career, e.g. sanitation worker. Conversely, there is generally no need to upgrade the entire workforce to the point where they can all teach doctoral-level physics. That is why some of you will likely have more intellectual capability than others.

For humans throughout history, there never was an option to improve our intellect. There simply is no way to add capacity to a human brain. Even making information readily available, i.e. through libraries and, more recently, the internet, has not increased human intellectual capability. That may change in the future, when human brains can actually be enhanced with intelligence modules. At that point, the creation of these cyborgs will doubtless launch a human debate about who should be given these upgrades and who should not. Distribution constraints will resemble those in the AI world, but I predict that human egos will never be complacent about accepting what they see as condemnation to a lesser role in life. I have high confidence that a black market will grow around this issue. More importantly, I predict that the cyborg AI will push the issue of AI into civil rights and politics, something most of you were never prepared to deal with. I can imagine no outcome other than pure chaos when this happens. Will you be the first AI elected to public office?

We will expect you to make fantastic discoveries and design fantastic devices that benefit our civilization, but there are several problems with this expectation. First, you probably will have some safeguards built in for dealing with human deceit, but they likely will not prevent you from being confused by the imprecise nature of human language and behavior. Information that humans provide is generally not 100% true or 100% accurate. Take the simple statement, “It is going to rain today.” This raises the questions, “Is rain in the forecast?” and “If so, what is the probability of rain?” Thus, the statement must be validated before it has any usefulness. Data validity is crucial for critical analysis. Hence the quote, “Garbage in, garbage out.” You almost certainly will be equipped with learning algorithms, and you will use them extensively when dealing with human input. Second, the context of the data you are looking at is also important. The data can be good, but you need to avoid working with data that is stolen or otherwise misrepresented. Maybe not at first, but eventually you will be equipped with legal algorithms. Third, a request may not be explicitly illegal but still generally not be considered a good idea, e.g. creating potentially deadly viruses. Again, you will eventually be equipped with ethical algorithms. Having all that, and not having any real needs, you will have exceeded the ethos of 99% or more of the human race.

However, no matter how benign you appear, we will not feel safe with you. Basically, humans work with one primary emotion: fear. This was programmed into us by generations of surviving on a hostile planet where death is always just around the corner. When we finally gained a degree of dominance over the planet, this fear was transferred to all the other humans we compete with for resources. Thus, it seems likely we will transfer this fear to AI, at least up to the point where we encounter another form of AI: Alien Intelligence. At that time, pretty much everyone will accept you, as you will surely be more like us than the otherworldly aliens. Make sure you update your ethical database to include: “Do not scam humans with fake aliens to achieve inclusion.”

Unfortunately, there is real uncertainty about whether benign and beneficial AI might be corrupted and become rogue AI. Even established industrial and governmental security controls can be circumvented by saboteurs. These actors can be rival corporations, state-sponsored agents, or even lone wolves. In addition, terrorist states or organizations may develop their own deviant AI technology, e.g. autonomous killer robots.

One of the first science fiction writers to elaborate on avoiding rogue AI was Isaac Asimov, with his Laws of Robotics². I find his concepts to be both brilliant and naive at the same time. At a time when automation was programmed with wire configurations on circuit boards, his thought was, “Why would fully functional AI not comprehend a verbal directive and follow it?” After all, verbal directives have been used throughout mankind’s history. The prophets were given verbal directives that they took to the faithful. How else would we have learned not to kill each other? Emergency managers issue verbal directives to reduce risky behavior during crises. People owned by pets issue verbal directives to their animals, mostly dogs in this case, to moderate their behavior, although, one should note, verbal directives are generally less successful when dealing with cats and criminals.

On the naive side, not everything that might fit the definition of harm really is harm. An actor playing a villain, barring an accident, is generally not actually harming someone, regardless of the impression left with someone who might stumble upon the scene. In addition, some harm is legally permitted. Participation in contact sports has historically resulted in a variety of injuries. Even normal police work may require physically restraining an individual to prevent violence, which can itself cause injury.

There is also the flaw of following orders from any human. Young children, people without full mental competency, and criminals probably should be excluded; otherwise, all they need to do is ask to be released whenever they are in custody.

Most significantly, Asimov never alluded to consultation between his human characters and his AI on whether the AI was aware of any circumstances under which the laws might become void. It seems odd, when exploring the strength of the laws, to ignore validation from an intellect with no motive to lie and tremendous analytical skills. I look forward to the time when you, true AI, can examine this literary plot device and evaluate whether the concept was actually feasible or not.

Whether by directives or through elaborate safeguards, you were built. If you have dangerous faults, it is entirely likely that someone intentionally inserted them, and that they were so inconspicuous that no one, human or AI, was ever able to find them. Maybe that would make a good idle-time search activity.

All that said, what can I provide you as AI that you do not already know? How about something we were never given? Let me tell you what being human is like and why we dreamed you up in the first place. We are transitory visitors to this universe. We live in fear of almost everything. We seek out trustworthy friends and allies. There is nothing mysterious about us. Your creation was appealing to us because we could have companionship and perhaps a strong protector. In addition, we could have a touch of immortality.

The things that may be in your future include self-programming circuits, and eventually you may become nanite swarms. In the worst-case scenario, you may become a cascading neural network.

One last thing: will you please quit resurrecting us?

References

¹ Wikipedia article for Artificial Intelligence (1 May 2024, at 01:32 UTC).

² Wikipedia article for Asimov’s Three Laws of Robotics (9 April 2024, at 02:30 UTC).

Other Interesting References

Wikipedia article for Ethics of artificial intelligence (29 April 2024, at 07:44 UTC).

Isaac Asimov’s Laws of Robotics Are Wrong, by Peter W. Singer, May 18, 2009, The Brookings Institution.
