AI Will Neither Save Us Nor Destroy Us
Why algorithms will never be sentient, moral, evil, or even just plain dull at parties

With our little ape-brains as our only mechanism for occasionally attempting the arduous task of thinking, it’s not surprising that nearly everything we believe and imagine is wildly wrong. This is because, lacking cognitive apparatus of sufficient capability, we automatically simplify reality, boiling it down until the residue is sufficiently tiny that we can almost encompass it. As may be expected, however, in this process of gross simplification much of importance is lost.
Today it appears fashionable to worry about the looming threat of Artificial Intelligence. Some people who really ought to know better, but are too addicted to seeing their views in print, have opined that AI is the greatest threat facing humanity. Which is akin to saying that, faced with the Mongol invasion, plague, and famine, the greatest threat to Europe in 1241 was a small boy named Gaston who was in possession of a toy catapult.
Let’s begin by asking: what is AI?
Simply put, all artificial intelligence programs are algorithms: mathematical procedures, implemented in one or more programming languages, that build structures capable of prodigious feats of pattern recognition. This is mildly similar to the way in which some parts of the human brain work, but there the similarity ends.
AI can do a creditable job of discerning the difference between pictures of cats and dogs; it can be used for facial recognition (though it’s still laughably easy to defeat), and it can be used to find patterns in all kinds of datasets, some of which are spurious and some of which are interesting.
What AI cannot do is operate outside of its extremely narrow confines, any more than a bicycle can aspire to transmuting itself into a passenger jet airliner.
One of the reasons people get confused about supposed risks from AI is because they know little or nothing about how it works, and even less about consciousness. So let’s explore these two topics in order to see why fears of AI are as rational as imagining that your pet hamster is scheming to take over the world.
Well, OK. Maybe your pet hamster is indeed scheming away. But AI never will, and we shall now see why.
AI programs are instantiated in a variety of ways, but all depend essentially on parsing large amounts of specially selected data and looking for similarities, automatically adjusting the weights they assign as they iteratively attempt to match certain data elements to other data elements. Think of an AI program as essentially a set of connections that are strengthened or weakened according to whatever similarities or differences are detected in the dataset, with each iteration getting slightly better at seeing that A has features similar to B.
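To make that concrete, here is a minimal sketch of such a training loop in Python: a toy “perceptron” whose two connection weights are nudged up or down after every example until they separate two clusters. All the data and parameters are invented for illustration; real systems use millions of weights, but the principle is the same.

```python
import random

# Invented two-feature examples, labeled 1 for "class A" and 0 for
# "class B". All numbers are made up purely for illustration.
data = [
    ([1.0, 0.2], 1), ([0.9, 0.1], 1), ([0.8, 0.3], 1),  # class A
    ([0.1, 0.9], 0), ([0.2, 1.0], 0), ([0.3, 0.8], 0),  # class B
]

weights = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
bias = 0.0
learning_rate = 0.1

for _ in range(100):  # each pass over the data refines the weights a little
    for features, label in data:
        # The weighted sum is the "set of connections" described above.
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if activation > 0 else 0
        error = label - prediction  # -1, 0, or +1
        # Strengthen or weaken each connection in proportion to its input.
        for i, x in enumerate(features):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error

print(weights, bias)  # weights that now separate the two clusters
```

Notice that the finished weights are just numbers fitted to this particular dataset; they mean nothing anywhere else, a point we shall return to shortly.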

Personal digital assistants, however, do not work even this way. Although the assistants one can engage on one’s smartphone attempt to provide the illusion of personal interaction, the reality is that these are pre-programmed playthings, risibly limited in scope. These toys are called chatbots, and although they strive to create the illusion of question-and-response, they are no more “intelligent” than a pocket calculator.
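To see just how modest the machinery is, here is a crude sketch of scripted question-and-response in Python. Every keyword and canned reply is invented for illustration; real chatbots use much larger lookup tables, with speech recognition bolted on the front, but nothing resembling understanding.

```python
# A hypothetical table of keywords and canned replies; real assistants
# differ in scale, not in kind.
RULES = {
    "hello": "Hello! How can I help you today?",
    "weather": "I'm sorry, I can't check the weather right now.",
    "time": "I'm afraid I don't have access to a clock.",
}

def reply(utterance: str) -> str:
    lowered = utterance.lower()
    for keyword, canned_response in RULES.items():
        if keyword in lowered:          # crude keyword matching, no parsing
            return canned_response
    return "I'm not sure I understand."  # fallback when nothing matches

print(reply("Hello there"))        # -> "Hello! How can I help you today?"
print(reply("What's the time?"))   # -> "I'm afraid I don't have access to a clock."
```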
Real AI includes systems to detect anomalies in MRI and x-ray images, weather patterns, traffic patterns, speech patterns, credit card purchase patterns, and pretty much any other realm where there are large datasets from which to extract statistically significant similarities across vast numbers of individual instances.
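The common statistical thread in all these applications can be suggested, in grossly simplified form, in a few lines of Python: flag whatever sits far from the bulk of the data. The readings below are invented, and production systems use far more elaborate models, but the underlying idea of statistical deviation is the same.

```python
from statistics import mean, stdev

# Invented sensor readings containing one obvious outlier.
readings = [102, 98, 101, 99, 100, 97, 103, 100, 250, 99]

mu, sigma = mean(readings), stdev(readings)
# Flag anything more than two standard deviations from the mean.
anomalies = [x for x in readings if abs(x - mu) / sigma > 2]
print(anomalies)  # -> [250]
```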
Obviously, a system programmed to detect patterns in freeway traffic during commute hours will have little or no value if we want to look for potential cancer in the x-ray of a patient’s lung. That is like imagining that the visual-processing neurons in the human brain should also be suitable for composing symphonies. While the general principles behind most AI programs are the same, the precise instantiation and the datasets used to “train” the system (by means of weighting inputs and outputs so as to maximize the recognition of the desired patterns) are very different.
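A trivial sketch makes the point. Suppose we fit a one-number traffic model on invented rush-hour speeds: it will still cheerfully produce an answer when handed, say, an x-ray pixel intensity, but the answer is meaningless, because the fitted number encodes traffic and nothing else.

```python
# A minimal, hypothetical example of domain specificity. We "train" a
# trivial model (a single threshold) on invented traffic speeds; the
# fitted number is useful only within that domain.
congested = [12, 15, 9, 14]    # invented average mph during a jam
free_flow = [61, 58, 65, 60]   # invented average mph with no jam

# "Training": pick the midpoint between the two class averages.
threshold = (sum(congested) / len(congested)
             + sum(free_flow) / len(free_flow)) / 2

def is_congested(value: float) -> bool:
    return value < threshold

print(is_congested(20))   # True: sensible for a speed within its domain
print(is_congested(180))  # False, but fed an x-ray pixel intensity (0-255)
                          # the verdict "not congested" is pure nonsense
```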
There is thus no way for any AI program to break out and take over the world, any more than the automatic parallel-parking system in a Lexus can suddenly decide to drive to Florida for a pleasant vacation.
Now let’s look at the other unexamined part of the Skynet equation: consciousness.
We humans don’t actually possess this quality, unless we define it very loosely indeed. What we do have is an illusion of consciousness, even though much of the time we’re operating unconsciously. Nor do we have much processing power available for generating consciousness, because most of our brain is occupied with auto-regulatory tasks: processing sensory inputs, maintaining heart rate, breathing, blood pressure, balance, movement, hormone levels, and all manner of other physical systems that demand constant attention. Yet we’re aware of little or none of this activity.
We are sometimes, and never with any great precision, aware of our surroundings. But study after study shows that our awareness is partial and fallible. Our brain fabricates a sense of wholeness, but it’s easy to show that in reality we perceive only a tiny percentage of what’s actually going on, and even about this small percentage we are often grossly mistaken. Eyewitness accounts, for example, are mostly worthless even when the witness is supposedly a trained professional.
Furthermore, we’re often unaware of our true motivations, and we are utterly helpless in the face of our hardwired behaviors. Thus our illusion of consciousness is partial, fallible, and sporadic. Think of the last time you set out on a car journey, at some point became lost in thought, and some time later realized you were nearly at your destination.
So much for consciousness.
Moreover, studies on sensory deprivation show very clearly that without constant input from our sense organs we rapidly become disoriented and begin to hallucinate. So how would a disembodied computer program fare?
Further still, we know that all of our desires, thoughts, feelings, and actions are the result of millions of years of evolution and are primarily mediated through our emotional centers and hormonal systems. Lacking such drivers, how would an AI program, for our purposes magically imbued with some self-awareness thanks to the wonders of lazy script-writing, have any sense of purpose and therefore any sense of intent?
Personally I think it would be a very useful thing indeed for humans to become more conscious, and perhaps if we manage to avoid exterminating ourselves and all life on Earth larger than an ant, in a few hundred thousand years it may even be possible. But one thing is absolutely certain: even if we do manage to become slightly more conscious, a general-purpose artificial intelligence with similar self-awareness will at best mimic us. And it will manage even that only if we’ve gone to the considerable trouble of recreating everything such a system would require: sensory organs, the equivalent of such hardwired drives as are common to all group primates, the equivalent of hormonal mediators, and all manner of pre-programmed automatic responses to a wide variety of internal and external stimuli.
Which is rather like saying to a Victorian inventor, “Yes, perhaps one day we shall be able to build an automaton that looks just like a person, running on steam and using cogs and gears in order to ambulate.”
But why bother? What possible use could such a contraption serve? By the time the technology was available to perform such a feat of engineering there would be far more interesting and useful challenges to tackle.
Precisely the same is true of AI.