Troy M. Miller
Artificial intelligence is a touchy subject for the
human race. The very mention of the term conjures up images of
apocalyptic societies where intelligent supercomputers have either
enslaved the human race or eradicated the inferior species altogether.
For some, the connotation of "artificial intelligence"
attacks the very core of the human spirit, the pride of our race.
The very thought of an "intelligent" computer that is
on par with, or more likely superior to, our own brains sends chills
down the spine.
Are these concerns realistic? Or are they unfounded worries of
people who don't understand the issue? Some proponents of artificial
intelligence insist that such concerns are the result of semantic
misunderstanding. Artificial intelligence does not equate to artificial
life, they claim. AI refers only to a computer that is able to
"seem" intelligent by analyzing data and producing a
response. One example is the "smart agent" that asks you
certain questions and then returns recommendations based on your
answers, all within a user-friendly environment. Other
examples include computers that can "learn" from mistakes
in a limited way, such as IBM's Deep Blue chess program, which beat Garry Kasparov.
The defining aspect of this view is that the computer is limited
in its capacity; its intelligence is, in effect, only artificial.
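A "smart agent" of the kind described above can be pictured as little more than a rule-based question-and-answer lookup. The following is a minimal sketch of that idea; the questions, answers, and recommendations are all invented for illustration, not taken from any real product:

```python
# A minimal rule-based "smart agent": it maps answers to pre-authored
# recommendations. The program only seems intelligent because a human
# wrote its rules. All questions and recommendations are invented.

RULES = {
    ("action", "short"): "Try a 90-minute action thriller.",
    ("action", "long"): "Try an epic adventure trilogy.",
    ("comedy", "short"): "Try a sitcom episode.",
    ("comedy", "long"): "Try a feature-length comedy.",
}

def recommend(genre: str, length: str) -> str:
    # Look up the pre-authored rule; fall back to a default answer.
    return RULES.get((genre, length), "No recommendation available.")

print(recommend("action", "short"))
```

Everything the agent can ever say is already written down in its table, which is exactly the sense in which this kind of intelligence is "only artificial."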
Other supporters of AI take a more extreme view. They describe
what is often referred to as a "learning computer": one
that can not only react to input but learn from it. The computer
is able to interact with its environment, make mistakes, and re-write
its code to handle the resulting circumstances. This view brings
the computer closer to being compared with a human; such a
computer is likened to an infant that starts out with a
certain amount of information or instruction and then "grows"
into a more intelligent, rational adult. The computer is now able
to analyze, reason, and, given a situation, make an "intelligent"
decision based on past experiences. Such a computer would be desirable
for formulating and evaluating scenarios incredibly quickly,
advising its human counterparts or, in some cases, implementing the
decisions itself. We see a computer of this type in the movie
WarGames, where it is used to simulate military strategy.
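The "learning" described above can be illustrated, in a very limited way, by a program that adjusts its behavior from the feedback its mistakes produce. The sketch below is a toy example under invented assumptions (the three actions, the hidden reward values, and the scenario are all made up); it discovers the best action purely by trial and error:

```python
import random

# A toy "learning computer": it starts with no knowledge of which
# action is best, tries things, and refines its estimates from
# feedback. The environment and its rewards are invented for
# illustration only.

def environment(action: str) -> float:
    # Hidden rule the program must discover: "retreat" scores highest.
    return {"attack": 0.2, "hold": 0.5, "retreat": 0.9}[action]

def learn(trials: int = 1000, seed: int = 0) -> str:
    rng = random.Random(seed)
    actions = ["attack", "hold", "retreat"]
    totals = {a: 0.0 for a in actions}   # accumulated reward per action
    counts = {a: 0 for a in actions}     # times each action was tried
    for _ in range(trials):
        a = rng.choice(actions)          # explore by trial and error
        totals[a] += environment(a)
        counts[a] += 1
    # After many "mistakes," pick the action with the best average reward.
    return max(actions, key=lambda a: totals[a] / counts[a])

print(learn())
```

Note that even here the computer is still only analyzing data and comparing numbers; whether that deserves the word "learning" is precisely the semantic question at issue.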
Here, however, is where some lines begin to be crossed. To some,
the learning computer has the potential to be more "intelligent"
than humans; to others, it's inevitable. At the far end of the
spectrum, some view the human brain as nothing more than an extremely
complex computer, and see no reason why computers, when they become
fast enough and big enough, can't evolve into something similar.
This stance is radically different from the first view, for now
the computer is much more than a tool that assists us in our daily
activities. In effect, our own existence is reduced to that
of a highly evolved computer. This implies that we are
only useful until a faster, more efficient machine comes along,
and this frightens and offends many people, giving rise to the
works of fiction talked about earlier.
Where, then, is the point at which we switch from futuristic probability
to science fiction? At the beginning of the AI spectrum, one can
hardly dispute the possibility of a computer that can seem artificially
hardly dispute the possibility of a computer that can seem artificially
intelligent in a limited way, because the early prototypes of
such computers already exist. The learning computer raises more
questions, but since the basic premise is still only a computer
analyzing data and making decisions, the concept is not all that
unthinkable to most people. It is the last assertion that creates
waves of animosity between those who claim to be pro-artificial
intelligence and those who, by contrast, are labeled as anti-AI.
Many people, myself included, who are considered "anti-AI",
do not deny the possibility or even plausibility of basic artificial
intelligence such as described in the first two scenarios. However,
the notion of a computer "evolving" to a level on par
or surpassing humans is quite hard to swallow. When we begin to
talk about artificial intelligence on this level, the meaning
of the term changes dramatically, and the issue raises some serious
objections. Basically, it comes down to the question of the differences
between computers and humans. To truly be considered intelligent
on the level of humans, a "being" must first be able
to take in all the input surrounding it, evaluate it, and thereby
make an "intelligent" decision. Obviously, a computer
is quite good at doing this, but let us look at the nature of
the input. Sure, a computer can handle numbers and compare results
with pre-defined (or even re-defined) standards of right and wrong.
The problem arises when such factors as emotions are introduced.
How does one quantify happiness, anger, or distress? How does
one quantify love?
The argument against this usually centers on the human itself,
insisting that the ability to analyze and feel emotions is also
a learned capacity, a product of evolution, if you will. If humans
were able to develop these capacities as youngsters, what prevents
a computer from doing the same? In the end, there can be no definitive
answer, because the conclusion one reaches is largely dependent
on his or her view of human life in the first place. Those who
subscribe to a "natural" evolution of mankind have no
problem imagining a similar process with computers, aided by us
along the way. Those who don't buy evolution, however, and insist
that the human being possesses a spirit that is unique to him
alone, cannot reconcile this belief with such a computer. In particular,
man's ability to create is a trait which no other animal seems
to possess. Beyond genetic instinct and copycat behavior,
humans have not only the ability to create but also a strong
desire to do so. Perhaps this is why some desire to create the
most amazing thing of all - something that can also create. It
is this defining characteristic which I believe makes this last
version of artificial intelligence a wholly impossible one.
We see, then, that in reality the differences between supporters of artificial intelligence and those labeled as its opponents are not as drastic as they may seem. At any rate, the vehement disagreements usually stem from misunderstandings of the meaning of the term, and once that is established, the argument shifts to differences in personal beliefs about human nature, not to technical difficulties. Undoubtedly, work will continue in both areas of artificial intelligence, and most likely the basic version of AI will be realized in the near future. However, as I've stated above, I believe the quest to create an "artificial human" will only result in the realization that there are some things which man simply cannot do.