Techno Page - By Harendra Alwis

Artificial intelligence: Will it be good or bad?
Do you think that AI will be the future
of computing, or is it just an abstract concept that is simply
impossible to implement? Will we ever be able to produce machines
that act and think like us? Should we build such machines in
the first place? Will AI turn out to be a threat to the very
existence of the human race? How can we control these intelligent
machines? Will those machines have a sense of morality; and
be able to differentiate between what is right and wrong? What
should be the limits of Artificial Intelligence and will it
be possible to impose those limits? Is it ethical to create
machines that could imitate humans? The Internet, too, was a revolutionary
new concept, and it is also abused. Could the same fate befall
Artificial Intelligence and will it be misused to harm others?
Will we be able to use AI to make the Internet a safer place
for all? Does the good outweigh the bad?
Write in and share your views with us at technopage_lk@yahoo.com
Even
though I am not so much into computer games, I was quite impressed
with the quality of animations in Cricket 2002 developed by Electronic
Arts. The animations and the behaviour of the players were better
than anything I have seen before (ok fine... maybe I haven't seen
much) even though they were not as real as 'really-real'. These
improvements are all a result of the advancements in the field of
artificial intelligence; one of the fastest growing branches of
Information Technology. Gamers are among the major users of
AI technology, and be it the cricketers in Cricket 2002 or the different
units in Red Alert 2, the driving force behind all of these games
is Artificial Intelligence.
Artificial
Intelligence, or AI for short, is a combination of computer science,
physiology, and philosophy. AI is a broad topic, covering different
fields, from machine vision to expert systems. The element that
the fields of AI have in common is the creation of machines that
can 'think'.
Scientists and researchers have adopted many different approaches to tackle the
problem. One approach is to implement AI through software alone;
another is to map the human brain and build circuitry that
could mimic it. A third approach is to redesign both hardware
and software 'from scratch', which has led researchers to try
out alternatives to silicon and to adopt new programming languages
designed with AI in mind.
The development of software technology itself, and the quest to break the limits
of 4th generation programming languages (4GLs) and go beyond to 5GLs,
where computer code will closely resemble natural language, has
become a part of this research. Today, we have to learn complex
computer languages to tell the stupid machines what we want them
to do. The development of 5th generation languages is aimed at
making these computer languages more like our own natural languages.
This would mean that with proper voice recognition abilities, we
would be able to simply talk to computers and tell them what
to do.
In order to
classify machines as 'thinking', it is necessary to define intelligence.
To what degree does intelligence consist of, for example, the ability
to solve complex problems, or to make generalisations and see relationships? Then
what about perception and comprehension? Research into the areas
of learning, of language, and of sensory perception has aided scientists
in building intelligent machines. One of the greatest challenges
facing experts is building systems that mimic the behaviour of the
human brain, which is made up of billions of neurons and is arguably the most
complex matter in the universe. Perhaps the best way to gauge the
intelligence of a machine is British computer scientist Alan Turing's
test. He stated that a computer would deserve to be called intelligent
if it could deceive a human into believing that it was human. But
this definition was put to the test by a simple programme called Eliza.
Eliza, the
computer therapist, is probably the most famous AI programme yet
created. Eliza was created in 1966 at MIT by Joseph Weizenbaum.
Weizenbaum was surprised and even horrified to find that many of
Eliza's interviewees would form strong emotional bonds with her.
Psychiatrists were ready to begin letting Eliza treat their patients,
and people were calling Weizenbaum to ask for Eliza's help in sorting
out their problems.
Weizenbaum's
experience with Eliza and society's response to her ultimately left
him opposed to the idea of constructing artificial intelligences.
Weizenbaum's fear was that, despite artificial intelligences' inability
to fully understand or sympathise with the human condition, society
would be all too ready to entrust artificial intelligences with the
task of managing human affairs.
With all of
the hype surrounding Eliza, it was surprising to find that Eliza
didn't actually possess any of the qualities usually associated
with intelligence. Eliza knows nothing about her environment. She
has no reasoning ability or understanding of her own motives. She
can't plan out her actions and she can't learn.
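The illusion Eliza created rested on nothing more sophisticated than keyword spotting and canned response templates. A minimal sketch of that technique in Python is shown below; the rules here are invented for illustration and are not Weizenbaum's original script:

```python
import re

# A few illustrative reflection rules: (pattern to spot, response template).
# Rules are tried in order; the first match wins.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when nothing matches

def respond(sentence: str) -> str:
    """Return a canned reflection of the user's input, Eliza-style."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            # Echo the matched fragment back, minus trailing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am sad about my job."))
```

No model of the world, no memory, no reasoning: the programme simply mirrors fragments of the input back at the user, which is exactly why its apparent intelligence evaporates under scrutiny.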
(You can find
a web based version of Eliza at http://www-ai.ijs.si/eliza/eliza.html
or download it from http://www.spaceports.com/~sjlaven/eliza.htm)
Artificial
Intelligence has come a long way from its early roots, driven by
dedicated researchers. The beginnings of AI reach back before electronics,
to philosophers and mathematicians such as Boole and others who
theorised principles that are still used as the foundation of AI
Logic. AI really began to intrigue researchers with the invention
of the computer in 1943.
The technology
was finally available, or so it seemed, to simulate intelligent
behaviour. Over the decades that followed, despite many stumbling
blocks, AI has grown from a dozen researchers, to thousands of engineers
and specialists; and from programmes capable of playing checkers,
to systems designed to diagnose disease.
AI has always
been on the pioneering end of computer science. Advanced-level computer
languages, as well as computer interfaces and word processors, owe
their existence to research into artificial intelligence. The
theory and insights brought about by AI research will set the trend
in the future of computing. The products available today are only
bits and pieces of what is yet to follow, but they are a movement
towards the future of artificial intelligence. The effects of the
advancements in the quest for artificial intelligence on our lives
go seemingly unnoticed today, but they will be the force that
defines the future of computing.
We as a nation
are trying to drive the machine of Information Technology to solve
many of the socio-economic problems we face today. Here,
as we talk of an E-Lanka, it is paramount that we realise the importance
of AI as an emerging technology. One remarkable breakthrough in
Artificial Intelligence could possibly re-define software applications,
hardware technologies and systems engineering as we know them and
make the computer a whole new experience.
I have mentioned
before that AI could do for IT what the jet engine or the rocket
engine did for aviation. We are still flying propeller-driven
biplanes and monoplanes powered by silicon and electronics.
It is time
we made an effort to invent the supersonic dreams of the future
that would drive us forward, and power them with our imagination
and creativity. If we are to become a powerful force in the IT world,
AI could just be the gateway that we need to make those dreams come
true.