

Let’s not rush AI adoption


By Nalin D Jayasuriya

I first studied Artificial Intelligence (AI) systems in 1988 at the University of London. I worked on various models that trained these systems with data, both at that time and later in the 1990s.

While several decades have passed and computers now have much more processing power, storage and faster memory, several key features haven't changed. One of them is that for an AI system to produce correct results (the expected outcome), the data it is fed (trained on) is critical.
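For technically minded readers, here is a minimal sketch of that point. The incomes and approve/deny labels are entirely invented, and a one-nearest-neighbour rule stands in for a real loan-approval model; the same applicant gets opposite answers depending solely on which training set the model saw.

```python
# A minimal sketch, with invented income figures and labels, of how the
# training data alone determines an AI system's answer. A one-nearest-
# neighbour rule stands in for a loan-approval model.

def nearest_label(training_data, income):
    """Return the label of the training example closest to the given income."""
    return min(training_data, key=lambda example: abs(example[0] - income))[1]

# Two training sets over the same incomes (in thousands):
# one labelled fairly, one reflecting historically biased decisions.
fair_data = [(20, "deny"), (40, "approve"), (60, "approve"), (80, "approve")]
biased_data = [(20, "deny"), (40, "deny"), (60, "deny"), (80, "approve")]

applicant_income = 55
print(nearest_label(fair_data, applicant_income))    # -> approve
print(nearest_label(biased_data, applicant_income))  # -> deny
```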

Recently there has been a sudden surge in the adoption of these AI systems for various purposes, including online chat tools (e.g., at web portals, instead of customer service personnel), systems that approve finance applications (e.g., credit cards/loans), facial recognition (e.g., in security/surveillance), vehicle licence plate recognition (e.g., law enforcement), voice recognition (e.g., user authentication), application/performance evaluation (e.g., job application screening), querying knowledge and news (e.g., ChatGPT), and a host of other purposes.

Some uses of AI have been with us for a while. Did you ever pause to think about how your favourite streaming service recommends music and movies for you? How does your streaming service know your taste in music and movies?

Some of these systems are custom-built or trained before use, while others are provided as a service by companies that have already trained them (e.g., human face recognition for security/surveillance). Some of these systems even continue to learn (are fine-tuned) while interacting with users; others do not.

We are humans from different parts of the world. We perceive the same world, the same universe and the same events in different ways. Even what each of us thinks is right and wrong depends on how we were brought up as children (e.g., values instilled by parents, elders and teachers), the governing local laws, where we lived and where we live now, and the economic and social conditions that affected and continue to affect us, among a host of other factors. So, each of us perceives the world and what happens around us quite differently.

When news articles are published in different countries on the same event, each such article is different because the folks that live in that country (intended audience) perceive and interpret that same event in their own way, subject to their own history, values and alliances. There will rarely be an article on the same event that is universally acceptable!

When AI systems (e.g., ChatGPT) are trained, the data they are fed ought to be plain, vanilla-flavoured facts. In reality, that is almost never the case. These systems appear to have been fed biased or subjective data (e.g., skewed by geographical origin), because the results indicate as much. User beware! Perhaps different models need to be trained for each geographical area, maybe even for each country.
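Again in miniature: the sketch below uses two invented, one-sided text corpora and the crudest conceivable "language model", one that simply echoes the most frequent content word it was trained on. Asked to describe the same event, each region's model gives its own region's answer.

```python
# A minimal sketch, with two invented one-sided corpora, of how a model's
# "view" of the same event depends entirely on the text it was trained on.
from collections import Counter

corpus_region_a = ["the event was a liberation", "a liberation for the people"]
corpus_region_b = ["the event was an occupation", "an occupation of the land"]

STOP_WORDS = {"the", "was", "a", "an", "for", "of", "people", "land", "event"}

def describe_event(corpus):
    """Echo the most frequent content word: the crudest possible 'model'."""
    words = " ".join(corpus).split()
    content_words = [w for w in words if w not in STOP_WORDS]
    return Counter(content_words).most_common(1)[0][0]

print(describe_event(corpus_region_a))  # -> liberation
print(describe_event(corpus_region_b))  # -> occupation
```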

One of the major flaws in AI is the opaque (non-transparent) way a result is arrived at from the input(s) and context. The data model that drives an AI system is hard to understand, even for its creator(s), once it has been trained on lots of data. For example, if you submitted a job application (e.g., a resume) and were filtered out by the AI that pre-screened it, would the employer be able to explain why they never even saw your application if you called to follow up? In most cases, the answer is no. They will have no clue why your qualified application didn't get through their AI entity. The same goes for other similar systems, such as facial recognition.
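To see why no one can answer, consider a deliberately tiny screening model (the weights and resume features below are made up). Even here, the output is a bare score; the learned numbers carry no reason a recruiter could relay to an applicant, and a production model has millions of them.

```python
# A minimal sketch, with made-up weights and resume features, of why an
# AI decision is hard to explain: even this trivial screening model yields
# a bare score, and the learned weights carry no human-readable reason.

# Feature vector for one applicant: [years_experience, degree_level, keyword_hits]
resume = [6.0, 2.0, 3.0]

# Numbers "learned" from past hiring data; a real model has millions of them.
weights = [0.12, -0.40, 0.05]
bias = 0.10

score = bias + sum(w * f for w, f in zip(weights, resume))
decision = "advance" if score > 0.5 else "filter out"
print(f"score={score:.2f} -> {decision}")  # score=0.17 -> filter out
# Why is the degree weight negative? The trained numbers don't say.
```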

What if you forgot to shave one morning and the building AI didn’t let you in through the doors, and there is no human security officer to explain to because that person was laid off to cut costs?

It seems that a big part of the drive for AI adoption is fuelled by the perceived profits organisations and companies can make by replacing the humans currently performing those tasks.

Thus, we ought to be cautious and not rush AI adoption.

As a human, I can say for sure that I prefer to interact with another human rather than with an AI system, for anything. I try to bypass those chatbots on websites whenever possible to reach a human. I would rather have a human decide on something that will affect me than an AI entity. I am sure you feel the same. Did I mention that most AI systems (at least for now) do not have simulated feelings? I think an AI system with feelings would be even creepier!

(The writer is an IT consultant)
