Thought Leadership

AI: The reality about artificial intelligence

Published on 12 July 2019

Subbarao Kambhampati goes in-depth into the concept of artificial intelligence, common misconceptions, and how we can use it to our advantage

Most of our knowledge of Artificial Intelligence (AI) stems from popular culture, or from what’s brewing in Silicon Valley. Self-driving cars, robots that cater to our every whim, and cyborgs like the Terminator, intent on destroying all of humankind, are among the examples that come most easily to mind. It’s no wonder, then, that most conversations about AI involve intrigue and a modicum of fear.

At the Presidential Distinguished Lecturer Series at SMU in April, Professor Subbarao Kambhampati of Arizona State University spoke on the topic “Rise of AI and The Challenges of Human-Aware AI Systems”, shedding light on what AI actually is, which of our fears are warranted, and how best to use it to our collective advantage.

Early AI and how it became what it is today

Professor Subbarao, who has been working on AI since 1983, stressed that while we might have only been exposed to it in the 2000s, this technology has actually been around since 1956. He said: “Early AI was like a deaf and blind Socrates. You could have deep, philosophical conversations with it, but it wouldn’t smile or react.”

Today, AI technology is becoming easily accessible to many of us, with devices like the Roomba, an essentially autonomous vacuum cleaner that changes direction each time it senses an obstacle. He credits this change to the advent of the Internet.
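The Roomba’s behaviour described here can be pictured as a simple reactive rule; the sketch below is a hypothetical illustration of that idea, not iRobot’s actual algorithm:

```python
import random

def next_heading(heading, obstacle_sensed):
    """Hypothetical reactive rule: keep the current heading (in degrees)
    unless an obstacle is sensed, then turn by a random quarter-,
    half-, or three-quarter turn."""
    if obstacle_sensed:
        return (heading + random.choice([90, 180, 270])) % 360
    return heading
```

With no obstacle, the heading is unchanged; on contact, the robot picks a new direction and carries on, which is all such a reactive agent needs — no map, no plan.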

“What we needed to ‘train’ the machines was data to process,” he explained. “And though we had enough data, we had little means of capturing it. When the web was created, and people started using it in earnest, they would upload their thousands of pictures of cats, for example, and then we could use those pictures to teach the AI what a cat is. We now have data banks we can use.”
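The idea of teaching an AI “what a cat is” from labelled pictures is supervised learning. As a toy sketch (with made-up two-number feature vectors standing in for real image pixels), a nearest-neighbour classifier captures the essence: label a new example by the labelled example it most resembles.

```python
def nearest_label(examples, query):
    """1-nearest-neighbour classification over (features, label) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], query))[1]

# Labelled examples stand in for the web's uploaded photos; the two
# numbers per example are hypothetical image features.
training = [((0.9, 0.1), "cat"), ((0.1, 0.9), "dog")]
print(nearest_label(training, (0.8, 0.2)))  # prints "cat"
```

Modern systems use far richer models than this, but the principle is the same: the labels people attach to their uploads are what make the learning possible.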

The training of AI machines also had to be tweaked, because they learn differently from human beings. Professor Subbarao said: “Babies learn perceptual knowledge before they can understand cognitive knowledge, but AI went the other way. This is because tacit knowledge — for example, what makes something a cat versus a dog — cannot be explained explicitly. Humans learn it by absorbing stimuli.” He added: “AI can only learn if we feed it the cognitive data.”

 
Professor Subbarao Kambhampati of Arizona State University at the SMU Presidential Distinguished Lecturer Series.

AI and its (lack of) common sense

From his early work at Arizona State University, Professor Subbarao also realised that AI machines lack ‘common sense’. To illustrate this, he used the example of the Portuguese explorer Ferdinand Magellan.

“Magellan went around the world three times,” he said with a smile. “And on one of those trips, he died. Which trip was that on?” Though the answer – the third trip, of course – might seem glaringly obvious to the rest of us, AI systems have trouble coming up with the correct response: no specific data leads to this conclusion, and reaching it requires the common-sense inference that a person cannot keep travelling after dying.

Data bias: the real problem

Despite the obvious ease that AI technology can bring to our lives, human beings have remained sceptical of its application. Human-recognition technology, for one, has been shown to be faulty, and companies like Tesla have come under scrutiny because their self-driving cars are more likely to identify a Caucasian man as human than an Asian one.

Professor Subbarao hastened to clarify that this isn’t the technology’s fault. “People tend to attribute gaffes like this to the AI, but in fact this is a result of data bias, and this is a much harder problem to solve. If you’re using data that comes primarily from Western Europe, as was shown in this case, then that is what the AI will be looking out for. We must fix the problem of data bias before we focus our energies on algorithms and debugging, as those are much easier to resolve,” he explained.

He continued: “The good thing about these systems flagging up the data biases that exist is that we are forced to face them, and work together to ensure it doesn’t happen again, and that can only be a good thing for us in the end.”

 
SMU President Professor Lily Kong (left) moderating the Q&A session with Professor Subbarao Kambhampati.

The future of AI

Another worry is that technology will render humans less employable, but Professor Subbarao demurred: “We need to understand that these machines cannot take over complex tasks. It will be difficult to get to the point where a robot can take care of your ailing parents, or young children, because there is no set routine to this work. The jobs that could be threatened are those that involve doing the same thing over and over again.”

One example is that a radiologist who looks at hundreds of X-rays a day might be more replaceable than a nurse, who has a lot of direct human contact and performs many different tasks.

Even for the radiologist, however, the outlook isn’t as bleak as it sounds. “This may just change what we as a society assign value to,” he said. “It might help us advance further, rather than just holding on to what we are comfortable with.”

Professor Subbarao concluded: “We should focus on designing a system where AI and humans work together in the future as a sort of augmented intelligence, rather than worrying about AI taking over our roles in life. We can use them to make our lives easier instead.”

See also: Renowned AI expert Subbarao Kambhampati speaks at PDLS.