Are you familiar with the term “AI effect”?
The genesis of this term is tied to the long-running debate over what the word “intelligence” means in “artificial intelligence.” Within the recent wave of AI hype, a significant portion of the attention, wrongly in my opinion, focuses on this specific term, “intelligence.” The pursuit of advancing AI toward intelligence is elusive in many ways.
An aspect that I have mentioned in some of my other articles is what exactly can be defined as intelligent. Intelligence is not limited to the intelligence quotient; it also includes other ingredients, such as our emotional quotient and the ability to feel. But suppose we take the approach that a system can be considered intelligent if it can perform something regarded as a skill only humans possess, like driving a car. This approach does not work.
Why will this approach not work? It’s because once we train algorithms to do something that is considered difficult, we tend to normalize that capability. Once algorithms master a task that was considered challenging, that milestone eventually loses its significance, and we stop seeing the achievement as a mark of intelligence. Consider that IBM’s Deep Blue defeated Garry Kasparov, then considered the best chess player in the world, back in 1997. We normalized that as well. No one said that Deep Blue had become intelligent.
This specific paradox is the AI effect. As Pamela McCorduck put it: “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’.”
Another way to summarize the AI effect: if AI can already do it, it must not be intelligence. Take another example from our day-to-day lives, one that has been around for more than a decade: facial recognition. Have you ever feared that your phone has become intelligent or self-aware just because it can now do something that only humans could do before? Obviously not!
If we pursue the quest for intelligence in AI by getting computers to do things that are currently too difficult for them, we will force ourselves into another kind of artificial impossibility, because no destination will satisfy us once we arrive there. There is another approach that many postulate for testing whether an AI has actually become intelligent: the Turing test.
But remember that the ability of an algorithm or machine to fool people is an arbitrary target, since the human subjects interacting with the machine will learn from the experience of being fooled. So the chances are that any system that can pass the test once, unless it is interacting with someone as dumb as me, will not pass again. You also have to remember that the Turing test was proposed by Alan Turing back in 1950. The realm and context of algorithms have changed significantly since then.
So what is the right approach when we set out to build intelligent systems? The answer to this question will be covered in one of the chapters of the upcoming Designed Analytics report, which will be published on July 8th. Stay tuned!


