AI Learns But Does Not Understand

Below is an excerpt from the Designed Analytics Report: Avoiding The Third AI Winter. This section of the report focuses on explaining that while it may seem that the algorithm understands the subject matter, it does not; it can only calculate, not think. AI, or more specifically the way LLMs work, does not understand, but it does learn.

That learning comes by doing math. Within the current scope of usage, this approach has few practical drawbacks; in fact, we have only scratched the surface of what it enables. It does, however, become a bottleneck when you aim for artificial general intelligence (AGI), a system that can perform the majority of tasks that humans can.


Take a step back and think about the capability of these models to summarize an article. If you use a large language model to summarize a piece of text or a research paper, you will “perceive” that it does a good job. What we asked the LLM to do was to summarize the key facts embedded within the paper, and if the summary is good, the model has evidently captured those facts.

A question may arise: if the model was able to summarize the paper decently, does it understand the facts embedded within it? Yes and no! The way these models “comprehend” and learn is different from how we comprehend and learn. These models learn from semantics and word associations. Leveraging that approach, they also learn that, within the structure of a piece of writing, key facts are embedded in specific ways. Here is a real-life example of this approach.

More than two decades ago, when I was preparing for my GMAT, I found the reading comprehension portion challenging. The primary reason was that some passages pertained to topics that can make you fall asleep within seconds. I wasn’t as curious back then as I am now, so my interest in topics beyond certain areas was minimal. For example, details on how a specific type of rock behaves against various environmental elements. Who cares! However, I was aiming for a 700+ score, so I had to devise a way to address this challenge. I ditched computer-based practice tests for a while and amassed paper-based reading material for reading comprehension.

Going through hundreds of passages and their corresponding questions, I devised an approach for finding where the answers were likely to be after reading the question, without reading the entire passage. It worked. For it to work, I did not need to know the facts at all. I had deciphered a system that let me look for answers in specific places, because of the way written pieces are generally structured.
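To make the idea concrete, here is a minimal sketch of that kind of structure-plus-frequency heuristic: scoring sentences by how often their words recur and by where they sit in the passage, with no grasp of the facts themselves. This is my own toy illustration, not the actual mechanism inside an LLM, and the function name and scoring weights are invented for demonstration.

```python
from collections import Counter
import re


def summarize(text: str, k: int = 2) -> str:
    """Pick the k sentences that 'look' most important.

    A toy heuristic: no understanding of the facts, only
    word-frequency statistics and a positional bonus that
    reflects how written pieces are typically structured.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(i: int, sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        # Average word frequency: recurring terms hint at the topic.
        content = sum(freq[t] for t in tokens) / max(len(tokens), 1)
        # Key facts tend to cluster at the opening and closing.
        position = 1.5 if i in (0, len(sentences) - 1) else 1.0
        return content * position

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(i, sentences[i]), reverse=True)
    return " ".join(sentences[i] for i in sorted(ranked[:k]))


passage = ("Basalt is a volcanic rock. It forms when lava cools quickly. "
           "Many bridges use basalt aggregate. Basalt resists weathering well.")
print(summarize(passage, k=2))
```

The sketch "summarizes" the rock passage without knowing anything about rocks, which is precisely the point: pattern and position can stand in for comprehension.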

This is the same approach that LLMs take to understand and summarize text. My approach was paired with human intuition as well; for LLMs, it is pure math. Next, let us consider creative output, such as generating images. The following is also a real example that I created for illustrative purposes.

Figure 2 on the next page shows two images generated using the following prompt:

“Cinematic shot. A cat wearing a hat in a business suit. The cat is holding a cup of coffee.”

