In the history of artificial intelligence, AI winters are periods in which the field saw reduced funding and reduced interest, both in products that leverage these technologies and in AI research itself.
The term AI winter was coined in 1984. While many of us have been led to believe that approaches like deep learning are relatively new in the field of computing, the fact is they have existed for almost half a century. In a video series that I published during the COVID pandemic, I shared a copy of a book, published in 1979, that focused on leveraging neural networks in manufacturing. In fact, the very first disappointment in the field of AI came in the form of failures in productionizing machine translation algorithms, way back in 1966 with the ALPAC report.
The first AI winter lasted from 1974 to 1980. If you research this period, you will find that the hype preceding it stemmed largely from a growing understanding of the power that artificial intelligence algorithms could deliver. This excitement about theoretical possibilities led to a hype that could not materialize into reality because of the technology limitations of that era. The result was that after the initial excitement, funding, and interest, both academia and industry, including defense research organizations, realized that although the potential was real, it was not feasible to translate many of those capabilities into practice.
Just to give you an idea of how advanced theoretical AI already was in that era, I want to highlight the introduction, and the subsequent practical implementation failure, of single-layer neural networks in the 1960s. The perceptron, a fundamental building block of neural networks, was invented by Frank Rosenblatt in the late 1950s. Rosenblatt predicted that the perceptron might eventually be able to learn, make decisions, and translate languages. Fascinating! Isn’t it?
However, there was not enough understanding of how perceptrons, specifically multilayer perceptrons, could be trained. In fact, the concept of the multilayer perceptron and its training had not yet emerged. A book published in 1969, titled “Perceptrons” (by Marvin Minsky and Seymour Papert), essentially postulated that perceptrons are limited in what they can realistically deliver. This ended the pursuit of perceptrons for a while. Hence, as you can imagine, funding for neural-network-related projects became almost impossible to find in the 1970s and 1980s.
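To see concretely what the 1969 critique meant, here is a minimal sketch (not any historical implementation) of a Rosenblatt-style single-layer perceptron in plain Python. The canonical example of the limitation is the XOR function: AND is linearly separable and the perceptron learns it, but no single layer of weights can represent XOR, so training never succeeds on it.

```python
def train_perceptron(data, epochs=50, lr=0.1):
    """Classic perceptron rule: one layer of weights plus a bias,
    updated only when a prediction is wrong. data is a list of
    ((x1, x2), target) pairs with targets 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in data:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = t - pred
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(points, w1, w2, b):
    return [1 if w1 * x1 + w2 * x2 + b > 0 else 0 for x1, x2 in points]

points = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_data = list(zip(points, [0, 0, 0, 1]))  # linearly separable
xor_data = list(zip(points, [0, 1, 1, 0]))  # not linearly separable

print(predict(points, *train_perceptron(and_data)))  # [0, 0, 0, 1]
print(predict(points, *train_perceptron(xor_data)))  # never all four correct
```

No matter how many epochs you run, the XOR predictions always get at least one of the four inputs wrong, because the decision boundary is a single straight line. It took multilayer networks, and a practical way to train them, to get past this.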
However, this first winter of neural networks came to an end in the mid 1980s, when work by leading researchers like John Hopfield and David Rumelhart revived large-scale interest in this field. It is unfortunate that Frank Rosenblatt did not live long enough to see this revived interest in a field that he invented.
The second AI winter was a bit different from the first, because by this point there was little doubt that approaches like perceptrons and neural networks could deliver solutions that did not previously exist. However, the technological challenge of making these applications a reality remained. If you read the news about AI these days, you know that the hardware and energy consumed by today's AI algorithms is massive.
The hardware they leverage is among the most powerful available on earth. None of that existed back then to bring these ideas to life; building that kind of infrastructure was outside the realm of what was possible at the time. The fact is that even the technologies that were productionized in some form, like voice recognition, did not gain much traction. Alex Castro highlighted the aversion investors felt toward AI solutions in this era in The Economist (7 June 2007):
“Investors were put off by the term voice recognition which, like artificial intelligence is associated with systems that have all too often failed to live up to their promise.”
Similarly, John Markoff wrote about this era in a New York Times article (2005):
“At its low point some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild eyed dreamers.”
Such was the stigma attached to the term “AI” during the second AI winter that researchers in the mid 2000s deliberately named their algorithms in ways that would avoid any association with artificial intelligence. The term machine learning, although coined by Arthur Samuel back in 1959, owed part of its renewed popularity to this quest.
We will conclude our exploration of the AI winters in the second part of this article, which will be published on 07/17.

