As I was going through my reading list this morning, an interesting headline caught my attention. An article titled “Chinese researchers develop new deep learning model to predict battery lifetime” describes how a team of Chinese researchers has proposed a new type of deep learning model to predict the lifetime of lithium-ion batteries. At first glance, this may not seem like a big deal at all. I have not checked whether Samsung phones offer a similar feature, but I know for sure that you can see a battery health indicator in iPhone settings.
Basically, there is a battery health number that ranges from 0 to 100%. While this is not exactly a lifetime prediction, calculating the percentage still requires analytics, specifically on charge data. The screengrab below shows the number for an iPhone 11 in 2024.
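Apple does not publish the exact method behind that number, but a minimal sketch of how such a percentage could be derived from logged charge data might look like the following. The design-capacity figure, the smoothing window, and the assumption that each cycle logs a measured full-charge capacity are all illustrative, not Apple's actual implementation.

```python
# Minimal sketch (not Apple's actual method): estimate a 0-100% battery
# health figure from logged charge-cycle data, assuming each charge cycle
# records the measured full-charge capacity in mAh.

DESIGN_CAPACITY_MAH = 3110  # approximate design capacity of an iPhone 11 battery

def battery_health_percent(full_charge_capacities_mah, window=10):
    """Average the most recent `window` measured capacities and compare
    them to the design capacity to get a 0-100% health estimate."""
    recent = full_charge_capacities_mah[-window:]
    measured = sum(recent) / len(recent)
    return round(min(measured / DESIGN_CAPACITY_MAH, 1.0) * 100, 1)

# Example: full-charge capacity readings drifting down over successive cycles
readings = [3105, 3090, 3061, 3012, 2984, 2950, 2931, 2905, 2880, 2862]
print(battery_health_percent(readings))  # roughly 96%
```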

Once there are enough data points from a minimum number of charge cycles, they can be used to generate the estimate. So if an analytics approach like this already exists, why is this research being featured? One sentence from the article answers that question:
“The deep learning model can accurately predict the battery’s current cycle life and remaining service life using only 15 charge cycle data points.”
Nearly a decade ago, the Big Data revolution drew attention to the fact that organizations were generating vast quantities of data and that there was untapped value embedded in it. Fields like data science grew out of this intense focus on mining gold from Big Data. But as with every other technology, the hype eclipsed common sense in some respects. We developed a mindset that analytics in the Big Data era is always about terabytes and petabytes of data.
It is true that approaches like deep learning benefit when data volumes are large. But have we evaluated this for each and every type of problem deep learning can solve? As I have highlighted in some of my other articles, we now need to make AI models more agile, meaning they should be able to do more with less memory and less data. This is an imperative not only for Edge AI but also for Business AI, if you want to extend AI-based tools to the people working in the processes. An obsession with Big Data, however, may get in the way.
What makes this research interesting is that only 15 charge cycles' worth of data points were used, and yet the researchers could leverage deep learning, which we generally view as a family of methods that works best with large data sets. Not much information is currently available on accuracy levels and other parameters. But if the researchers were indeed able to build a model with decent accuracy from only 15 charge cycles of data, we are on to something.
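The article does not describe the paper's actual architecture, but the general idea can be sketched as a small sequence model that reads per-cycle features from the first 15 charge cycles and regresses the remaining cycle life. The feature set, layer sizes, and choice of a recurrent encoder below are my assumptions for illustration, not the researchers' design.

```python
# Minimal sketch (not the paper's architecture): a small recurrent model that
# reads features from 15 early charge cycles and regresses remaining cycle life.

import torch
import torch.nn as nn

N_CYCLES = 15     # only the first 15 charge cycles are observed
N_FEATURES = 4    # assumed per-cycle features, e.g. discharge capacity,
                  # mean voltage, temperature, internal resistance

class CycleLifePredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x):                # x: (batch, 15, N_FEATURES)
        _, h = self.encoder(x)           # h: (1, batch, hidden)
        return self.head(h.squeeze(0))   # predicted remaining cycle life

model = CycleLifePredictor()
early_cycles = torch.randn(8, N_CYCLES, N_FEATURES)  # dummy batch of 8 cells
predicted_life = model(early_cycles)                  # shape (8, 1)
```

The appeal of such a model, if it holds up, is that the entire input is tiny: 15 cycles times a handful of features per cell, rather than a full lifetime of telemetry.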
By studying the data used, the algorithm, the parameters, and the output, we can learn how this approach of pairing “Tiny Data” with deep learning could be extended to other applications. Another interesting data point will be the model's hardware footprint. If you are wondering why this excites me, imagine this scenario: tiny data sitting in the repositories of individual teams feeds dedicated localized models, which then interface with aggregator models, rolling up hierarchically. We can get as creative as we want when it comes to building hyper-customized Enterprise AI.
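One illustrative way to realize that hierarchy is federated-averaging-style aggregation: each team trains a small model on its own tiny data set, and an aggregator model is formed by averaging their parameters. The model shape and team names below are hypothetical; this is a sketch of the rollup idea, not a prescription.

```python
# Illustrative sketch of the hierarchical "tiny data" idea: team-level models
# trained on local data are averaged into one aggregator model.

import copy
import torch
import torch.nn as nn

def make_local_model():
    # Hypothetical small model each team trains on its own data
    return nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))

def aggregate(models):
    """Average the parameters of several local models into one aggregator."""
    aggregator = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in aggregator.named_parameters():
            stacked = torch.stack([dict(m.named_parameters())[name] for m in models])
            param.copy_(stacked.mean(dim=0))
    return aggregator

# Each team trains its own model on local data (training omitted here),
# then the team-level models roll up into one plant-level aggregator.
team_models = [make_local_model() for _ in ("ops", "quality", "maintenance")]
plant_model = aggregate(team_models)
```

Weight averaging is just the most compact way to show local models rolling up; in practice the aggregation step could equally be an ensemble, a distillation step, or a model that consumes the local models' outputs.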

