Deep Learning in Predictive Maintenance (Part I of II)

The world of maintenance has made substantial progress in the last decade. Fueled by the ready availability of open-source machine learning tools and by the move to the cloud, where most hyperscalers provide a robust tool chest for putting data to work through AI and machine learning, a significant number of manufacturing organizations have developed a predictive maintenance capability powered by machine learning.

Two broad categories of algorithms can be leveraged for predictive maintenance. The first is classic machine learning algorithms such as clustering, support vector machines (SVMs), and decision trees. However, as discussed in the next section, these algorithms have certain disadvantages. This two-part article will examine those disadvantages and how deep learning algorithms can help mitigate them. This first part covers the limitations of the classic machine learning group.
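
To make the first category concrete, here is a minimal, illustrative sketch of a classic machine learning fault classifier built with scikit-learn. The synthetic data and feature names (vibration RMS, bearing temperature, motor current) are assumptions chosen purely for illustration, not a reference implementation.

```python
# Illustrative sketch only: a shallow, classic ML (decision tree) fault classifier.
# The synthetic features below are hypothetical stand-ins for engineered sensor features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 1000

# Synthetic feature matrix: [vibration_rms, bearing_temp_C, motor_current_A]
X = np.column_stack([
    rng.normal(0.5, 0.1, n),   # vibration RMS (arbitrary units)
    rng.normal(60.0, 5.0, n),  # bearing temperature (deg C)
    rng.normal(10.0, 1.0, n),  # motor current (A)
])

# Hypothetical labeling rule: elevated vibration and temperature -> fault
y = ((X[:, 0] > 0.6) & (X[:, 1] > 62.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# A shallow model with limited depth: simple to train, but limited learning capacity
clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```

Note that this toy model only works because the features were already engineered and the labels follow a clean rule; as discussed below, real sensor data is far messier.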

The second part of this article series, to be published on 02/13, will explain why deep learning algorithms may be a better choice. Note that many organizations have already started leveraging deep learning algorithms for predictive maintenance, so this is not some fancy futuristic postulation.

While machine learning has certainly moved reliability and predictive maintenance forward and is now used extensively by organizations that follow best practices in predictive maintenance, there is no denying that even this approach has limitations.

Limitations of the ML approach to data-driven predictive maintenance

The very first limitation is generalizability. Because the implementation of machine learning algorithms is domain-specific, the algorithm must be trained and fine-tuned for every specific application.

Domain-specific knowledge is a critical input in designing machine learning-based predictive maintenance algorithms. Also, preprocessing steps like feature engineering are almost always mandatory in machine learning-based fault detection, prognostic, or diagnostic algorithms. Remember that feature engineering is an intricate process requiring a careful combination of input gathered from domain experts and customized features that give structure to the data set.
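
As a hedged illustration of what such feature engineering might look like, the sketch below computes simple windowed statistics (RMS, peak-to-peak, kurtosis) from a raw vibration signal. The signal, sampling rate, window length, and choice of statistics are assumptions for this example; in practice they would come from domain experts and the characteristics of the monitored asset.

```python
# Illustrative sketch: windowed statistical features from a raw vibration signal.
# The signal is synthetic; sampling rate and window size are assumed values.
import numpy as np
from scipy.stats import kurtosis

fs = 1000          # assumed sampling rate (Hz)
window = 256       # assumed samples per feature window

# Synthetic stand-in for one channel of raw vibration data
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

features = []
for start in range(0, signal.size - window + 1, window):
    seg = signal[start:start + window]
    features.append({
        "rms": float(np.sqrt(np.mean(seg ** 2))),      # overall vibration energy
        "peak_to_peak": float(seg.max() - seg.min()),  # amplitude swing
        "kurtosis": float(kurtosis(seg)),              # impulsiveness, often associated with bearing faults
    })

print(features[0])  # one engineered feature vector per window
```

Each window becomes one row of engineered features that a classic model can consume; deciding which statistics actually matter is exactly where domain expertise enters.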

While certainly an advance over legacy maintenance methods, the fact is that the network architecture of machine learning algorithms is simple, so these networks have limited learning capability. In technical parlance, such networks are called shallow networks. However, the data leveraged for building data-driven predictive maintenance algorithms and processes has all the traits of real-world data, such as noise, nonlinearity, and other complexities.

Machine learning algorithms do not handle irregularity, nonstationarity, and nonlinearity well, and these traits are generally present in sensor data collected from industrial equipment. This is why shallow networks cannot efficiently perform the data abstraction, through learned features, that is critical for fault prediction.

If you have implemented these algorithms for predictive maintenance in the real world, you know that even after substantial training their performance declines when they are used with real-time datasets in production. Machine learning algorithms also do not perform well in cross-domain applications, and performance suffers further as application complexity grows.
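
To illustrate this production decline in a hedged way, the sketch below trains a shallow model on historical data and then evaluates it on data with a small, assumed sensor drift of the kind that miscalibration or changed operating conditions can introduce. The thresholds and drift magnitude are invented for the example.

```python
# Illustrative sketch: how drift between training and production data can
# degrade a shallow model. Thresholds and drift magnitude are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)

def make_data(n, sensor_drift=0.0):
    # A latent "true" vibration level drives the fault label...
    true_vib = rng.normal(0.5, 0.1, n)
    y = (true_vib > 0.6).astype(int)
    # ...but the model only sees the measured value, which can drift in production
    measured = true_vib + sensor_drift + rng.normal(0.0, 0.02, n)
    return measured.reshape(-1, 1), y

X_train, y_train = make_data(5000)                   # historical training data
X_prod, y_prod = make_data(5000, sensor_drift=0.1)   # drifted production data

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("same-regime accuracy:       ", accuracy_score(y_train, model.predict(X_train)))
print("drifted production accuracy:", accuracy_score(y_prod, model.predict(X_prod)))
```

The shallow model has no way to recognize that the input distribution has shifted, so its learned threshold silently stops matching the underlying condition of the equipment.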

Deep learning algorithms can address these limitations. In the second part of this series, we will explore how.

