If you love physics and are a neural network enthusiast, this will be an exciting topic for you, since PINNs lie at the intersection of the two. I have been reading about the topic and the associated research papers since this morning, and it looks like opportunities to leverage this approach abound in the world of business processes.
PINNs use data-driven supervised neural networks to learn the model, but they also embed physics equations into the training objective to encourage consistency. These equations generally represent the known physics of the system that the NN needs to respect.
Based on the description above, you can tell that PINNs have the advantage of being not only data-driven (they are NNs, after all) but also consistent with the physics. This allows a PINN to extrapolate accurately beyond the available data. Another advantage is that PINNs can produce more robust models with less data.
Let us say that we have a physics equation, F = f(a). We can try training an NN with an objective function to learn this equation, using approaches like L2 regularization to avoid overfitting. The challenge with this approach is that regularization improves the model only over the segments of the domain for which we have data; if the data do not span the entire domain, the model will not be able to extrapolate much beyond it. And this is where PINNs come to the rescue.
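To make this concrete, here is a minimal sketch in PyTorch. The stand-in target function, the data range, and the hyperparameters are all illustrative assumptions I am making for the example; weight_decay is PyTorch's built-in way to apply an L2 penalty on the weights:

```python
import torch
import torch.nn as nn

# Toy stand-in for the unknown physics relation F = f(a);
# here we pretend f is a damped oscillation (an illustrative choice).
def true_f(a):
    return torch.exp(-0.2 * a) * torch.sin(a)

# Training data covers only part of the domain (a in [0, 4]);
# the model will later be queried on a in [0, 10].
a_train = torch.linspace(0.0, 4.0, 40).unsqueeze(1)
F_train = true_f(a_train)

model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

# weight_decay adds the L2 penalty to the data-fitting loss.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

for step in range(5000):
    opt.zero_grad()
    loss = loss_fn(model(a_train), F_train)
    loss.backward()
    opt.step()

# Inside [0, 4] the fit is good; beyond it, predictions drift,
# because nothing constrains the network outside the data.
```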
Much like L2 regularization, PINNs minimize the data loss, but they add the known physics as an extra regularization term. The model objective, in plain English, is: “Please fit the data, but ensure that the solution is consistent with the physics equations that we know should define this data.” And it is this approach that can be so powerful.
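Here is a minimal sketch of that objective, assuming a toy decay ODE du/da + k·u = 0 as the "known physics"; the ODE itself, the weight lam, and the collocation range are all illustrative assumptions for the example:

```python
import torch
import torch.nn as nn

k = 0.5  # assumed known physics parameter
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Sparse measurements (the data-loss term).
a_data = torch.tensor([[0.0], [0.5], [1.0]])
u_data = torch.exp(-k * a_data)  # stand-in for observed values

# Collocation points spanning the FULL domain, including regions
# with no data (the physics-loss term).
a_phys = torch.linspace(0.0, 10.0, 100).unsqueeze(1).requires_grad_(True)

lam = 1.0  # weight of the physics residual, a tunable assumption

for step in range(5000):
    opt.zero_grad()
    # "Please fit the data ..."
    data_loss = ((model(a_data) - u_data) ** 2).mean()
    # "... but stay consistent with the physics": du/da + k*u = 0
    u = model(a_phys)
    du_da = torch.autograd.grad(u.sum(), a_phys, create_graph=True)[0]
    phys_loss = ((du_da + k * u) ** 2).mean()
    (data_loss + lam * phys_loss).backward()
    opt.step()
```

Note that the collocation points cost nothing to generate, which is why the physics term can constrain the model far beyond where we have measurements.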
Many of you NN enthusiasts may ask: what is the point of using NNs if you already know the underlying relation or equations? One advantage is that you can train a PINN with a physical parameter (e.g., a friction coefficient) as an additional input and thus obtain a family of solutions represented by a single neural network.
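As a sketch of what that looks like (the friction-like parameter mu, its range, and the residual equation are all illustrative assumptions), the parameter simply becomes an extra input column, and the physics residual is evaluated across sampled parameter values too:

```python
import torch
import torch.nn as nn

# Network maps (a, mu) -> u, so one set of weights covers a
# whole family of parameter values.
model = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

# Collocation points sample both the domain and the parameter range.
a = torch.rand(200, 1) * 10.0         # domain points
mu = torch.rand(200, 1) * 0.9 + 0.1   # friction-like parameter in [0.1, 1.0]
inputs = torch.cat([a, mu], dim=1).requires_grad_(True)

u = model(inputs)
grads = torch.autograd.grad(u.sum(), inputs, create_graph=True)[0]
du_da = grads[:, :1]   # derivative with respect to the first input, a
mu_col = inputs[:, 1:2]

# Physics residual now uses the sampled mu, e.g. du/da + mu*u = 0,
# so training enforces the equation across the whole parameter range.
phys_loss = ((du_da + mu_col * u) ** 2).mean()

# After training, evaluating the model at a fixed mu gives that
# parameter's solution: one network, a family of solutions.
```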
We know that deep learning models are good at learning complex nonlinear patterns. Many physical systems involve nonlinear interactions such as friction, slipping, and combustion, and capturing them all with physics equations is complex and tedious. So, if you have a simplified physics representation of the system, you can model the observed error between the simplified physics and the true system using deep learning, and then combine the two to represent the system accurately.
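Here is a minimal sketch of that hybrid pattern, where simplified_physics and the observations are illustrative stand-ins: the network is trained only on the discrepancy between the simple model and the data, and the final prediction adds the two together:

```python
import torch
import torch.nn as nn

def simplified_physics(x):
    # Cheap first-principles approximation (illustrative stand-in);
    # it ignores effects like friction or slipping.
    return 2.0 * x

# Observations from the "true" system, which the simple model misses.
x_obs = torch.linspace(0.0, 5.0, 50).unsqueeze(1)
y_obs = 2.0 * x_obs + 0.3 * torch.sin(3.0 * x_obs)  # toy nonlinear effect

# The NN learns only the residual: true system minus simple physics.
residual_net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(residual_net.parameters(), lr=1e-3)

for step in range(3000):
    opt.zero_grad()
    target = y_obs - simplified_physics(x_obs)
    loss = ((residual_net(x_obs) - target) ** 2).mean()
    loss.backward()
    opt.step()

def hybrid_model(x):
    # Final model: simplified physics plus the learned correction.
    return simplified_physics(x) + residual_net(x)
```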
This approach can be leveraged in fields beyond physical equations and the realm of manufacturing, for example in financial engineering and trading. Suppose you know that an equation defines a certain relationship, but the observed numbers show noise or the influence of additional factors. You can leverage PINNs to evaluate and understand those extra factors.
Opportunities go beyond finance into areas like economics and supply chains. When supply chain data should follow a well-defined relationship but does not, PINNs can help explain the gap. There can be plenty of use cases in other functions as well; the more creative we get, the more opportunities we will find. It is certainly an area I suggest you explore further.

