Representation Learning in Generative AI

When it comes to Generative AI, we often hear the terms encoding and decoding. A key term we hear less often in connection with encoding and decoding is latent space.

What many Generative AI models have in common is the idea of encoding the training dataset into a latent space. You then sample a point from that latent space and decode it back into the original domain. This encoder-decoder technique attempts to transform the highly nonlinear space in which the data lives (for example, pixel space) into a simplified latent space, one that can be sampled such that essentially any point in it decodes to a well-formed image.
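To make the idea concrete, here is a minimal sketch of a plain autoencoder in PyTorch. The layer sizes, the 16-dimensional latent space, and the use of a randomly drawn latent point are illustrative assumptions rather than the architecture of any particular model; a real generative model would also need training and typically a structured latent space (as in a VAE) before random samples decode into plausible images.

```python
import torch
import torch.nn as nn

# Toy dimensions (assumptions for illustration): 28x28 grayscale images
# flattened to 784 pixels, compressed into a 16-dimensional latent space.
PIXEL_DIM = 28 * 28
LATENT_DIM = 16

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: maps a point in pixel space to a point in latent space.
        self.encoder = nn.Sequential(
            nn.Linear(PIXEL_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )
        # Decoder: maps a latent point back to pixel space.
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, PIXEL_DIM),
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)      # encode into the latent space
        return self.decoder(z)   # decode back to pixel space

model = Autoencoder()

# Encode a (random stand-in for a) training image into the latent space.
image = torch.rand(1, PIXEL_DIM)
latent = model.encoder(image)

# Sample a new latent point and decode it; after training, nearby latent
# points should decode to plausible, well-formed images.
sampled = torch.randn(1, LATENT_DIM)
generated = model.decoder(sampled)
print(generated.shape)  # torch.Size([1, 784])
```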

Watch this episode of Gen AI Bytes to gain a better understanding of Representation Learning.

