Variational Auto-Encoders
Coursework for E9 333 - Advanced Deep Representation Learning @ IISc (Aug - Dec 2022)
Variational Auto-Encoders (VAEs) are probabilistic graphical models that use neural networks to approximate the latent posterior. Training balances two objectives: keeping the approximate posterior close to a fixed prior while maximising the information the latent transmits about the data point. The encoder maps a data point to a latent code, and the decoder reconstructs the data point from that latent.
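The two pieces of this objective are commonly realised via the reparameterization trick (sampling the latent differentiably) and a closed-form KL term pulling the Gaussian posterior toward a standard-normal prior. A minimal NumPy sketch, assuming a diagonal-Gaussian posterior N(mu, diag(exp(logvar))) and a N(0, I) prior (function names are illustrative, not from the repo):

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    # z = mu + sigma * eps with eps ~ N(0, I); keeps the sample
    # differentiable with respect to the encoder outputs mu, logvar
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ),
    # summed over the latent dimensions
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=-1)

# Example with a 2-D latent
rng = np.random.default_rng(0)
mu = np.array([0.5, -0.3])
logvar = np.array([0.0, 0.2])
z = reparameterize(mu, logvar, rng)     # one sample from q(z|x)
kl = kl_to_standard_normal(mu, logvar)  # regulariser toward the prior
```

In a full VAE, this KL term is added to the reconstruction loss from the decoder to form the (negative) ELBO that training minimises.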
Here, I implement VAEs on the dSprites and CelebA datasets. All code and results are in my GitHub repo.