Silpa Vadakkeeveetil Sreelatha
Academic and research departments
Centre for Vision, Speech and Signal Processing (CVSSP), Surrey Institute for People-Centred Artificial Intelligence (PAI).

About
My research project
Interpretable Representation Learning using Generative Models

Generative models have seen significant improvements in image synthesis over the last decade with the introduction of generative adversarial networks (GANs), variational autoencoders (VAEs), and diffusion models. Extensive research has demonstrated their utility in applications such as super-resolution and text-conditioned image generation, among many others. However, to widen their ability to extrapolate, a critical component of human representation capabilities, it is necessary to identify the interpretable and disentangled representations concealed within these models. For instance, a model that learns to generate digits should capture the representations corresponding to digit identity, shape, etc. in separate units. Learning such representations has two main advantages:

1. They enable controllable generation, which may be used in applications such as zero-shot classification and image manipulation (see the latent-traversal sketch below).
2. They can be used to synthesize counterfactual images, which are useful for explainable AI, fairness, and robustness (see the counterfactual sketch below).

My project aims to learn interpretable representations in generative models that can be used to improve the robustness, explainability, and fairness of classifiers.