Generative modeling is a class of statistical methods designed to capture properties of the underlying data distribution, and it holds promise in a variety of settings such as semi-supervised learning and unsupervised representation learning. However, the flurry of recent successes in deep generative modeling places less emphasis on realistic scenarios where the model must adapt to specific target distributions or individual preferences at test time. In this talk, I will present two applications of deep generative models in which adaptation is particularly important: fairness and music generation. For both works, I will discuss how to efficiently leverage unlabeled data as a key source of weak supervision, and elaborate on how to utilize this signal for targeted generation tasks.
Kristy Choi is a fourth-year Ph.D. student in Computer Science at Stanford University, advised by Stefano Ermon. Her research focuses on probabilistic machine learning, where she is interested in developing techniques for better adaptation in deep generative models as well as robust representation learning. She is the recipient of the NSF Graduate Research Fellowship, the Stanford Graduate Fellowship, the Qualcomm Innovation Fellowship, and the Two Sigma Diversity PhD Fellowship. Previously, Kristy obtained her B.A. in Computer Science-Statistics at Columbia University.