Deep generative models, Generative Adversarial Networks (GANs) in particular, can sample realistic images from Gaussian noise. But are they well suited to image editing? Image editing requires the output to retain some resemblance to the user-provided input image. In this talk, I will discuss a different formulation in which the generator network is trained to transform one image into another. I will explore several ways to constrain the generator to respect the input image, and show that these constraints are indeed useful for image editing and other practical tasks.
Taesung Park is a Ph.D. student at UC Berkeley, advised by Prof. Alexei Efros, focusing on computer vision and learning-based computational photography. He has worked on several projects in image synthesis, including CycleGAN (co-first author; 6,000+ citations) and GauGAN (Best Paper Finalist at CVPR 2019 and Best in Show Award at SIGGRAPH 2019 Real-Time Live). He received a B.S. in Mathematics and an M.S. in Computer Science from Stanford, where he worked with Vladlen Koltun and Sergey Levine. He is a recipient of the Samsung Scholarship and the 2020 Adobe Research Fellowship.