Speaker: Dr. Jae Shin Yoon (Adobe Research)
Title: Metaverse in the Wild: Modeling, Adapting, and Rendering of 3D Human Avatars from a Single Camera
Date: Wed., Nov 23, 2022
Start Time: 4pm
Venue: Room 1104, Bldg N24 (LG Hall)
The metaverse is poised to enter our daily lives as a new social medium. One positive application is telepresence, which allows users to interact with others through photorealistic 3D avatars using AR/VR headsets. Such telepresence requires high-fidelity 3D avatars that depict fine-grained appearance, e.g., pores, hair, and facial wrinkles, from any viewpoint. Previous works have utilized systems of multiview cameras to generate 3D avatars, which enables measuring both the appearance and 3D geometry of a subject. Deploying such large camera systems in everyday environments, however, is often difficult in practice due to the required camera infrastructure and precisely controlled lighting. In this talk, I will introduce a computational model that, by learning from data, can reconstruct a 3D human avatar from a single camera with quality equivalent to that of a multi-camera system.
Jae Shin Yoon is a research scientist at Adobe Research. He received his PhD in Computer Science from the University of Minnesota, advised by Prof. Hyun Soo Park. Jae Shin obtained his B.S. from Hanyang University, South Korea, in 2015, and then moved to the Korea Advanced Institute of Science and Technology (KAIST) for his M.S. in 2017, advised by Prof. In So Kweon. His research focuses on high-quality reconstruction and generalization of digitized human avatars over the temporal domain by modeling, rendering, and adapting humans from a single camera, drawing on computer vision, graphics, and machine learning. Please refer to his personal webpage for more details: https://gorokee.github.io/jsyoon/