Learning kinematic structures of articulated objects from visual input data is an active research topic in computer vision and robotics. An accurately estimated kinematic structure represents both the motion properties and the shape information of an object in a topological manner, encoding the relationships between rigid body parts connected by kinematic joints. It can therefore be considered a mid-level representation of general objects captured by different sensors, such as RGB cameras, depth cameras, and robot encoders. This talk focuses on robot learning of articulated kinematic structures and their use in higher-level applications such as efficient human hand posture estimation, kinematic structure correspondence matching, and personalised dressing assistance. I will also introduce several ongoing projects of the Personal Robotics Laboratory at Imperial College London.
Hyung Jin Chang is a postdoctoral researcher in the Personal Robotics Laboratory at Imperial College London, UK. He received his B.S. and Ph.D. degrees from the School of Electrical Engineering and Computer Science, Seoul National University. His current research interests include articulated structure learning from visual data, articulated hand pose estimation, human-robot interaction for assistive robots, object tracking, human action understanding, and user modelling. He has co-authored over 36 academic papers in international conferences and journals, and holds 12 international and Korean patents.