People & Life

Interview with Professor Sae-Young Chung

The EE Newsletter conducts interviews with members of laboratories in the Department of Electrical Engineering to provide students with information about the department's labs. For this issue, we interviewed Professor Sae-Young Chung about the ITML Lab (Information Theory and Machine Learning Laboratory), where research based on deep learning and AI (artificial intelligence) is underway.

 

1. Introduction of Professor Chung and his laboratory

I received my Ph.D. from MIT in 2000, worked at a communications company for four years, and then joined KAIST (Korea Advanced Institute of Science and Technology) in January 2005. At first I mainly studied information theory and wireless communication; recently, however, I have focused on deep learning and AI, building on that foundation in information theory.

 

2. Explanation of the research field

Information theory studies the fundamental performance limits of systems under various conditions. For example, one may try to determine the maximum amount of information that can be transmitted and then design a system whose performance approaches that limit. Before Shannon created the field of information theory, the study of information was not systematic; after Shannon, rigorous analysis became possible using concepts that quantify information, such as entropy and mutual information. For this reason, information theory provides a basic philosophy that is applicable to other fields, and researchers trained in it are active in areas such as economic engineering and bioinformatics. These days, research on information theory is very active, in part because deep-learning research is so active.
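To make these quantities concrete, here is a minimal Python sketch (an illustration added for readers, not code from the interview) that computes the entropy of a distribution and the mutual information between the input and output of a binary symmetric channel; the crossover probability of 0.1 and the uniform input are purely example choices.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # ignore zero-probability outcomes
    return -np.sum(p * np.log2(p))

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint distribution table."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1)            # marginal distribution of X
    py = joint.sum(axis=0)            # marginal distribution of Y
    return entropy(px) + entropy(py) - entropy(joint.ravel())

# Example: binary symmetric channel with crossover probability 0.1, uniform input.
eps = 0.1
joint = np.array([[0.5 * (1 - eps), 0.5 * eps],
                  [0.5 * eps,       0.5 * (1 - eps)]])
print(f"H(fair coin)  = {entropy([0.5, 0.5]):.3f} bits")        # 1.000
print(f"I(X;Y) on BSC = {mutual_information(joint):.3f} bits")  # about 0.531
```

The mutual information printed here, about 0.531 bits, equals 1 - H(0.1), the capacity of that channel with a uniform input, which is exactly the kind of performance limit described above.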

Deep learning is too complex to analyze fully in theory. Nevertheless, it is what we would consider "learning", since it mimics the way a signal is transmitted and processed in a neural network. If we can analyze how the important information in a signal is captured, it will help researchers understand deep learning better. Although brain science has advanced greatly, the brain is still only partially understood, and deep learning has a similar character. Most of the DeepMind engineers who created AlphaGo are not professional Go players. Even though they were helped by Go experts, rather than by top players like Se-Dol Lee, AlphaGo improved so much that it defeated Ke Jie 3-0, suggesting that no human can compete with it. The engineers can clearly explain the algorithm used to train AlphaGo, but they cannot explain why it plays a particular move in a particular position; to do so they would have to analyze millions of parameters, which is practically impossible. It is remarkable that, without understanding Go perfectly, they could build a Go AI that surpasses human ability using only the basic principles of machine learning. As this line of research accelerates, we can expect AI that performs brilliantly in other fields as well.

As in many other fields, it is important in deep learning to strike a balance between basic theory and trial and error. Basic theories such as information theory therefore play a crucial role, while trial and error also teaches researchers a great deal and improves performance. An appropriate harmony between the two will drive the development of deep learning.

 

3. Detailed research which is underway in the laboratory

Among the three main branches of machine learning, namely supervised learning, unsupervised learning, and reinforcement learning, our laboratory focuses on reinforcement learning. Specifically, we study deep reinforcement learning, in which deep learning is applied to reinforcement learning.

Reinforcement learning involves learning to choose the actions that maximize reward. It is similar to the way students review the problems they got wrong in order to raise their test scores. If reinforcement learning is applied to AI on a larger scale, much more complex and delicate work can be accomplished, but more research is still needed. As in the accident in which a Tesla autonomous vehicle crashed into a white truck after mistaking it for the sky, AI often makes mistakes that people would not. Human beings possess common sense, which they have acquired over a long period of time, but it is hard to teach common sense to an AI. Moreover, considering that humans can handle a wide range of general tasks while AI can handle only specific ones, AI is still at an elementary level.
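As a small illustration of "choosing the actions that maximize reward" (a toy example of our own, not the lab's code), the sketch below runs tabular Q-learning on a five-position line world: the agent tries actions, observes rewards, and gradually prefers the actions that lead toward the goal, much like a student revisiting missed problems.

```python
import random

# Toy environment: positions 0..4 on a line; the agent starts at 0
# and receives a reward of +1 only when it reaches position 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # move left or move right

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-learning: learn Q[s][a], the expected return of taking action a in state s.
alpha, gamma, explore = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < explore:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2, r, done = step(s, ACTIONS[a])
        # move the estimate toward reward plus discounted value of the best next action
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy action in each non-goal state; expected: [1, 1, 1, 1], i.e. always move right.
print([max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])
```

Deep reinforcement learning replaces the small table Q with a neural network, so the same idea can scale to large state spaces such as game screens.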

We study deep reinforcement learning using arcade games and artificially constructed environments. It is hard to test a new idea on an autonomous vehicle, but in a game a new algorithm can be evaluated quickly, so games are a more efficient way to find good algorithms. We also conduct research on deep learning applied to radar. The idea comes from bats, which receive echolocation signals and use their neural networks to distinguish food from obstacles. For example, compressed sensing, a method from information theory, was adapted to deep learning and applied to radar, which yielded enhanced performance. In addition, we study the IoT (Internet of Things) with machine learning applied: for instance, without anyone directly switching on a heating system or air conditioner, the system can infer the occupants' intentions and operate for both comfort and energy efficiency.
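To give a rough sense of the compressed-sensing idea mentioned above (a generic sketch under illustrative assumptions, not the lab's radar pipeline), the code below recovers a sparse signal, such as a scene with a few strong reflectors, from far fewer random linear measurements than the signal length, using iterative soft-thresholding (ISTA).

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a length-200 signal with only 5 nonzero entries (sparse),
# loosely analogous to a radar scene with a few strong reflectors.
n, k, m = 200, 5, 60
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 5.0 * rng.normal(0, 1, k)

# Take m << n random linear measurements y = A x plus a little noise.
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true + 0.01 * rng.normal(0, 1, m)

# ISTA: minimize 0.5*||A x - y||^2 + lam*||x||_1 by gradient steps plus soft-thresholding.
lam = 0.05
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)            # gradient of the quadratic data-fit term
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold step

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The deep-learning variants referred to above would presumably learn parts of this reconstruction from data rather than hand-designing it, but the underlying premise, that a sparse scene can be recovered from few measurements, is the same.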

 

4. Prospects of the research area

Now is the era of information. Among the ten companies with the largest market capitalization, seven are IT companies, and all of them are investing in AI with great expectations. In that sense, the future of AI looks bright. However, no one is sure what AI will become. An "AI winter", a period of reduced funding and interest in artificial-intelligence research, has occurred several times, because in the past computing power was not sufficient to support AI: whenever a new AI algorithm was proposed, AI became popular for a short time and was then ignored again. By contrast, computers, the internet, and mobile communication have developed steadily for more than 20 years. I think we have now reached the point where AI research, too, can be expected to develop steadily.

 

5. The scope of the laboratory, atmosphere, and career after graduation

There are nine students in total: four master's students and five doctoral students. We hold a seminar and have lunch together once a week, and we go skiing together in winter.

After graduating with a doctorate, students typically become professors or researchers in a laboratory. Last year, Dr. Lee, who earned her Ph.D. in our laboratory, became the first female professor in the Department of Electrical Engineering at POSTECH. Master's students may join large companies or research institutes such as ETRI. One recent graduate joined a venture company working on deep learning and machine learning.

 

6. Are there any subjects you recommend to students who want to join your laboratory, or attitudes they should have?

I am looking for students who are interested in deep learning, AI, and fundamentals such as information theory and mathematics. When I was an undergraduate, I took some classes in the physics and mathematics departments, and they helped me greatly in my later research. It is really important to look at the big picture of fundamental questions and to think deeply. Among mathematics courses, probability, one of the most fundamental subjects, is especially important.

 

7. Do you have any advice for our EE Newsletter readers?

Trends in deep-learning research change rapidly; for example, AlphaGo Zero appeared only about a year after AlphaGo. Because trends change so quickly, the basics are really important. With strong fundamentals, one can adapt swiftly to a new research field; with weak fundamentals, someone working on whatever is currently popular will find it hard to change research fields later.

Just as concentrating on one important page and truly making it your own yields longer-lasting knowledge and more scholarly delight than skimming a whole book, ask fundamental questions and study from the basics.

 

Thank you, Professor Sae-Young Chung, for generously agreeing to this interview.

Reporter Minki Kang zzxc1133@kaist.ac.kr

Reporter Yoonseong Kim yskimno1@kaist.ac.kr