We strive to be the world’s top IT powerhouse.
Our mission is to lead innovations in information technology, create lasting impact, and educate next-generation leaders of the world.
AI and machine learning are a key thrust in EE research.
AI/machine learning efforts are already a big part of ongoing research in all six divisions - Computer, Communication, Signal, Wave, Circuit, and Device - of KAIST EE.
Highlights
Dr. CheolJun Park appointed as an assistant professor at Kyung Hee University
KAIST EE graduate (BS, MS, and PhD) Dr. CheolJun Park, from Prof. Yongdae Kim’s lab, will join the School of Computing at Kyung Hee University as an Assistant Professor in Fall 2024.
Dr. CheolJun Park earned his Ph.D. in 2024 with a dissertation titled “A study on dynamic method for finding implementation vulnerabilities in cellular baseband.”
During his degree, he worked on over-the-air security testing and reported several vulnerabilities in cellular modems from Qualcomm, Samsung, and Google that allow an attacker to eavesdrop on and manipulate data traffic, spoof a smartphone’s time, or trigger memory crashes. In addition to his research, Dr. Park interned with Qualcomm’s wireless security team. After receiving his Ph.D. in February 2024, he continued his research in Prof. Yongdae Kim’s lab as a postdoctoral researcher.
He will continue his research in the fields of cellular and wireless security.
Please join us in offering Dr. CheolJun Park warm congratulations.
A new faculty member, Professor Insu Han, appointed at our School
Professor Insu Han will be appointed to our School as of September 1st, 2024.
Congratulations on your appointment.
Professor Yong Man Ro’s Research Team Wins Outstanding Paper Award at Top-Tier AI Conference (ACL 2024)
<(From left) Ph.D. candidate Se Jin Park, Ph.D. candidate Chae Won Kim>
PhD students Se Jin Park and Chae Won Kim from Professor Yong Man Ro’s research team in the School of Electrical Engineering at KAIST have won the Outstanding Paper Award at the ACL (Association for Computational Linguistics) 2024 conference, held in Bangkok.
ACL is recognized as the world’s leading conference in the field of Natural Language Processing (NLP) and is one of the top-tier international conferences in Artificial Intelligence (AI).
Their award-winning paper, titled “Let’s Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation,” introduces an innovative model designed to make interactions between humans and AI more natural and human-like.
Unlike traditional text-based or speech-based dialogue models, this research developed a Human Multimodal LLM (Large Language Model) that enables AI to comprehend both visual cues and vocal signals from humans. Additionally, it allows the AI to engage in conversations using human-like facial expressions and speech.
This breakthrough opens up new possibilities for improving the intuitiveness and effectiveness of human-AI interactions by simultaneously processing visual and auditory signals during conversations.
Professor Yong Man Ro stated, “This research marks a significant advancement in human-AI interaction, and we hope this technology will be widely applied in various real-world applications.
This award is yet another example of the international recognition of the excellence of AI research at KAIST’s School of Electrical Engineering.”
Professor Insu Yun’s Lab (as a Part of Team Atlanta) Advances to the Finals of the U.S. DARPA ‘AI Cyber Challenge (AIxCC)’ and Secures $2 Million in Research Funding
<Professor Insu Yun>
Ph.D. Candidate Hee Suk Yoon (Prof. Chang D. Yoo) Wins Excellent Paper Award
<(From left) Professor Chang D. Yoo, integrated Ph.D. candidate Hee Suk Yoon>
The Korean Society for Artificial Intelligence holds conferences quarterly, and this year’s summer conference is scheduled to take place from August 15 to 17 at BEXCO in Busan.
Hee Suk Yoon, a PhD candidate, has been recognized for the excellence of his paper titled “BI-MDRG: Bridging Image History in Multimodal Dialogue Response Generation” and has been selected as an award recipient.
Moreover, the findings will be presented at the European Conference on Computer Vision (ECCV) 2024, one of the top international conferences in the field of computer vision, to be held in Milan, Italy, in September this year.
The detailed information is as follows:
* Conference Name: 2024 Summer Conference of the Korean Artificial Intelligence Association
* Period: August 15 to 17, 2024
* Award Name: Excellent Paper Award
* Authors: Hee Suk Yoon, Eunseop Yoon, Chang D. Yoo (Supervising Professor)
* Paper Title: BI-MDRG: Bridging Image History in Multimodal Dialogue Response Generation
This research is considered an innovative breakthrough that overcomes the limitations of existing large multimodal dialogue models, such as ChatGPT, by maintaining consistency in image generation within multimodal dialogues.
Figure 1: Image Response of ChatGPT and BI-MDRG (ours)
Traditional multimodal dialogue models prioritize generating textual descriptions of images and then create images using text-to-image models.
This approach often fails to sufficiently reflect the visual information from previous dialogues, leading to inconsistent image responses.
However, Professor Yoo’s BI-MDRG minimizes image information loss through a direct image referencing technique, enabling consistent image response generation.
Figure 2: Framework of previous multimodal dialogue system and our proposed BI-MDRG
BI-MDRG is a new system designed to solve the problem of image information loss in existing multimodal dialogue models by proposing Attention Mask Modulation and Citation Module.
Attention Mask Modulation lets the model attend directly to the image itself rather than to its textual description, while the Citation Module keeps image responses consistent by citation-tagging recurring objects in the conversation and referencing them directly when generating new images.
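The attention-masking idea can be illustrated with a toy sketch. This is our own simplified illustration, not the paper’s actual implementation: the function name `modulated_attention` and the token-type labels are hypothetical. When an image is present in the dialogue history, the scores for its caption tokens are suppressed, so the attention probability mass flows to the image tokens themselves.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def modulated_attention(scores, token_types):
    """Toy sketch of attention mask modulation (illustrative only).

    scores: (num_queries, num_keys) raw attention scores.
    token_types: per-key label, e.g. "img" (image token),
                 "cap" (caption token), "txt" (other text).
    Caption positions are masked out so attention goes to the
    image tokens rather than their textual description.
    """
    mask = np.array([t == "cap" for t in token_types])
    scores = scores.copy()
    scores[:, mask] = -1e9  # effectively zero weight after softmax
    return softmax(scores, axis=-1)

# With uniform raw scores, the two unmasked positions ("txt", "img")
# split the attention equally and the caption tokens get ~0.
weights = modulated_attention(np.zeros((1, 4)), ["txt", "img", "cap", "cap"])
```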
The research team validated BI-MDRG’s performance across various multimodal dialogue benchmarks, achieving high dialogue performance and consistency.
Figure 3: Overall framework of BI-MDRG
BI-MDRG offers practical solutions in various multimodal application fields.
For instance, in customer service, it can enhance user satisfaction by providing accurate images based on conversation content.
In education, it can improve understanding by consistently providing relevant images and texts in response to learners’ questions. Additionally, in the entertainment field, it can enable natural and immersive interactions in interactive games.
A new faculty member, Professor Jaeil Baek, appointed at our School
Professor Jaeil Baek will be appointed to our School as of October 1st, 2024.
Congratulations on your appointment.
*Laboratory Website Link: https://sites.google.com/view/kiplab
Professor Junmo Kim’s research team has garnered funding for the 2024 SW Star Lab project under the Information and Communication Broadcasting Technology Development Program
<Professor Junmo Kim>
Professor Junmo Kim’s research team from our department has garnered funding for the 2024 SW Star Lab project under the Information and Communication Broadcasting Technology Development Program, administered by the Ministry of Science and ICT and the Institute of Information & Communications Technology Planning & Evaluation (IITP).
The SW Star Lab project aims to secure world-class original technology in five core SW domains (‘Big Data’, ‘Cloud’, ‘Algorithm’, ‘Application SW’, and ‘Artificial Intelligence’) and to cultivate master’s and doctoral-level SW talent. Selected research teams receive funding of approximately 1.5 billion won (about 200 million won annually) over an eight-year period.
Professor Junmo Kim’s research team proposed a project titled “Developing Sustainable, Real-Time Generative AI for Multimodal Interaction” in the ‘Artificial Intelligence’ domain.
This project seeks to overcome the limitations of current 2D-based image/video generation models through the development of 3D-based models. The research focuses on developing image/video generation models capable of understanding objects as complete entities rather than from a specific (fixed) viewpoint, enabling more realistic object generation and movement representation.
Furthermore, the proposal extends beyond simple generation models to include understanding multimodal inputs and restricting harmful content, aiming to develop technology that generates content with a deep comprehension of societal and economic implications.
This project is expected to create synergies by collaborating with the Vision-Centered Artificial General Intelligence (ViC-AGI) Lab, led by Professor In So Kweon from our department, which has been designated as KAIST Cross-Generation Collaborative Lab.
Professor Minsoo Rhu’s research lab has been selected for the 2024 SW Star Lab Project under the Information and Communication Broadcasting Technology Development Program
<Professor Minsoo Rhu>