EE Professor Yoo Chang-Dong’s Lab Wins 1st Place in the 2024 SNUBH AKI Datathon


<Photo (from left): Professor Changdong Yoo, Ph.D. candidate Ji Woo Hong, Ph.D. candidate Gwanhyeong Koo, MS candidate Young Hwan Lee, Ph.D. candidate Sunjae Yoon >

 

Doctoral students Jiwoo Hong, Kwanhyung Koo, and Seonjae Yoon, along with master’s student Younghwan Lee, from Professor Yoo Chang-Dong’s lab participated in the “2024 Bundang Seoul National University Hospital Acute Kidney Injury Datathon” under the team name “U-Vengers” and won the 1st Place Award.

 

This competition, hosted by Bundang Seoul National University Hospital, was an online datathon where participants used acute kidney injury (AKI) patient datasets to propose ideas and develop digital healthcare AI models.

 

The key goal was to create AI models that not only performed well but also demonstrated fairness across factors like gender and religion. The U-Vengers were recognized for the performance, fairness, creativity, and applicability of their developed model.

 


<Team ‘U-Vengers’ receiving the 1st Place Award at the ‘2024 Bundang Seoul National University Hospital Acute Kidney Injury Datathon’>

Details are as follows:

 

Event: 2024 Bundang Seoul National University Hospital Acute Kidney Injury Datathon

 

Overview: Participants used an AKI patient dataset to develop AI models for AKI prediction, applicable in real clinical settings. In the preliminary round, models were developed using the MIMIC-IV dataset, and in the final round, real data from Bundang Seoul National University Hospital was used to build practical models.

 

Competition Period: September 12 – October 20

 

Award: 1st Place (Director of Biomedical Research Institute Award, Bundang Seoul National University Hospital)

 

Participants: Jiwoo Hong (Team Leader), Kwanhyung Koo, Younghwan Lee, Seonjae Yoon

 

Prof. Kyeongha Kwon and Prof. Sang-Gug Lee’s Team Develops Electrochemical Impedance Spectroscopy (EIS) Technology


<Photo (from left): Ph.D. candidate Young-Nam Lee, Prof. Sang-Gug Lee, Prof. Kyeongha Kwon>

 

Accurately diagnosing the state of electric vehicle (EV) batteries is essential for their efficient management and safe use. KAIST researchers have developed a new technology that can diagnose and monitor the state of batteries with high precision using only small amounts of current, which is expected to maximize the batteries’ long-term stability and efficiency.

 

An EE research team led by Professors Kyeongha Kwon and Sang-Gug Lee of the School of Electrical Engineering has developed electrochemical impedance spectroscopy (EIS) technology that can be used to improve the stability and performance of high-capacity batteries in electric vehicles.

 

EIS is a powerful tool that measures the magnitude and changes of a battery’s impedance*, allowing the evaluation of battery efficiency and loss. It is considered an important tool for assessing the state of charge (SOC) and state of health (SOH) of batteries. Additionally, it can be used to identify thermal characteristics, detect chemical and physical changes, predict battery life, and determine the causes of failures.

*Battery Impedance: A measure of the resistance to current flow within the battery, used to assess battery performance and condition.
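As a purely illustrative sketch (not the team’s system), the impedance at a single excitation frequency can be estimated by injecting a small sinusoidal current and taking the ratio of the voltage and current phasors, here via a lock-in correlation; all names and component values below are hypothetical:

```python
import numpy as np

def impedance_at(i_meas, v_meas, f, fs):
    """Estimate complex impedance Z = V/I at excitation frequency f (Hz)
    from sampled current i_meas and voltage v_meas (sampling rate fs, Hz).
    Uses a lock-in correlation; exact when the record spans whole periods."""
    t = np.arange(len(i_meas)) / fs
    ref = np.exp(-2j * np.pi * f * t)      # lock-in reference phasor
    V = np.mean(v_meas * ref)              # voltage phasor (common scale factor)
    I = np.mean(i_meas * ref)              # current phasor (same scale factor)
    return V / I                           # the scale factor cancels in the ratio

# Hypothetical cell modeled as a resistor in series with a capacitor: Z = R + 1/(jwC)
f, fs = 10.0, 1000.0                       # 10 Hz excitation, 1 kHz sampling
R, C = 0.05, 1.0
Z_true = R + 1.0 / (2j * np.pi * f * C)
t = np.arange(int(fs)) / fs                # one second = 10 whole periods
i = 0.010 * np.sin(2 * np.pi * f * t)      # 10 mA perturbation, as in the article
v = 0.010 * abs(Z_true) * np.sin(2 * np.pi * f * t + np.angle(Z_true))
print(impedance_at(i, v, f, fs))           # matches Z_true for this noiseless model
```

Sweeping the excitation frequency and repeating this estimate yields the impedance spectrum that EIS analyzes.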

 

However, traditional EIS equipment is expensive and complex, making it difficult to install, operate, and maintain. Moreover, due to sensitivity and precision limitations, applying current disturbances of several amperes (A) to a battery can cause significant electrical stress, increasing the risk of battery failure or fire and making it difficult to use in practice.

 


< Figure 1. Flow chart for diagnosis and prevention of unexpected combustion via the use of the electrochemical impedance spectroscopy (EIS) for the batteries for electric vehicles. >

 

To address this, the KAIST research team developed and validated a low-current EIS system for diagnosing the condition and health of high-capacity EV batteries. This EIS system can precisely measure battery impedance with low current disturbances (10 mA), minimizing thermal effects and safety issues during the measurement process.

 

In addition, the system minimizes bulky and costly components, making it easy to integrate into vehicles. The system was proven effective in identifying the electrochemical properties of batteries under various operating conditions, including different temperatures and SOC levels.

 

Professor Kyeongha Kwon (the corresponding author) explained, “This system can be easily integrated into the battery management system (BMS) of electric vehicles and has demonstrated high measurement accuracy while significantly reducing the cost and complexity compared to traditional high-current EIS methods. It can contribute to battery diagnosis and performance improvements not only for electric vehicles but also for energy storage systems (ESS).”

 

This research, in which Young-Nam Lee, a doctoral student in the School of Electrical Engineering at KAIST participated as the first author, was published in the prestigious international journal IEEE Transactions on Industrial Electronics (top 2% in the field; IF 7.5) on September 5th. (Paper Title: Small-Perturbation Electrochemical Impedance Spectroscopy System With High Accuracy for High-Capacity Batteries in Electric Vehicles, Link: https://ieeexplore.ieee.org/document/10666864)

 


< Figure 2. Impedance measurement results of large-capacity batteries for electric vehicles. ZEW (commercial EIS equipment; MP10, Wonatech) versus ZMEAS (proposed system) >

 

This research was supported by the Basic Research Program of the National Research Foundation of Korea, the Next-Generation Intelligent Semiconductor Technology Development Program of the Korea Evaluation Institute of Industrial Technology, and the AI Semiconductor Graduate Program of the Institute of Information & Communications Technology Planning & Evaluation.

EE Professor Junil Choi’s Research Team Leads Development of New Visible Light Communication Encryption Technology Using Chiral Nanoparticles in Collaboration with Seoul National University


 


<Photo (from left): Professor Junil Choi, integrated master’s and PhD student Gunho Han, Seoul National University PhD student Junghyun Han, Dr. Jiawei Liu, Professor Ki Tae Nam>

 

Recently, next-generation visible light communication technology, which leverages the high frequency and straight-line propagation of the visible light used in lighting systems, has attracted significant interest. Visible light communication offers high security and data transmission speed, but it remains vulnerable to eavesdropping through signal leakage, so its encryption needs further advancement. The research team’s novel approach addresses this gap by harnessing the interaction between polarization and the chiral optical properties of nanoparticles, which significantly enhances encryption performance.

 

The collaborative research from KAIST and Seoul National University has successfully used chiral nanoparticles to develop a secure visible light communication technology that greatly improves security. They achieved this by leveraging the nanoparticles’ chiral optical properties.

 

The team demonstrated through simulations that the security of visible light communication can be enhanced by optimizing the polarization based on the chiral properties of the nanoparticles—properties that are exclusive to authorized receivers. This effectively blocks any eavesdropping attempts.

 


<Figure 1. Conceptual illustration of the novel polarization-based visible light communication encryption system developed using chiral nanoparticles>

 

The research also revealed that signals passing through chiral nanoparticles create a unique differential channel due to circular dichroism—a phenomenon where the absorption of left- and right-handed circularly polarized light differs. The team found that adjusting the signal strength received through this differential channel can further boost encryption capabilities.
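In schematic terms (an illustrative textbook model of circular dichroism, not the paper’s exact formulation), left- and right-handed circularly polarized components see different absorption coefficients in a chiral medium of thickness d, which is what creates the differential channel:

```latex
I_L = I_0\, e^{-\alpha_L d}, \qquad
I_R = I_0\, e^{-\alpha_R d}, \qquad
\Delta I = I_L - I_R = I_0\left(e^{-\alpha_L d} - e^{-\alpha_R d}\right) \neq 0
\quad \text{when } \alpha_L \neq \alpha_R .
```

Under this simplified picture, a receiver equipped with nanoparticles of the matching chirality observes the intended differential signal, while a receiver without them does not.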

 

Furthermore, by comparing the bit error rates of legitimate receivers and potential eavesdroppers, the team demonstrated that visible light communication, once encrypted in this way, becomes nearly impossible to clone or intercept. They also showed that optimizing the polarization state based on the chiral properties allows for selective tuning of the system’s security and energy efficiency.

 

Professor Junil Choi emphasized, “This achievement was possible thanks to the collaboration between experts in materials science and electrical engineering. Moving forward, we intend to continue advancing visible light communication technology based on nanoparticles, aiming to develop a fundamentally eavesdropping-proof communication system.”

 

The study, co-authored by KAIST PhD candidate Gunho Han, Seoul National University PhD candidate Junghyun Han, and postdoctoral researcher Dr. Jiawei Liu, was published in the September issue of the prestigious multidisciplinary journal Nature Communications (Paper title: Spatiotemporally modulated full-polarized light emission for multiplexed optical encryption). This research was supported by the Agency for Defense Development through the Future Challenge Defense Technology Development Program.

Master’s student Jimin Lee from Professor Hyeon-Min Bae’s lab wins the Poster Excellence Award at the fNIRS 2024 Conference

Master’s student Jimin Lee from Professor Hyeon-Min Bae’s lab wins the Poster Excellence Award at the fNIRS 2024 Conference

 


<From left to right: Master’s student Jimin Lee, Ph.D. students Seongkwon Yu and Bumjun Koh, and Master’s graduate Yuqing Liang>

 

Jimin Lee, a master’s student in Professor Hyeon-Min Bae’s lab, was awarded the prestigious Poster Excellence Award at the fNIRS 2024 conference, held from September 11 to 15 at the University of Birmingham, UK.

 

Now in its 7th edition, fNIRS is a biennial international conference that brings together basic and clinical scientists focused on understanding the functional properties of biological tissues, including the brain.

 

The award-winning research poster, titled “Fiber-less Speckle Contrast Optical Spectroscopy System Using a Multi-Hole Aperture Method,” was a collaborative project involving Jimin Lee, Ph.D. students Seongkwon Yu and Bumjun Koh, and Master’s graduate Yuqing Liang.

 

This research was recognized by the fNIRS 2024 Program Committee for its excellence, earning the Poster Excellence Award, which is part of the Scientific Excellence Awards.

 

The award is given to master’s, Ph.D., and postdoctoral researchers who deliver outstanding posters or presentations, chosen from among the 350 posters presented at the conference.

 


KAIST EE’s Insue Won (M.S., graduated Aug. 2024), Jeoungmin Ji (Ph.D. candidate), and Donggyun Lee in Prof. Seunghyup Yoo’s Lab Awarded at the 2024 International Meeting on Information Display (IMID)



<(from left) Master’s graduate Insue Won, Ph.D. candidate Jeoungmin Ji>

 

Insue Won (M.S., graduated Aug. 2024) and Jeoungmin Ji (Ph.D. candidate), advised by Prof. Seunghyup Yoo, won the Best Poster Paper Award at the 2024 International Meeting on Information Display (IMID) for their work entitled “Temperature-Dependent Dynamics of Triplet Excitons in MR-TADF OLEDs: Insights from Magneto-Electroluminescence Analysis.”

 

In addition, Dr. Donggyun Lee (Ph.D., graduated Feb. 2024) won the “Kim Yong-Bae Award Grand Prize” at IMID for his work on stretchable OLED displays.

 

The International Meeting on Information Display (IMID) is one of the world’s two largest international conferences in the field of display technology, held annually during the summer.

This year, the conference took place from August 20 to 23 at the Jeju Convention Center (ICC Jeju).

 

Ph.D. candidate Jeoungmin Ji presented a poster titled “Temperature-Dependent Dynamics of Triplet Excitons in MR-TADF OLEDs: Insights from Magneto-Electroluminescence Analysis,” which was conducted in collaboration with Samsung Display and supported by the Technology Innovation Program funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea).

 

Additionally, Dr. Donggyun Lee was awarded the prestigious ‘Kim Yong Bae Award Grand Prize,’ which is presented annually at IMID to one graduate who submits an outstanding thesis in the field of display technology.

 


  <Best Poster Award> 

 

              

<Dr. Lee receiving the ‘Kim Yong Bae Award Grand Prize’ at IMID 2024>

Professor In-So Kweon Selected as Recipient of the 38th Inchon Prize in the Science and Technology Category


 

<Professor In-So Kweon>
 

Professor In-So Kweon has been selected as the recipient of the 38th Inchon Prize in the Science and Technology category, an award hosted by the Inchon Memorial Foundation and The Dong-A Ilbo.

 

The Inchon Memorial Foundation and The Dong-A Ilbo established the Inchon Prize in 1987 to honor the legacy of Inchon Kim Seong-Soo, who, during the turbulent period of Japanese colonial rule, founded The Dong-A Ilbo and Gyeongseong Textile Company, and nurtured talent through institutions such as Central School and Boseong Professional School (now Korea University).

 

Regarding Professor Kweon’s selection, the Inchon Memorial Foundation stated, “In the 1980s, when robotics and computer vision were largely unexplored fields in South Korea, Professor Kweon embarked on pioneering research that yielded world-class results. As a first-generation researcher in computer vision, he has trained over 200 students and laid the foundation for the AI computer vision field. Recently, he extended the ‘attention’ model, which simulates human focus, to computer vision. He also developed the CBAM algorithm, which significantly enhanced image recognition performance, with the related paper being cited over 20,000 times.”

 

Professor Kweon is a member of IEEE and has held key positions including Chair of the Department of Automation and Design Engineering at KAIST, Editorial Board Member of the International Journal of Computer Vision, Head of KAIST’s Robotics and Computer Vision Laboratory, Director of the KAIST P3 Digicar Center, Co-Chair of the 11th Asian Conference on Computer Vision, and President of the Korea Robotics Society in 2016.

 

 

EE Professor Dongsu Han’s Research Team Develops Technology to Accelerate AI Model Training in Distributed Environments Using Consumer-Grade GPUs



<(from left) Professor Dongsu Han, Dr. Hwijoon Iim, Ph.D. Candidate Juncheol Ye>

 

Professor Dongsu Han’s research team of the KAIST Department of Electrical Engineering has developed a groundbreaking technology that accelerates AI model training in distributed environments with limited network bandwidth using consumer-grade GPUs.

 

Training the latest AI models typically requires expensive infrastructure, such as high-performance GPUs costing tens of millions of won and high-speed dedicated networks.

As a result, most researchers in academia and small to medium-sized enterprises have to rely on cheaper, consumer-grade GPUs for model training.

However, they face difficulties in efficient model training due to network bandwidth limitations.

 


<Figure 1. Problems in Conventional Low-Cost Distributed Deep Learning Environments>

 

To address these issues, Professor Han’s team developed a distributed learning framework called StellaTrain.

StellaTrain accelerates model training on low-cost GPUs by integrating a pipeline that utilizes both CPUs and GPUs. It dynamically adjusts batch sizes and compression rates according to the network environment, enabling fast model training in multi-cluster and multi-node environments without the need for high-speed dedicated networks.

 

StellaTrain optimizes the learning pipeline by offloading gradient compression and optimizer computation to the CPU, maximizing GPU utilization. The team developed and applied a new sparse optimization technique and cache-aware gradient compression technology that work efficiently on CPUs.
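As a rough illustration of gradient compression in general (a generic top-k scheme, not StellaTrain’s actual algorithm), the CPU-side step might keep only the largest-magnitude gradient entries before transmission:

```python
import numpy as np

def compress_topk(grad, ratio=0.01):
    """Keep the top `ratio` fraction of entries by magnitude;
    only (indices, values) need to be sent over the network."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # k largest-magnitude positions
    return idx, flat[idx].copy(), grad.shape

def decompress_topk(idx, vals, shape):
    """Rebuild a dense gradient, zero everywhere except the kept entries."""
    flat = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    flat[idx] = vals
    return flat.reshape(shape)

grad = np.random.randn(4, 256).astype(np.float32)
idx, vals, shape = compress_topk(grad, ratio=0.05)
restored = decompress_topk(idx, vals, shape)
residual = grad - restored        # typically fed back into the next step (error feedback)
```

In practice, such schemes accumulate the discarded residual into the next iteration’s gradient so compression stays unbiased over time, and the compression ratio is exactly the kind of knob a system like StellaTrain tunes against measured network bandwidth.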

 

This implementation creates a seamless learning pipeline where CPU tasks overlap with GPU computations. Furthermore, dynamic optimization technology adjusts batch sizes and compression rates in real-time according to network conditions, achieving high GPU utilization even in limited network environments.

 


<Figure 2. Overview of the StellaTrain Learning Pipeline>

 

Through these innovations, StellaTrain significantly improves the speed of distributed model training in low-cost multi-cloud environments, achieving up to 104 times performance improvement compared to the existing PyTorch DDP.

 

Professor Han’s research team has paved the way for efficient AI model training without the need for expensive data center-grade GPUs and high-speed networks. This breakthrough is expected to greatly aid AI research and development in resource-constrained environments, such as academia and small to medium-sized enterprises.

 

Professor Han emphasized, “KAIST is demonstrating leadership in the AI systems field in South Korea.” He added, “We will continue active research to implement large-scale language model (LLM) training, previously considered the domain of major IT companies, in more affordable computing environments. We hope this research will serve as a critical stepping stone toward that goal.”

 

The research team included Dr. Hwijoon Iim and Ph.D. candidate Juncheol Ye from KAIST, as well as Professor Sangeetha Abdu Jyothi from UC Irvine. The findings were presented at ACM SIGCOMM 2024, the premier international conference in the field of computer networking, held from August 4 to 8 in Sydney, Australia (Paper title: Accelerating Model Training in Multi-cluster Environments with Consumer-grade GPUs). 

 

Meanwhile, Professor Han’s team has also made continuous research advancements in the AI systems field, presenting a framework called ES-MoE, which accelerates Mixture of Experts (MoE) model training, at ICML 2024 in Vienna, Austria.

 

By overcoming GPU memory limitations, they significantly enhanced the scalability and efficiency of large-scale MoE model training, enabling fine-tuning of a 15-billion parameter language model using only four GPUs. This achievement opens up the possibility of effectively training large-scale AI models with limited computing resources.

 


<Figure 3. Overview of the ES-MoE Framework>

 


<Figure 4. Professor Dongsu Han’s research team has enabled AI model training in low-cost computing environments, even with limited or no high-performance GPUs, through their research on StellaTrain and ES-MoE.>

 


Professor Yong Man Ro’s Research Team Wins Outstanding Paper Award at AI Top-Tier Conference (ACL 2024)



<(from left) Ph.D. candidate Se Jin Park, Ph.D. candidate Chae Won Kim>

 

Ph.D. students Se Jin Park and Chae Won Kim from Professor Yong Man Ro’s research team in the School of Electrical Engineering at KAIST have won the Outstanding Paper Award at the ACL (Association for Computational Linguistics) 2024 conference, held in Bangkok.

ACL is recognized as the world’s leading conference in the field of Natural Language Processing (NLP) and is one of the top-tier international conferences in Artificial Intelligence (AI).

 

Their award-winning paper, titled “Let’s Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation,” introduces an innovative model designed to make interactions between humans and AI more natural and human-like.

 

Unlike traditional text-based or speech-based dialogue models, this research developed a Human Multimodal LLM (Large Language Model) that enables AI to comprehend both visual cues and vocal signals from humans. Additionally, it allows the AI to engage in conversations using human-like facial expressions and speech.

 

This breakthrough opens up new possibilities for improving the intuitiveness and effectiveness of human-AI interactions by simultaneously processing visual and auditory signals during conversations.

 


The paper was also presented as an oral presentation at the ACL 2024 conference in Bangkok, where it garnered significant attention.
 

Professor Yong Man Ro stated, “This research marks a significant advancement in human-AI interaction, and we hope this technology will be widely applied in various real-world applications.

This award is yet another example of the international recognition of the excellence of AI research at KAIST’s School of Electrical Engineering.”

 

Professor Yun Insu’s Lab (as a Part of Team Atlanta) Advances to Finals of the U.S. DARPA ‘AI Cyber Challenge (AIxCC)’ and Secures $2 Million in Research Funding



<Professor Insu Yun>

 

Team Atlanta, which includes Professor Yun Insu’s lab, successfully advanced to the finals of the DARPA AI Cyber Challenge (AIxCC) held in Las Vegas, USA, from August 10 to 11. The team has secured $2 million (approximately 2.7 billion KRW) in research funding.
 
AIxCC is a competition in which teams pit their AI-based Cyber Reasoning Systems (CRS) against each other. DARPA’s challenges embed vulnerabilities in real-world software such as Linux, and each team’s CRS is tasked with automatically analyzing the software to identify and patch these vulnerabilities. DARPA then evaluates each CRS based on the number and variety of vulnerabilities discovered, the accuracy of the patches, and other factors.
 
 
Out of 91 registered teams and 39 participating teams in the preliminary round, Team Atlanta was selected as one of the seven teams advancing to the finals.
Team Atlanta consists of members from KAIST, Georgia Tech, NYU, POSTECH, and Samsung Research.
 
Notably, Team Atlanta’s CRS achieved a remarkable feat by discovering a new vulnerability in the famous software sqlite3, which was not initially intended by the challenge organizers. This achievement is seen as a significant milestone, demonstrating the potential of AI to bring innovation to the field of cybersecurity, aligning well with the goals of AIxCC.
 
As a result of their success, Team Atlanta has been awarded $2 million in research funding and will advance to the final competition, which will take place at DEF CON in August 2025, where the ultimate winner will be determined.

 


Ph.D. Candidate Hee Suk Yoon (Prof. Chang D. Yoo) Wins Excellent Paper Award


                

<(From left) Professor Chang D. Yoo, integrated Ph.D. candidate Hee Suk Yoon>

 

The Korean Society for Artificial Intelligence holds conferences quarterly, and this year’s summer conference is scheduled to take place from August 15 to 17 at BEXCO in Busan.

Hee Suk Yoon, a PhD candidate, has been recognized for the excellence of his paper titled “BI-MDRG: Bridging Image History in Multimodal Dialogue Response Generation” and has been selected as an award recipient.

Moreover, the findings will be presented at the ‘European Conference on Computer Vision (ECCV) 2024’, one of the top international conferences in the field of computer vision, to be held in Milan, Italy, in September this year (Paper title: BI-MDRG: Bridging Image History in Multimodal Dialogue Response Generation).

 

The detailed information is as follows:
* Conference Name: 2024 Summer Conference of the Korean Artificial Intelligence Association
* Period: August 15 to 17, 2024
* Award Name: Excellent Paper Award
* Authors: Hee Suk Yoon, Eunseop Yoon, Chang D. Yoo (Supervising Professor)
* Paper Title:  BI-MDRG: Bridging Image History in Multimodal Dialogue Response Generation

 

This research is considered an innovative breakthrough that overcomes the limitations of existing large multimodal dialogue models, such as ChatGPT, by maintaining consistency in image generation within multimodal dialogues.

 


Figure 1: Image Response of ChatGPT and BI-MDRG (ours)

Traditional multimodal dialogue models prioritize generating textual descriptions of images and then create images using text-to-image models.

This approach often fails to sufficiently reflect the visual information from previous dialogues, leading to inconsistent image responses.

However, Professor Yoo’s BI-MDRG minimizes image information loss through a direct image referencing technique, enabling consistent image response generation.

 


Figure 2: Framework of the previous multimodal dialogue system and the proposed BI-MDRG

BI-MDRG is a new system designed to solve the problem of image information loss in existing multimodal dialogue models by proposing Attention Mask Modulation and Citation Module.

Attention Mask Modulation allows the dialogue to focus directly on the image itself instead of its textual description, while the Citation Module ensures consistent responses by directly referencing objects that should be maintained in image responses through citation tagging of the same objects appearing in the conversation.

The research team validated BI-MDRG’s performance across various multimodal dialogue benchmarks, achieving high dialogue performance and consistency.

 


Figure 3: Overall framework of BI-MDRG

BI-MDRG offers practical solutions in various multimodal application fields.

For instance, in customer service, it can enhance user satisfaction by providing accurate images based on conversation content.

In education, it can improve understanding by consistently providing relevant images and texts in response to learners’ questions. Additionally, in the entertainment field, it can enable natural and immersive interactions in interactive games.