
Highlights

Award


Youngjoon Lee, a Ph.D. candidate from Prof. Joonhyuk Kang’s laboratory at KAIST, received the Best Paper Award at the D2ET Workshop held in conjunction with IEEE BigData 2025.

 

The D2ET Workshop aims to address the increasing fragmentation of data across the real world—so-called “data islands”—which significantly reduces the utility of big data. To enhance data value and usability, the workshop explores new research directions in next-generation databases and is jointly organized under the A3 Foresight Program, supported by JSPS (Japan), NRF (Korea), and NSFC (China).

 

The awarded paper proposes a generative AI–powered federated learning plugin designed for robust learning in heterogeneous IoT environments, aligning well with the workshop’s mission of promoting data integration and effective data utilization.

News
< (From left) Prof. Jung-Woo Choi, Dr. Dongheon Lee, and Ph.D. student Younghoo Kwon. >
Researchers led by Professor Jung-Woo Choi at the School of Electrical Engineering, KAIST, have developed DeepASA, a unified auditory AI model capable of comprehensive auditory scene analysis using diverse acoustic cues, much as human hearing does. The research was presented at NeurIPS 2025, the world’s top-tier AI conference, under the title “DeepASA: An Object-Oriented Multi-Purpose Network for Auditory Scene Analysis.”
 
Humans naturally analyze sounds collected through both ears, extracting information such as the direction, type, and onset time of each sound, as well as the spatial environment in which reflections occur. Furthermore, when multiple sounds overlap, humans can selectively focus on each source, separate them, and understand the individual sound contents.

 


DeepASA processes multi-channel audio recordings in an object-oriented manner—analogous to the human binaural system—and performs almost every auditory scene analysis task, including moving sound source separation, dereverberation of direct and reflected components, sound classification, event detection, and direction-of-arrival estimation. Unlike conventional single-channel methods, DeepASA enables multi-channel separation for immersive audio such as Dolby Atmos and Ambisonics, allowing editing and remixing of spatial audio data by sound object.

 

<Example of auditory scene analysis: (Left) complex indoor acoustic scene, (Right) detected sources, events, directions, and separated results compared with ground truth>

 

The researchers demonstrated that a single model performing multiple tasks yields improved performance for each task. They further introduced a Chain of Inference approach, in which temporal coherence among separated source signals, detected classes, and directional patterns is analyzed to refine the inference results, thereby significantly improving the robustness of auditory AI systems.
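
The temporal-coherence idea can be made concrete with a small sketch. The Python snippet below (every name, shape, and the greedy matcher are illustrative assumptions for exposition, not the paper’s code) pairs each separated source with the detected event track whose frame-level activity it agrees with most strongly; low-coherence pairs are natural candidates for re-estimation in a chain-of-inference loop.

import numpy as np

def frame_envelope(wav: np.ndarray, frame: int = 512) -> np.ndarray:
    """Per-frame RMS energy envelope of a separated source signal."""
    n = len(wav) // frame
    return np.sqrt((wav[: n * frame].reshape(n, frame) ** 2).mean(axis=1))

def coherence(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-lag normalized correlation between two per-frame tracks."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def match_sources_to_events(sources, event_activity):
    """Greedily pair each separated source with the detected event track
    whose temporal activity it matches best (assumes at least as many
    event tracks as sources, all sampled on the same frame grid)."""
    pairs, used = [], set()
    for i, src in enumerate(sources):
        env = frame_envelope(src)
        score, j = max((coherence(env, act), j)
                       for j, act in enumerate(event_activity) if j not in used)
        used.add(j)
        pairs.append((i, j, score))  # (source idx, event idx, coherence)
    return pairs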

 

<DeepASA structure with Chain-of-Inference>

 

Even before the NeurIPS presentation, the research team had achieved first place in Task 4 of the DCASE Challenge 2025, the world’s most prestigious competition in acoustic detection and analysis. The task focused on “Spatial Semantic Segmentation of Sound Scenes.” At the DCASE 2025 Workshop held in October 2025, the team received the Best Student Paper Award (given to a single team) and simultaneously won the Best Judges’ Award.

 

<(Left) DCASE Challenge Task 4 introduction (Center) research team (Right) award ceremony>

 

Such advanced audio AI technology enables unprecedented capabilities for sound-based detection of hazardous or critical events. For instance, it can detect long-distance drones based solely on sound, monitor abnormal activity in border surveillance systems, or recover faint audio buried in noise. It can therefore play a critical role in national defense and security applications that require detecting potential threats from acoustic information.
In addition, by separating sound objects and extracting directional and spatial acoustic features from recorded immersive audio, DeepASA enables re-editing of complex sound fields, which is essential for next-generation AR/VR spatial audio rendering. It represents a core technology enabling complete re-synthesis and reconstruction of immersive sound scenes.
 
The DeepASA research team includes Dr. Dongheon Lee and Ph.D. student Younghoo Kwon from KAIST EE. This project was supported by the National Research Foundation of Korea (NRF, No. RS-2024-00337945), the Ministry of Science and ICT (STEAM Research Program), and the Center for Applied Research in Artificial Intelligence (CARAI) grant funded by DAPA and ADD.
Award


Professor Sanghyeon Kim’s research team in our School of Electrical Engineering received the IEEE Paul Rappaport Award on December 8, 2025, in recognition of their achievements in developing Complementary Field-Effect Transistor (CFET) technology, which is attracting attention as a next-generation transistor architecture.

 

The IEEE Paul Rappaport Award is presented to the best paper selected from among those published over the previous year in IEEE Transactions on Electron Devices (TED), a leading journal in the field of semiconductor devices. This year’s winner was selected from a total of 1,202 papers published in 2024, and the achievement marks the first time that a Korean university has received the award.

 

The awarded paper, titled “Heterogeneous 3-D Sequential CFETs With Ge (110) Nanosheet p-FETs on Si (100) Bulk n-FETs,” was led by Dr. Seong Kwang Kim (Ph.D. graduate in 2023 from the School of Electrical Engineering), and was conducted in collaboration with Professor Byung-Jin Cho’s laboratory. The study is significant in that it demonstrated a direction for overcoming the structural issue of CFETs—namely, the low performance of p-FETs—by integrating, as the upper device, a Ge channel with a (110) crystal orientation.

 

In addition, all stages of device design, fabrication, and evaluation were carried out entirely at KAIST, which attests to the high research standards and infrastructure of the School of Electrical Engineering.

 

Dr. Seong Kwang Kim stated, “Since my Ph.D., and continuing now as I work in industry, I have been continually developing three-dimensional stacked devices. There remain many challenges to overcome in order to achieve mass production of three-dimensional stacked devices, and I will continue researching and taking on these challenges to contribute to the development of semiconductor technology in Korea.”


 
News
<(From left) Dr. Byeongju Noh, PhD candidate Young-Hun Jung, Integrated MS–PhD student Minwoo Park, and Professor Kyung Cheol Choi>
Asking which OLED light color can actually improve memory and pathological markers in Alzheimer’s patients, a Korean research team has identified the OLED color that most effectively enhances cognitive function using light alone, with no drugs involved. The OLED platform developed for this study can precisely control color, brightness, flicker frequency, and exposure duration, suggesting potential future development into personalized OLED-based electroceuticals.
 

A joint research team led by Professor Kyung Cheol Choi from the School of Electrical Engineering at KAIST, together with Dr. Ja Wook Koo and Dr. Hyang Sook Hoe from the Korea Brain Research Institute (KBRI), developed a uniform-illuminance, three-color OLED photostimulation platform and confirmed that red 40-Hz light was the most effective among blue, green, and red light at improving Alzheimer’s pathology and memory function.

 

To overcome the structural limitations of conventional LEDs—such as brightness imbalance, heat generation risk, and variability caused by animal movement—the researchers developed an OLED-based photostimulation platform that emits light uniformly. Using this platform, they compared white, red, green, and blue light under identical conditions (40-Hz frequency, brightness, and exposure time) and found that red 40-Hz light produced the most significant improvement.

 

In an early-stage (3-month-old) Alzheimer’s animal model, improvement in pathology and memory was observed after only two days of stimulation. When early Alzheimer’s model mice were exposed to one hour of light per day for two days, both white and red light improved long-term memory. Additionally, the amount of amyloid-β (Aβ) plaques—protein aggregates known as a major factor in Alzheimer’s disease—was reduced in key brain regions such as the hippocampus, and levels of the plaque-clearing enzyme ADAM17 increased.

 

This indicates that even very short periods of light stimulation can reduce harmful proteins in the brain and improve memory function. In particular, with red light, the inflammatory cytokine IL-1β, known to exacerbate inflammation and contribute to Alzheimer’s progression, decreased significantly, demonstrating an anti-inflammatory effect.

 

Moreover, the more plaque was reduced, the greater the improvement in memory—direct evidence that pathological improvement leads to cognitive enhancement.

 

In the mid-stage (6-month-old) Alzheimer’s model, statistically significant pathological improvement was seen only with red light. In a two-week long-term stimulation experiment under the same conditions, both white and red light improved memory, but a statistically significant improvement in pathology was again observed only with red light.

 

< The mechanism by which red OLED stimulation of neurons reduces amyloid-β in Alzheimer’s model mice >

 

Differences at the molecular level were also clear. Under red light, levels of ADAM17 (which helps remove plaques) increased, while levels of BACE1, an enzyme responsible for producing plaques, decreased—demonstrating a dual effect of both inhibiting plaque formation and promoting plaque removal. In contrast, white light only lowered BACE1, showing more limited therapeutic effects compared to red light.

 

This demonstrates scientifically that the color of light is a key factor determining therapeutic efficacy.

 

To determine which neural circuits were activated by light stimulation, the team analyzed the expression of c-Fos, an immediate-early gene that is activated when neurons fire.

 

They found activation throughout the visual–memory circuit, extending from the visual cortex through the thalamus to the hippocampus, providing direct neurological evidence that light stimulation awakens the visual pathway, enhancing hippocampal function and memory.

 

Thanks to the uniform-illuminance OLED platform, light was evenly delivered regardless of animal movement, ensuring stable experimental results and high reproducibility across repeated tests.

 

This study is the first to demonstrate that cognitive function can be improved using only light, without drugs, and that Alzheimer’s pathological markers can be regulated through combinations of light color, frequency, and duration.

 

The OLED platform developed in this study allows fine control over color, brightness, flicker ratio, and exposure time, making it suitable for personalized stimulation design in future human clinical research.
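
To make the idea of such a personalized protocol concrete, a stimulation recipe could be captured in a small parameter record like the Python sketch below. The field names and the illuminance and duty-cycle values are illustrative assumptions, not the platform’s actual specification; the exposure schedule mirrors the one-hour-per-day, two-day condition reported above.

from dataclasses import dataclass

@dataclass(frozen=True)
class StimulationProtocol:
    color: str              # "red", "green", "blue", or "white"
    illuminance_lux: float  # target illuminance at the subject (placeholder below)
    flicker_hz: float       # e.g., 40.0 for gamma-band stimulation
    duty_cycle: float       # flicker on-ratio in [0, 1] (placeholder below)
    minutes_per_day: int    # daily exposure duration
    days: int               # total length of the regimen

# The red 40-Hz, one-hour-per-day, two-day condition expressed in this
# form; illuminance and duty cycle are hypothetical placeholders.
red_40hz = StimulationProtocol(color="red", illuminance_lux=200.0,
                               flicker_hz=40.0, duty_cycle=0.5,
                               minutes_per_day=60, days=2)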

 

The research team plans to expand conditions such as stimulation intensity, energy, duration, and combined visual–auditory stimulation, aiming toward clinical-stage development.

 

Dr. Byeongju Noh (from Professor Kyung Cheol Choi’s research team) said, “This study experimentally demonstrates the importance of color standardization and confirms that red OLED is the key color that activates ADAM17 and suppresses BACE1 across disease stages.”

 

Professor Kyung Cheol Choi emphasized, “Our uniform-illuminance OLED platform overcomes the structural limitations of traditional LEDs and enables highly reproducible and safe evaluation. We expect wearable red OLED electroceuticals for everyday use to present a new therapeutic paradigm for Alzheimer’s disease.”

 

The research findings were published online on October 25 in ACS Biomaterials Science & Engineering, a leading international journal in biomedical and materials science.

 

※ Paper Title: Color Dependence of OLED Phototherapy for Cognitive Function and Beta-Amyloid Reduction through ADAM17 and BACE1

※ DOI: https://pubs.acs.org/doi/full/10.1021/acsbiomaterials.5c01162

※ Co-authors:
 Byeongju Noh, Hyun-Ju Lee, Jiyun Lee, Jiyun Lee, Ji-Eun Lee, Bitna Joo, Young-Hun Jung, Minwoo Park, Sora Kang, Seokjun Oh, Jeong-Woo Hwang, Dae-Si Kang, Yongmin Jeon, So-Min Lee, Hyang Sook Hoe, Ja Wook Koo, Kyung Cheol Choi

This research was supported by the National Research Foundation of Korea and the National IT Industry Promotion Agency under the Ministry of Science and ICT, and the Korea Brain Research Institute Basic Research Program. (2017R1A5A1014708, 2022M3E5E9018226, H0501-25-1001, 25-BR-02-02, 25-BR-02-04)

Award
<Professor Sanghun Jeon (far right)>
Professor Sanghun Jeon of the KAIST School of Electrical Engineering has received the 2025 KCHIPS (Korea CHIPS Program for Public-Private Joint Investment in Semiconductor Talent Development) President’s Award from the Korea Evaluation Institute of Industrial Technology (KEIT).
 
The award recognizes outstanding research achievements that have contributed to strengthening Korea’s semiconductor R&D competitiveness through the KCHIPS program. Professor Jeon was selected for his leadership in the flagship project titled “Next-Generation Memory Device Development Using Hafnia (HfO₂)-Based Ferroelectrics,” which has made significant contributions to advancing Korea’s memory semiconductor technologies.
 
Professor Jeon’s research team has focused on overcoming the limitations of conventional silicon-based memory structures by studying ultra-thin hafnia ferroelectric polarization stabilization, interface engineering, and high-density array implementation technologies. Their work has produced notable progress in achieving the low-power, high-speed, and high-reliability characteristics required for future AI and server-oriented non-volatile memory, earning strong recognition from both industry and research institutions.
 
The award ceremony was held on November 12, 2025 (Wednesday) at 18:00 during the KCHIPS General Workshop at the Crystal Ballroom, 3rd Floor, Vivaldi Sonocam, Hongcheon.
News
<Prof. Yong Man Ro>

Professor Yong Man Ro has been elevated to the grade of IEEE Fellow for the Class of 2026, with the citation “for contributions to Human-Centered Multimodal Signal Processing.” Recognized for bridging the gap between human perception and machine intelligence, Prof. Ro has established foundational frameworks in multimodal human signal analysis and developed the first human-centered personalized models for quantifying Virtual Reality (VR) quality. His standing in the global signal processing community is further evidenced by his widely cited research and his influential academic service.

 

Building on this legacy of human-centered analysis, Prof. Ro is currently spearheading the future of AI through his research on Multimodal Large Language Models (MLLMs) and multimodal AI. His lab focuses on creating AI agents that realize “Inclusive Human Multimodal AI,” a vision recently validated when his team won the Outstanding Paper Award at ACL 2024, a top-tier AI conference. This research marks a leap toward empathetic Artificial General Intelligence (AGI) that can perceive human signals. Beyond his research, Prof. Ro continues to shape the field as an elected member of the Image, Video, and Multidimensional Signal Processing (IVMSP) Technical Committee of the IEEE Signal Processing Society, a member of the Editorial Board of IEEE Transactions on Image Processing (TIP), and a mentor to over 100 Ph.D. and M.S. graduates who are now leading innovation across academia and top-tier tech research institutes.

News
< Executive Director Jin Yeong Kim >

Dr. Jin Yeong Kim, a graduate of Professor Kyung Cheol Choi’s lab in our department, has been promoted to Executive Director (Vice President–level) in the latest regular personnel announcement at Samsung Display.

 

Dr. Kim earned his master’s degree in 2011 and his Ph.D. in 2014, after which he joined Samsung Display and has since served as a principal engineer in the Materials Development Team of the Small and Medium-Sized Display Business Division. He has led the development of tandem-structure materials for next-generation IT and automotive products, contributing significantly to the realization of high-reliability, high-efficiency displays and strengthening the company’s core product competitiveness. His outstanding research achievements and leadership have earned him deep trust within the organization, culminating in his promotion to Executive Director in his 30s.

 

Currently, Executive Director Kim is spearheading the advancement of high-performance tandem materials and the development of key materials for small and medium-sized displays to prepare for the next wave of IT and automotive display technologies. He is expected to continue playing a pivotal role in enhancing Samsung Display’s global technological competitiveness and driving innovation in the future display industry.

 

News
<(From left) Professor Dongsu Han, Dr. Jinwoo Park and Master’s student Seunggeun Cho>
A research team led by Professor Dongsu Han from KAIST’s School of Electrical Engineering has developed an edge-assisted inference framework that dramatically reduces large language model (LLM) service costs by utilizing affordable consumer-grade GPUs.
 

Currently, LLM inference services rely entirely on dedicated accelerators and GPUs in data centers, requiring substantial financial and infrastructure investments for large-scale language model services. While high-performance consumer-grade GPUs—more affordable than data center GPUs—have become widely available at the edge outside data centers, structural limitations of existing LLM inference architectures prevent their efficient utilization in internet environments with limited communication infrastructure.

 

The research team developed SpecEdge, an edge-assisted inference framework to address these challenges. SpecEdge reduces LLM inference costs by effectively distributing computation between consumer-grade edge GPUs and data center GPUs. The framework also adopts speculative decoding techniques to enable smooth communication between edge GPUs and data center GPUs over the internet. Speculative decoding is a technique where a relatively small language model quickly generates multiple high-probability tokens, which are then verified by a large language model. SpecEdge deploys a small model on edge GPUs to generate high-probability token sequences at once, then sends them to data center GPUs for batch verification.
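
As a concrete illustration, here is a minimal Python sketch of the draft-and-verify loop described above. The greedy toy interfaces (draft_next, target_next) are assumptions for exposition, not SpecEdge’s actual API.

from typing import Callable, List

Token = int
NextFn = Callable[[List[Token]], Token]  # context -> most likely next token

def speculative_step(ctx: List[Token], draft_next: NextFn,
                     target_next: NextFn, k: int = 4) -> List[Token]:
    """One round: the edge drafts k tokens with a small model, then the
    server verifies them and accepts the longest agreeing prefix, plus
    its own corrected token at the first disagreement."""
    # Edge GPU: draft k tokens autoregressively with the small model.
    draft: List[Token] = []
    for _ in range(k):
        draft.append(draft_next(ctx + draft))
    # Server GPU: verify the drafted positions (in practice one batched
    # forward pass of the large model over the whole drafted sequence).
    accepted: List[Token] = []
    for tok in draft:
        expected = target_next(ctx + accepted)
        if tok == expected:
            accepted.append(tok)        # draft token confirmed
        else:
            accepted.append(expected)   # server corrects; stop here
            break
    return accepted

Because a single verification round trip can commit up to k tokens, per-token network latency is amortized, which is what makes splitting the draft and verify stages across the internet workable.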

 

<SpecEdge Framework Diagram>

 

SpecEdge employs a strategy where edge GPUs continue generating tokens while waiting for verification results from the server. After initial token generation, the edge pre-generates additional tokens along the highest-probability path, allowing immediate utilization of pre-generated tokens when all verification results match. Additionally, server-side pipeline optimization intelligently batches verification requests from multiple edges to maximize server GPU utilization. While one edge GPU drafts tokens, the server verifies other requests, eliminating idle time and enabling processing of more requests.
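
A rough sketch of that edge-side overlap follows, assuming a hypothetical Future-returning verification RPC; none of these names come from the paper.

from concurrent.futures import Future
from typing import Callable, List, Tuple

Token = int

def proactive_round(ctx: List[Token],
                    draft_next: Callable[[List[Token]], Token],
                    verify_async: Callable[[List[Token], List[Token]], Future],
                    k: int = 4) -> Tuple[List[Token], List[Token]]:
    """One edge round: draft k tokens, ship them for verification, and
    keep drafting along the greedy path while the RPC is in flight."""
    # 1) Draft k tokens and send them to the server without blocking.
    draft: List[Token] = []
    for _ in range(k):
        draft.append(draft_next(ctx + draft))
    pending = verify_async(ctx, draft)
    # 2) Overlap: pre-draft the next k tokens along the
    #    highest-probability path, assuming full acceptance.
    lookahead: List[Token] = []
    for _ in range(k):
        lookahead.append(draft_next(ctx + draft + lookahead))
    # 3) Verification arrives. On a full match the pre-drafted tokens
    #    are immediately usable as the next round's draft; otherwise
    #    they are discarded and drafting restarts from the correction.
    accepted: List[Token] = pending.result()
    if accepted == draft:
        return ctx + draft, lookahead
    return ctx + accepted, []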

 

<Edge GPU Proactive Draft>
<Pipeline Batch Optimization for Server GPU>

 

This research demonstrates the potential to reduce dependence on data center GPUs by leveraging widely deployed edge GPUs. The SpecEdge framework, which can be extended to NPUs at the edge, addresses cost concerns and limited GPU availability in data centers, providing opportunities to deploy high-quality LLM services. This could lower barriers to entry in the AI service market and stimulate competition, laying the foundation for the development of Korea’s AI industry ecosystem.

 

Professor Dongsu Han stated, “We will continue research to enable the use of user edge devices as LLM infrastructure, beyond edge cloud GPUs,” adding that “utilizing user edge resources will reduce the cost burden on service providers, lower barriers to accessing high-quality LLMs, and serve as the foundation for AI for everyone.”

 

This research was conducted with Dr. Jinwoo Park and Master’s student Seunggeun Cho from KAIST. The findings will be presented as a Spotlight paper (top 3.2% of submissions) at the Annual Conference on Neural Information Processing Systems (NeurIPS), a top-tier international conference in artificial intelligence, held in San Diego, USA, from December 2–7 (Paper title: SpecEdge: Scalable Edge-Assisted Serving Framework for Interactive LLMs).
