Highlights

Youngjoon Lee, a Ph.D. candidate from Prof. Joonhyuk Kang’s laboratory at KAIST, received the Best Paper Award at the D2ET Workshop held in conjunction with IEEE BigData 2025.
The D2ET Workshop aims to address the increasing fragmentation of data across the real world—so-called “data islands”—which significantly reduces the utility of big data. To enhance data value and usability, the workshop explores new research directions in next-generation databases and is jointly organized under the A3 Foresight Program, supported by JSPS (Japan), NRF (Korea), and NSFC (China).
The awarded paper proposes a generative AI–powered federated learning plugin designed for robust learning in heterogeneous IoT environments, aligning well with the workshop’s mission of promoting data integration and effective data utilization.



&lt;Figure: (Right) detected sources, events, directions, and separated results compared with ground truth&gt;
The researchers demonstrated that a single model performing multiple tasks yields improved performance for each task. They further introduced a Chain of Inference approach, in which temporal coherence among separated source signals, detected classes, and directional patterns is analyzed to refine the inference results, thereby significantly improving the robustness of auditory AI systems.



Professor Sanghyeon Kim’s research team from the School of Electrical Engineering received the IEEE Paul Rappaport Award on December 8, 2025, in recognition of their achievements in developing Complementary Field-Effect Transistor (CFET) technology, which is gaining attention as a next-generation transistor architecture.
The IEEE Paul Rappaport Award is presented to the best paper selected from among those published over the previous year in IEEE Transactions on Electron Devices (TED), a leading journal in the field of semiconductor devices. This year’s winner was chosen from a total of 1,202 papers published in 2024, and this achievement marks the first time that a university in Korea has received the award.
The awarded paper, titled “Heterogeneous 3-D Sequential CFETs With Ge (110) Nanosheet p-FETs on Si (100) Bulk n-FETs,” was led by Dr. Seong Kwang Kim (Ph.D. graduate in 2023 from the School of Electrical Engineering), and was conducted in collaboration with Professor Byung-Jin Cho’s laboratory. The study is significant in that it demonstrated a direction for overcoming the structural issue of CFETs—namely, the low performance of p-FETs—by integrating, as the upper device, a Ge channel with a (110) crystal orientation.
In addition, all stages of device design, fabrication, and evaluation were carried out entirely at KAIST, which reflects the high research standards and infrastructure of the School of Electrical Engineering.
Dr. Seong Kwang Kim stated, “Since my Ph.D., and continuing now as I work in industry, I have been continually developing three-dimensional stacked devices. There remain many challenges to overcome in order to achieve mass production of three-dimensional stacked devices, and I will continue researching and taking on these challenges to contribute to the development of semiconductor technology in Korea.”
Related Link

A joint research team led by Professor Kyung Cheol Choi from the School of Electrical Engineering at KAIST and Dr. Ja Wook Koo and Dr. Hyang Sook Hoe from the Korea Brain Research Institute (KBRI) developed a uniform-illuminance, three-color OLED photostimulation platform and confirmed that red 40-Hz light was the most effective among blue, green, and red lights in improving Alzheimer’s pathology and memory function.
To overcome the structural limitations of conventional LEDs—such as brightness imbalance, heat generation risk, and variability caused by animal movement—the researchers developed an OLED-based photostimulation platform that emits light uniformly. Using this platform, they compared white, red, green, and blue light under identical conditions (40-Hz frequency, brightness, and exposure time) and found that red 40-Hz light produced the most significant improvement.
In an early-stage (3-month-old) Alzheimer’s animal model, improvement in pathology and memory was observed after only two days of stimulation. When early Alzheimer’s model mice were exposed to one hour of light per day for two days, both white and red light improved long-term memory. Additionally, the amount of amyloid-β (Aβ) plaques—protein aggregates known as a major factor in Alzheimer’s disease—was reduced in key brain regions such as the hippocampus, and levels of the plaque-clearing enzyme ADAM17 increased.
This indicates that even very short periods of light stimulation can reduce harmful proteins in the brain and improve memory function. In particular, with red light, the inflammatory cytokine IL-1β, known to exacerbate inflammation and contribute to Alzheimer’s progression, decreased significantly, demonstrating an anti-inflammatory effect.
Moreover, the more plaque was reduced, the greater the improvement in memory—direct evidence that pathological improvement leads to cognitive enhancement.
In the mid-stage (6-month-old) Alzheimer’s model, statistically significant pathological improvement was seen only with red light. In a two-week long-term stimulation experiment under the same conditions, both white and red light improved memory, but only red light produced a statistically meaningful improvement.

Differences at the molecular level were also clear. Under red light, levels of ADAM17 (which helps remove plaques) increased, while levels of BACE1, an enzyme responsible for producing plaques, decreased—demonstrating a dual effect of both inhibiting plaque formation and promoting plaque removal. In contrast, white light only lowered BACE1, showing more limited therapeutic effects compared to red light.
These findings scientifically establish that the color of light is a key factor determining therapeutic efficacy.
To determine which neural circuits were activated by light stimulation, the team analyzed the expression of c-Fos, an immediate-early gene that is activated when neurons fire.
They found activation throughout the visual–memory circuit, extending from the visual cortex → thalamus → hippocampus, providing direct neurological evidence that light stimulation awakens the visual pathway, enhancing hippocampal function and memory.
Thanks to the uniform-illuminance OLED platform, light was evenly delivered regardless of animal movement, ensuring stable experimental results and high reproducibility across repeated tests.
This study is the first to demonstrate that cognitive function can be improved using only light, without drugs, and that Alzheimer’s pathological markers can be regulated through combinations of light color, frequency, and duration.
The OLED platform developed in this study allows fine control over color, brightness, flicker ratio, and exposure time, making it suitable for personalized stimulation design in future human clinical research.
The research team plans to expand conditions such as stimulation intensity, energy, duration, and combined visual–auditory stimulation, aiming toward clinical-stage development.
Dr. Byeongju Noh (from Professor Kyung Cheol Choi’s research team) said, “This study experimentally demonstrates the importance of color standardization and confirms that red OLED is the key color that activates ADAM17 and suppresses BACE1 across disease stages.”
Professor Kyung Cheol Choi emphasized, “Our uniform-illuminance OLED platform overcomes the structural limitations of traditional LEDs and enables high reproducibility and safe evaluation. We expect wearable red OLED electroceuticals for everyday use to present a new therapeutic paradigm for Alzheimer’s disease.”
The research findings were published online on October 25 in ACS Biomaterials Science & Engineering, a leading international journal in biomedical and materials science.
※ Paper Title: Color Dependence of OLED Phototherapy for Cognitive Function and Beta-Amyloid Reduction through ADAM17 and BACE1
※ DOI: https://pubs.acs.org/doi/full/10.1021/acsbiomaterials.5c01162
※ Co-authors:
Byeongju Noh, Hyun-Ju Lee, Jiyun Lee, Jiyun Lee, Ji-Eun Lee, Bitna Joo, Young-Hun Jung, Minwoo Park, Sora Kang, Seokjun Oh, Jeong-Woo Hwang, Dae-Si Kang, Yongmin Jeon, So-Min Lee, Hyang Sook Hoe, Ja Wook Koo, Kyung Cheol Choi
This research was supported by the National Research Foundation of Korea and the National IT Industry Promotion Agency under the Ministry of Science and ICT, and the Korea Brain Research Institute Basic Research Program. (2017R1A5A1014708, 2022M3E5E9018226, H0501-25-1001, 25-BR-02-02, 25-BR-02-04)


Professor Yong Man Ro has been elevated to the grade of IEEE Fellow for the Class of 2026, with the citation “for contributions to Human-Centered Multimodal Signal Processing.” Recognized for bridging the gap between human perception and machine intelligence, Prof. Ro has established foundational frameworks in multimodal human signal analysis and developed the first human-centered personalized models for quantifying Virtual Reality (VR) quality. His authority in the global signal processing community is further evidenced by his widely cited research and his influential academic standing.
Building on this legacy of human-centric analysis, Prof. Ro is currently spearheading the future of AI through his research on Multimodal Large Language Models (MLLM) and Multimodal AI. His lab focuses on creating AI agents capable of “Inclusive Human Multimodal AI,” a vision recently validated by an Outstanding Paper Award at ACL 2024, a top-tier AI conference. This research marks a leap toward empathetic Artificial General Intelligence (AGI) that can perceive human signals. Beyond his research, Prof. Ro continues to shape the field as an elected member of the Image, Video, and Multidimensional Signal Processing (IVMSP) Technical Committee of the IEEE Signal Processing Society, a member of the Editorial Board for IEEE Transactions on Image Processing (TIP), and as a mentor to over 100 Ph.D. and M.S. graduates who are now leading innovation across academia and top-tier tech research institutes.

Dr. Jin Yeong Kim, a graduate of Professor Kyung Cheol Choi’s lab in our department, has been promoted to Executive Director (Vice President–level) in the latest regular personnel announcement at Samsung Display.
Dr. Kim earned his master’s degree in 2011 and his Ph.D. in 2014, after which he joined Samsung Display and has since served as a principal engineer in the Materials Development Team of the Small and Medium-Sized Display Business Division. He has led the development of Tandem structure materials for next-generation IT and automotive products, significantly contributing to the realization of high-reliability and high-efficiency displays and strengthening the company’s core product competitiveness. His outstanding research achievements and leadership have earned him deep trust within the organization, culminating in his promotion to Executive Director in his 30s.
Currently, Executive Director Kim is spearheading the advancement of high-performance tandem materials and the development of key materials for small and medium-sized displays to prepare for the next wave of IT and automotive display technologies. He is expected to continue playing a pivotal role in enhancing Samsung Display’s global technological competitiveness and driving innovation in the future display industry.

Currently, LLM inference services rely entirely on dedicated accelerators and GPUs in data centers, requiring substantial financial and infrastructure investments for large-scale language model services. While high-performance consumer-grade GPUs—more affordable than data center GPUs—have become widely available at the edge outside data centers, structural limitations of existing LLM inference architectures prevent their efficient utilization in internet environments with limited communication infrastructure.
The research team developed SpecEdge, an edge-assisted inference framework to address these challenges. SpecEdge reduces LLM inference costs by effectively distributing computation between consumer-grade edge GPUs and data center GPUs. The framework also adopts speculative decoding techniques to enable smooth communication between edge GPUs and data center GPUs over the internet. Speculative decoding is a technique where a relatively small language model quickly generates multiple high-probability tokens, which are then verified by a large language model. SpecEdge deploys a small model on edge GPUs to generate high-probability token sequences at once, then sends them to data center GPUs for batch verification.
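The draft-then-verify loop described above can be sketched in a few lines of toy Python. The `draft_model`, `target_model_next`, and `speculative_step` functions below are illustrative stand-ins using simple arithmetic rules, not SpecEdge’s actual models or API; in the real system the small draft model runs on an edge GPU and batch verification runs on a data-center GPU.

```python
def draft_model(prefix, k=4):
    # Stand-in for the small edge model: quickly propose k likely next tokens.
    return [(prefix[-1] + 1 + i) % 100 for i in range(k)]

def target_model_next(prefix):
    # Stand-in for the large data-center model: the authoritative next token.
    return (prefix[-1] + 1) % 100

def speculative_step(prefix, k=4):
    """One round of speculative decoding: draft k tokens, then verify them.

    The target model checks each drafted token against what it would have
    produced itself; verified tokens are kept, and the first mismatch is
    replaced by the target model's own token, ending the round."""
    drafts = draft_model(prefix, k)
    accepted = []
    for t in drafts:
        expected = target_model_next(prefix + accepted)
        if t == expected:
            accepted.append(t)          # draft verified, keep it
        else:
            accepted.append(expected)   # mismatch: take target's token, stop
            break
    return accepted

tokens = [0]
for _ in range(3):
    tokens += speculative_step(tokens)
print(tokens)  # up to k tokens are committed per verification round
```

When the draft model agrees with the target model, each round commits k tokens for the cost of one batched verification, which is what lets the edge GPU hide the internet round-trip latency to the data center.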

SpecEdge employs a strategy where edge GPUs continue generating tokens while waiting for verification results from the server. After initial token generation, the edge pre-generates additional tokens along the highest-probability path, allowing immediate utilization of pre-generated tokens when all verification results match. Additionally, server-side pipeline optimization intelligently batches verification requests from multiple edges to maximize server GPU utilization. While one edge GPU drafts tokens, the server verifies other requests, eliminating idle time and enabling processing of more requests.


This research demonstrates the potential to reduce dependence on data center GPUs by leveraging widely deployed edge GPUs. The SpecEdge framework, which can be extended to NPUs at the edge, addresses cost concerns and limited GPU availability in data centers, providing opportunities to deploy high-quality LLM services. This could lower barriers to entry in the AI service market and stimulate competition, laying the foundation for the development of Korea’s AI industry ecosystem.
Professor Dongsu Han stated, “We will continue research to enable the use of user edge devices as LLM infrastructure, beyond edge cloud GPUs,” adding that “utilizing user edge resources will reduce the cost burden on service providers, lower barriers to accessing high-quality LLMs, and serve as the foundation for AI for everyone.”
This research was conducted with Dr. Jinwoo Park and Master’s student Seunggeun Cho from KAIST. The findings will be presented as a Spotlight paper (top 3.2% of submissions) at the Annual Conference on Neural Information Processing Systems (NeurIPS), a top-tier international conference in artificial intelligence, held in San Diego, USA, from December 2–7 (Paper title: SpecEdge: Scalable Edge-Assisted Serving Framework for Interactive LLMs).