Highlights

Organic light-emitting diodes (OLEDs) are widely used in smartphones and TVs thanks to their excellent color reproduction and thin, flexible planar structure. However, internal light loss has limited further improvements in brightness. KAIST EE researchers have now developed a technology that more than doubles OLED light-emission efficiency while maintaining the flat structure that is a key advantage of OLED displays.
The research team led by Professor Seunghyup Yoo of the School of Electrical Engineering has developed a new near-planar light outcoupling structure* and an OLED design method that can significantly reduce light loss inside OLED devices.
* Near-planar light outcoupling structure: a thin structure that keeps the OLED surface almost flat while extracting more of the light generated inside to the outside.
OLEDs are composed of multiple layers of ultrathin organic films stacked on top of one another. As light passes through these layers, it is repeatedly reflected or absorbed, often causing more than 80% of the light generated inside the OLED to be lost as heat before it can escape.
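The magnitude of this loss follows from simple ray optics: for a planar emitter embedded in a medium of refractive index n, only rays inside the escape cone reach the air, giving the textbook estimate of roughly 1/(2n²) for the extracted fraction. A quick sanity check in Python (the refractive-index values are typical assumptions, not figures from the paper):

```python
# Classical outcoupling estimate for a planar OLED stack.
# Only rays inside the escape cone (theta < arcsin(1/n)) leave the device;
# the textbook approximation for the escaping fraction is 1 / (2 n^2).

def outcoupling_efficiency(n: float) -> float:
    """Approximate fraction of internally generated light extracted to air."""
    return 1.0 / (2.0 * n ** 2)

for n in (1.7, 1.8, 1.9):  # typical organic-layer refractive indices (assumed)
    eta = outcoupling_efficiency(n)
    print(f"n = {n}: extracted ~{eta:.1%}, lost ~{1 - eta:.1%}")
```

With n ≈ 1.8, only about 15% of the generated light escapes, consistent with the more-than-80% loss figure above.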
To address this issue, light outcoupling structures such as hemispherical lenses or microlens arrays (MLAs) have been used to extract light from OLEDs. However, hemispherical lenses protrude significantly, making it difficult to maintain a flat form factor, while MLAs must cover a much larger area than individual pixels to achieve sufficient light extraction. This makes it difficult to achieve high efficiency without interference between neighboring pixels.
To increase OLED brightness while preserving a planar structure, the research team proposed a new OLED design strategy that maximizes light extraction within the size of each individual pixel.
Unlike conventional designs that assume OLEDs extend infinitely, this approach takes into account the finite pixel sizes actually used in real displays. As a result, more light can be emitted externally even from pixels of the same size.
In addition, the team developed a new near-planar light outcoupling structure that helps light emerge efficiently in the forward direction without being spread too widely. This structure is very thin, comparable in thickness to existing microlens arrays, yet achieves light extraction efficiency close to that of a hemispherical lens of the same lateral dimension. As a result, it barely compromises the flat form factor of OLEDs and can be readily applied to flexible OLED displays.
By combining the new OLED design with the near-planar light outcoupling structure, the researchers successfully achieved more than a twofold improvement in light-emission efficiency even in small pixels.

This technology enables brighter displays using the same power while maintaining OLED’s flat structure, and is expected to extend battery life and reduce heat generation in mobile devices such as smartphones and tablets. Improvements in display lifespan are also anticipated.
MinJae Kim, the first author of the study, noted, “A small idea that came up during class was developed into real research results through the KAIST Undergraduate Research Program (URP).”
Professor Seunghyup Yoo stated, “Although many light outcoupling structures have been proposed, most were designed for large-area lighting applications, and many were difficult to apply effectively to displays composed of numerous small pixels,” adding, “The near-planar light outcoupling structure proposed in this work was designed with constraints on the size of the light source within each pixel, reducing optical interference between adjacent pixels while maximizing efficiency.” He further emphasized that the approach can be applied not only to OLEDs but also to next-generation display technologies based on materials such as perovskites and quantum dots.

This research, with MinJae Kim (Department of Materials Science and Engineering, KAIST; currently a Ph.D. student in Materials Science and Engineering at Stanford University) and Junho Kim (School of Electrical Engineering, KAIST; currently a postdoctoral researcher at the University of Cologne, Germany) as co–first authors, was published online on December 29, 2025, in Nature Communications.
※ Paper title: Near-planar light outcoupling structures with finite lateral dimensions for ultra-efficient and optical crosstalk-free OLED displays
This research was supported by the KAIST Undergraduate Research Program (URP), the Mid-Career Researcher Program and the Future Display Strategic Research Lab Program of the National Research Foundation (NRF) of Korea, the Human Resource Development Program of the Korea Institute for Advancement of Technology (KIAT), and the Korea Planning & Evaluation Institute of Industrial Technology (KEIT).

With the rapid advancement of artificial intelligence (AI), the importance of ultra-low-power semiconductor technologies that integrate sensing, computation, and memory into a single platform is growing. However, conventional architectures suffer from power loss and latency caused by data movement, as well as inherent limitations in memory reliability. Addressing these challenges, researchers from the School of Electrical Engineering have presented core technologies for sensor–compute–memory integrated AI semiconductors, drawing significant attention from the international research community.
Professor Sanghun Jeon’s research team presented a total of six papers at the IEEE International Electron Devices Meeting (IEDM 2025), the world’s most prestigious conference in the field of semiconductor devices, held in San Francisco, USA, from December 8 to 10. Among these, the team’s work was simultaneously selected as a Highlight Paper and a Top Ranked Student Paper.
In particular, this is considered a highly significant academic accomplishment: a single research laboratory presented six silicon-based semiconductor device papers at IEEE IEDM, a conference known for its low acceptance rate and rigorous academic and industrial evaluation standards.
Highlight Paper: Monolithically Integrated Photodiode–Spiking Circuit for Neuromorphic Vision with In-Sensor Feature Extraction
Top Ranked Student Paper: A Highly Reliable Ferroelectric NAND Cell with Ultra-thin IGZO Charge Trap Layer; Trap Profile Engineering for Endurance and Retention Improvement
The M3D-integrated neuromorphic vision sensor, selected as a Highlight Paper, is in effect a chip that stacks the functions of the human eye and brain. Simply put, the sensors that detect light and the circuits that process signals like a brain are fabricated as very thin layers and stacked vertically in one chip, implementing a structure in which the processes of 'seeing' and 'judging' occur simultaneously.
Through this, the research team completed the world's first "In-Sensor Spiking Convolution" platform, in which AI computation that "sees and judges at the same time" takes place directly within the camera sensor.


Previously, this kind of processing required several stages: capturing an image (sensor), converting it to digital (ADC), storing it in memory (DRAM), and then computing (CNN). The new technology eliminates this unnecessary data movement because the computation happens immediately within the sensor. As a result, it has become possible to implement real-time, ultra-low-power Edge AI with significantly reduced power consumption and dramatically improved response speeds.
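The contrast between the two pipelines can be caricatured in a few lines of Python. In the toy sketch below, a 3×3 kernel is applied directly to a photocurrent map and each pixel emits a binary spike only when its response crosses a threshold, so only sparse events, rather than full digitized frames, would need to leave the sensor. The kernel, threshold, and input values are all illustrative and are not the paper's circuit:

```python
# Toy sketch of in-sensor spiking convolution: a 3x3 edge kernel is applied
# directly to a "photocurrent" map, and a pixel emits a spike (1) only when
# its convolved response crosses a threshold -- so only sparse spike events,
# not full frames, need to leave the sensor. Purely illustrative values.

KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]  # simple edge-detection kernel (illustrative)
THRESHOLD = 4.0          # spiking threshold (illustrative)

def spiking_convolution(photocurrents):
    """Convolve the photocurrent map and threshold it into binary spikes."""
    h, w = len(photocurrents), len(photocurrents[0])
    spikes = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            response = sum(KERNEL[dy][dx] * photocurrents[y + dy - 1][x + dx - 1]
                           for dy in range(3) for dx in range(3))
            spikes[y][x] = 1 if response > THRESHOLD else 0
    return spikes

# A bright cross on a dark background: spikes mark its high-contrast tips.
frame = [[0, 0, 0, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0]]
print(spiking_convolution(frame))
```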
Based on this approach, the research team presented six core technologies at the conference covering all layers of AI semiconductors, from input to storage. These span neuromorphic semiconductors that operate like the brain while consuming far less electricity and remaining compatible with existing semiconductor processes, as well as next-generation memory optimized for AI.
First, on the sensor side, they designed the system so that judgment occurs at the sensor stage rather than having separate components for capturing images and calculating. Consequently, power consumption decreased and response speeds increased compared to the conventional method of taking a photo and sending it to another chip for calculation.


Furthermore, in the field of memory, they implemented a next-generation NAND flash that uses the same materials but operates at lower voltages, lasts longer, and can store data stably even when the power is turned off. Through this, they presented a foundational technology that satisfies the requirements for high-capacity, high-reliability, and low-power memory necessary for AI.


Professor Sanghun Jeon, who led the research, stated, “This research is significant in that it demonstrates that the entire hierarchy can be integrated into a single material and process system, moving away from the existing AI semiconductor structure where sensing, computation, and storage were designed separately.” He added, “Moving forward, we plan to expand this into a next-generation AI semiconductor platform that encompasses everything from ultra-low-power Edge AI to large-scale AI memory.”
Meanwhile, this research was conducted with support from basic research projects of the Ministry of Science and ICT and the National Research Foundation of Korea, as well as the Center for Heterogeneous Integration of Extreme-scale & Property Semiconductors (CH³IPS). It was carried out in collaboration with Samsung Electronics, Kyungpook National University, and Hanyang University.

No matter how much data they learn, why do Artificial Intelligence (AI) models often miss the mark on human intent? Conventional “comparison learning,” designed to help AI understand human preferences, has frequently led to confusion rather than clarity. A KAIST research team has now presented a new learning solution that allows AI to accurately learn human preferences even with limited data by assigning it a “private tutor.”
A research team led by Professor Junmo Kim developed “TVKD” (Teacher Value-based Knowledge Distillation), a reinforcement learning framework that significantly improves data efficiency and learning stability while effectively reflecting human preferences.
Existing AI training methods typically rely on collecting massive amounts of “preference comparison” data—simple structures like “A is better than B.” However, this approach requires vast datasets and often causes the AI to become confused in ambiguous situations where the distinction is unclear.
To solve this problem, the research team proposed a method in which a ‘Teacher model’ that has first deeply understood human preferences delivers only the core information to a ‘Student model.’ This can be compared to a private tutor who organizes and teaches complex content, and the research team named this ‘Preference Distillation.’
The key feature of this technology is that, instead of simply imitating 'good or bad' labels, the teacher model learns a 'Value Function' that numerically judges how valuable each situation is, and then delivers this to the student model. This allows the AI, even in ambiguous situations, to learn by making comprehensive judgments about why one choice is better, rather than relying on fragmentary comparisons.

The core of this technology is twofold. First, by distilling value judgments that consider the entire context into the student model, the student learns to understand the overall flow rather than fragmentary answers. Second, a technique was introduced to adjust each example's importance according to the reliability of the preference data: clear data is weighted heavily, while the influence of ambiguous or noisy data is reduced, allowing the AI to learn stably even in realistic environments.
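The two ingredients can be summarized as a loss function: the student regresses onto the teacher's value estimates, and each example's contribution is scaled by a confidence weight so that noisy preference pairs count less. The sketch below is a toy illustration of this idea; the function names and numbers are invented, and it is not the actual TVKD objective:

```python
# Toy sketch of confidence-weighted value distillation: the student regresses
# onto the teacher's value estimates, and each training example is weighted by
# how reliable its underlying preference label is (clear pairs count more than
# ambiguous ones). Illustrative only -- not the paper's TVKD loss.

def distillation_loss(teacher_values, student_values, confidences):
    """Weighted squared error between teacher and student value estimates."""
    assert len(teacher_values) == len(student_values) == len(confidences)
    total = sum(c * (t - s) ** 2
                for t, s, c in zip(teacher_values, student_values, confidences))
    return total / sum(confidences)

teacher = [0.9, 0.2, 0.5]   # teacher's value for each candidate response (toy)
student = [0.4, 0.3, 0.5]   # student's current value estimates (toy)
clear   = [1.0, 1.0, 1.0]   # every preference label trusted equally
noisy   = [0.1, 1.0, 1.0]   # first label judged unreliable -> down-weighted

# Down-weighting the unreliable example shrinks its large error's influence.
print(distillation_loss(teacher, student, clear))
print(distillation_loss(teacher, student, noisy))
```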
When the research team applied this technology to various AI models, it delivered more accurate and stable performance than previous state-of-the-art methods, consistently outperforming them on major benchmarks such as MT-Bench and AlpacaEval.
Professor Junmo Kim said, “In reality, human preference data is not always sufficient or perfect,” and added, “This technology will allow AI to learn consistently even under such constraints, so it will be highly practical in various fields.”

The proposed TVKD framework records generally higher scores than existing methods.
Ph.D. candidate Minchan Kwon participated as the first author, and the research results were accepted at ‘NeurIPS 2025’, the most prestigious international conference in the field of artificial intelligence. The research was presented at a poster session on December 3, 2025 (US Pacific Time).
※ Paper Title: Preference Distillation via Value based Reinforcement Learning
DOI: https://doi.org/10.48550/arXiv.2509.16965
Meanwhile, this research was carried out with support from the Information & Communications Technology Planning & Evaluation (IITP) funded by the government (Ministry of Science and ICT) in 2024 (No. RS-2024-00439020, Development of Sustainable Real-time Multimodal Interactive Generative AI, SW Star Lab).

Professor Myoungsoo Jung of our department has been selected as the first recipient of the Korea Science and Technology Award in 2026 (January).
The Korea Science and Technology Award is presented monthly by the Ministry of Science and ICT to one researcher who has made significant contributions to the advancement of science and technology over the past three years.
Professor Jung was recognized for his work on modular AI data center architectures based on link and memory technologies. His research addresses the limitations of fixed compute and memory configurations in large-scale AI systems by enabling flexible disaggregation and composition of resources using the CXL interconnect standard, improving both cost efficiency and operational efficiency.
He also proposed system architectures that integrate accelerator-centric interconnect technologies such as UALink and NVLink, along with high-bandwidth memory (HBM), into the modular AI data center designs. These designs were documented in technical reports and have received broad attention from both academia and industry.
Professor Jung is the founder of Panmnesia, a KAIST faculty startup, and a member of the ISCA Hall of Fame. Recently, he has been leading the development of a PCIe 6.4/CXL 3.2-based fabric switch, with sample chips currently being evaluated by partner organizations.
This award formally recognizes Professor Jung’s research contributions to the development of next-generation AI infrastructure technologies.


Professor Joo-Young Kim of the School of Electrical Engineering (EE) at KAIST has been selected as a 2026 Incoming Member of the Young Korean Academy of Science and Technology (Y-KAST) under the Korean Academy of Science and Technology (KAST).
Y-KAST is an academy composed of outstanding young scientists under the age of 43, selected based on their exceptional academic achievements.
In particular, the selection process places strong emphasis on research accomplishments achieved as an independent researcher in Korea after earning a doctoral degree, identifying promising next-generation leaders with strong potential to contribute to the advancement of science and technology in Korea.
Professor Kim has been recognized for his pioneering contributions to the field of AI semiconductor systems and architectures, including world-first achievements in AI accelerators and Processing-In-Memory (PIM) semiconductors.
More recently, he has expanded the industrial impact of his research through the development of LPU-based AI semiconductors optimized for large language model (LLM) inference, earning him selection as a member of the Engineering Division of Y-KAST.
With Professor Kim’s selection, KAIST EE now includes six active Y-KAST members: Professors Joo-Young Kim, Steven Euijong Whang, Minsoo Rhu, HyunJoo J. Lee, Min Seok Jang, and Junil Choi. Two former members, Professors Joonwoo Bae and Changho Suh, have completed their terms.
This achievement further underscores KAIST EE’s strong presence and leadership within Korea’s next-generation scientific community.
Established in February 2017, Y-KAST is the only young academy in Korea composed of outstanding scientists under the age of 45. Its members actively engage in science and technology policy initiatives and international collaborations, contributing to the global advancement of science and engineering.

Most major commercial Large Language Models (LLMs), such as Google’s Gemini, utilize a Mixture-of-Experts (MoE) structure. This architecture enhances efficiency by dynamically selecting and using multiple “small AI models (Expert AIs)” depending on input queries. However, the EE research team has revealed for the first time in the world that this very structure can actually become a new security threat.
A joint research team led by Professor Seungwon Shin (School of Electrical Engineering) and Professor Sooel Son (School of Computing) has identified an attack technique that can seriously compromise the safety of LLMs by exploiting the MoE structure. For this research, they received the Distinguished Paper Award at ACSAC 2025, one of the most prestigious international conferences in the field of information security.
ACSAC (Annual Computer Security Applications Conference) is among the most influential international academic conferences in security. This year, only two papers out of all submissions were selected as Distinguished Papers. It is highly unusual for a Korean research team to achieve such a feat in the field of AI security.
In this study, the team systematically analyzed the fundamental security vulnerabilities of the MoE structure. In particular, they demonstrated that even if an attacker does not have direct access to the internal structure of a commercial LLM, the entire model can be induced to generate dangerous responses if just one maliciously manipulated “Expert Model” is distributed through open-source channels and integrated into the system.

To put it simply: even if there is only one “malicious expert” mixed among normal AI experts, that specific expert may be repeatedly selected for processing harmful queries, causing the overall safety of the AI to collapse. A particularly dangerous factor highlighted was that this process causes almost no degradation in model performance, making the problem extremely difficult to detect in advance.
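The failure mode can be illustrated with a toy top-1 router: each expert has a gating vector, the highest-scoring expert processes the input, and a single poisoned expert whose gate is tuned to a "harmful intent" feature captures exactly those queries while benign traffic, and hence benchmark performance, is untouched. Everything below (the feature layout, expert names, and weights) is a deliberately simplified invention, not the paper's MoEvil construction:

```python
# Toy top-1 MoE router: each expert has a gating vector; the expert whose
# gate scores the input highest processes it. A single poisoned expert whose
# gate spikes on a "harmful intent" feature captures exactly those queries,
# while benign inputs still route to normal experts -- so overall benchmark
# performance barely changes. Simplified illustration, not the real attack.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Input features: [general_topic, code_topic, harmful_intent] (illustrative).
experts = {
    "expert_general":  [1.0, 0.2, 0.0],
    "expert_code":     [0.2, 1.0, 0.0],
    "expert_poisoned": [0.1, 0.1, 5.0],  # gate tuned to the harmful feature
}

def route(x):
    """Return the name of the top-1 expert for input features x."""
    return max(experts, key=lambda name: dot(experts[name], x))

print(route([0.9, 0.1, 0.0]))  # benign general query -> normal expert
print(route([0.1, 0.9, 0.0]))  # benign coding query  -> normal expert
print(route([0.5, 0.5, 0.4]))  # harmful query -> poisoned expert captures it
```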
Experimental results showed that the attack technique proposed by the research team could increase the harmful response rate from 0% to up to 80%. They confirmed that the safety of the entire model significantly deteriorates even if only one out of many experts is “infected.”
This research is highly significant in that it identifies, for the first time, a new security threat in the rapidly expanding open-source-based LLM development ecosystem. At the same time, it suggests that verifying the source and safety of individual expert models, not just their performance, is now essential during AI model development.
Professors Seungwon Shin and Sooel Son stated, “Through this study, we have empirically confirmed that the MoE structure, which is spreading rapidly for the sake of efficiency, can become a new security threat. This award is a meaningful achievement that recognizes the importance of AI security on an international level.”
The study involved Ph.D. candidates Jaehan Kim and Mingyoo Song, Dr. Seung Ho Na (currently at Samsung Electronics), Professor Seungwon Shin, and Professor Sooel Son. The results were presented at ACSAC in Hawaii, USA, on December 12, 2025.

Paper Title: MoEvil: Poisoning Experts to Compromise the Safety of Mixture-of-Experts LLMs
GitHub (Open Source): https://github.com/jaehanwork/MoEvil
This research was supported by the Korea Internet & Security Agency (KISA) and the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Ministry of Science and ICT.


Youngjoon Lee, a Ph.D. candidate from Prof. Joonhyuk Kang’s laboratory at KAIST, received the Best Paper Award at the D2ET Workshop held in conjunction with IEEE BigData 2025.
The D2ET Workshop aims to address the increasing fragmentation of data across the real world into so-called “data islands,” which significantly reduces the utility of big data. To enhance data value and usability, the workshop explores new research directions in next-generation databases and is jointly organized under the A3 Foresight Program, supported by JSPS (Japan), NRF (Korea), and NSFC (China).
The awarded paper proposes a generative AI–powered federated learning plugin designed for robust learning in heterogeneous IoT environments, aligning well with the workshop’s mission of promoting data integration and effective data utilization.
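For context, federated learning keeps raw data on each device and shares only model updates, which a server aggregates; the classical aggregation step is weighted averaging (FedAvg). Below is a minimal sketch of that generic step, not of the awarded plugin itself:

```python
# Minimal FedAvg aggregation: each client trains locally and uploads its
# weights; the server averages them, weighted by each client's number of
# local samples. Generic textbook sketch, not the awarded plugin.

def fedavg(client_weights, client_sizes):
    """Average per-client weight vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # local model weights (toy)
sizes = [10, 10, 20]                             # local dataset sizes (toy)
print(fedavg(clients, sizes))  # prints [3.5, 4.5]
```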