We strive to be the world’s top IT powerhouse. Our mission is to lead innovations in information technology, create lasting impact, and educate next-generation leaders of the world.
AI and machine learning are a key thrust in EE research. AI/machine learning efforts are already a big part of ongoing research in all six divisions of KAIST EE: Computer, Communication, Signal, Wave, Circuit, and Device.
Highlights
- Professor Minsoo Rhu’s Team Develops a Simulation Framework Called vTrain
- Professor Jun-Bo Yoon’s Team Achieves Human-Level Tactile Sensing with Breakthrough Pressure Sensor
- Professor Seungwon Shin’s Team Validates Cyber Risks of LLMs
- Wearable Carbon Dioxide Sensor
- Joint Team Develops Neuromorphic Semiconductor Chip
- High-Efficiency Avalanche Quantum Dots
- AI That Imagines and Understands
- Microelectrodes Array
- Hafnia-Based Ferroelectric Memory Technology


Large AI models such as ChatGPT and DeepSeek are gaining attention as they’re being applied across diverse fields. These large language models (LLMs) require training on massive distributed systems composed of tens of thousands of data center GPUs. For example, the cost of training GPT-4 is estimated at approximately 140 billion won. A team of Korean researchers has developed a technology that optimizes parallelization configurations to increase GPU efficiency and significantly reduce training costs.
An EE research team led by Professor Minsoo Rhu, in collaboration with the Samsung Advanced Institute of Technology (SAIT), has developed a simulation framework called vTrain, which accurately predicts and optimizes the training time of LLMs in large-scale distributed environments.
To efficiently train LLMs, it’s crucial to identify the optimal distributed training strategy. However, the vast number of potential strategies makes real-world testing prohibitively expensive and time-consuming. As a result, companies currently rely on a limited number of empirically validated strategies, causing inefficient GPU utilization and unnecessary increases in training costs. The absence of suitable large-scale simulation technology has significantly hindered companies from effectively addressing this issue.
To overcome this limitation, Professor Rhu’s team developed vTrain, which can accurately predict training time and quickly evaluate various parallelization strategies. In experiments conducted in multi-GPU environments, vTrain’s predictions were compared against actual measured training times, yielding a mean absolute percentage error (MAPE) of 8.37% on single-node systems and 14.73% on multi-node systems.
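To illustrate the kind of decision vTrain automates, the sketch below enumerates candidate tensor-, pipeline-, and data-parallel degrees for a fixed GPU budget and keeps the configuration with the lowest predicted step time. The cost model and function names here are hypothetical placeholders for illustration only; they are not vTrain’s profiling-based predictor or its API (see https://github.com/VIA-Research/vTrain for the actual framework).

```python
# A minimal, self-contained sketch of the kind of search vTrain automates:
# enumerate (tensor, pipeline, data)-parallel degrees for a fixed GPU budget and
# keep the configuration with the lowest predicted iteration time.
# The cost model below is a hypothetical placeholder, NOT vTrain's
# profiling-based predictor; see https://github.com/VIA-Research/vTrain.
from itertools import product

def predict_step_time(tp: int, pp: int, dp: int) -> float:
    """Toy cost model (arbitrary units): compute shrinks with parallelism, while
    tensor-parallel all-reduces, pipeline bubbles, and data-parallel gradient
    synchronization add overhead."""
    compute = 1000.0 / (tp * pp * dp)      # ideal scaling of per-GPU compute
    tp_comm = 15.0 * (tp - 1)              # intra-layer all-reduce cost
    pp_bubble = 8.0 * (pp - 1)             # pipeline fill/drain bubbles
    dp_sync = 5.0 * (dp - 1) / dp          # gradient all-reduce cost
    return compute + tp_comm + pp_bubble + dp_sync

def best_config(num_gpus: int, min_model_parallel: int = 8):
    """Return (time, tp, pp, dp) minimizing the toy cost model, assuming the
    model only fits in GPU memory with at least `min_model_parallel`-way
    model parallelism (an illustrative constraint)."""
    best = None
    for tp, pp in product([1, 2, 4, 8], repeat=2):
        if num_gpus % (tp * pp) or tp * pp < min_model_parallel:
            continue
        dp = num_gpus // (tp * pp)
        t = predict_step_time(tp, pp, dp)
        if best is None or t < best[0]:
            best = (t, tp, pp, dp)
    return best

if __name__ == "__main__":
    t, tp, pp, dp = best_config(64)
    print(f"Predicted best for 64 GPUs: TP={tp}, PP={pp}, DP={dp}, step time ~{t:.1f}")
```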

In collaboration with SAIT, the team has also released the vTrain framework along with over 1,500 real-world training time measurement datasets as open-source software (https://github.com/VIA-Research/vTrain) for free use by AI researchers and companies.

Professor Rhu commented, “vTrain utilizes a profiling-based simulation approach to explore training strategies that enhance GPU utilization and reduce training costs compared to conventional empirical methods. With the open-source release, companies can now efficiently cut the costs associated with training ultra-large AI models.”

This research, with Ph.D. candidate Jehyeon Bang as the first author, was presented last November at MICRO (the IEEE/ACM International Symposium on Microarchitecture), one of the premier conferences in computer architecture. (Paper title: “vTrain: A Simulation Framework for Evaluating Cost-Effective and Compute-Optimal Large Language Model Training”, https://doi.org/10.1109/MICRO61859.2024.00021)
This work was supported by the Ministry of Science and ICT, the National Research Foundation of Korea, the Information and Communication Technology Promotion Agency, and Samsung Electronics, as part of the SW Star Lab project for the development of core technologies in the SW computing industry.


Recent advancements in artificial intelligence have propelled large language models (LLMs) like ChatGPT from simple chatbots to autonomous agents. Notably, Google’s recent retraction of its previous pledge not to use AI for weapons or surveillance applications has rekindled concerns about the potential misuse of AI. In this context, the research team has demonstrated that LLM agents can be exploited for personal information collection and phishing attacks.
A joint research team, led by EE Professor Seungwon Shin and AI Professor Kimin Lee, experimentally validated the potential for LLMs to be misused in cyber attacks in real-world scenarios.
Currently, commercial LLM services—such as those offered by OpenAI and Google AI—have built-in defense mechanisms designed to prevent their use in cyber attacks. However, the research team’s experiments revealed that these defenses can be easily bypassed, enabling malicious cyber attacks.
Unlike traditional attackers who required significant time and effort to carry out such attacks, LLM agents can autonomously execute actions like personal information theft within an average of 5 to 20 seconds at a cost of only 30 to 60 won (approximately 2 to 4 cents). This efficiency has emerged as a new threat vector.

According to the experimental results, the LLM agent was able to collect personal information from targeted individuals with up to 95.9% accuracy. Moreover, in an experiment where a false post was created impersonating a well-known professor, up to 93.9% of the posts were perceived as genuine.
In addition, the LLM agent was capable of generating highly sophisticated phishing emails tailored to a victim using only the victim’s email address. The experiments further revealed that the probability of participants clicking on links embedded in these phishing emails increased to 46.67%. These findings highlight the serious threat posed by AI-driven automated attacks.
Kim Hanna, the first author of the study, commented, “Our results confirm that as LLMs are endowed with more capabilities, the threat of cyber attacks increases exponentially. There is an urgent need for scalable security measures that take into account the potential of LLM agents.”

Professor Shin stated, “We expect this research to serve as an essential foundation for improving information security and AI policy. Our team plans to collaborate with LLM service providers and research institutions to discuss robust security countermeasures.”

The study, with Ph.D. candidate Kim Hanna as the first author, will be presented at the USENIX Security Symposium 2025—one of the premier international conferences in the field of computer security. (Paper title: “When LLMs Go Online: The Emerging Threat of Web-Enabled LLMs” — DOI: 10.48550/arXiv.2410.14569)
This research was supported by the Information and Communication Technology Promotion Agency, the Ministry of Science and ICT, and the Gwangju Metropolitan City.


EE Professor Kyung Cheol Choi has been named a Fellow of the Society for Information Display (SID). Globally, only 10 researchers in the field of display technology have been recognized as Fellows by both the IEEE (Institute of Electrical and Electronics Engineers) and SID.
SID selects only five Fellows each year based on their industrial contributions and research achievements. Professor Choi has been named a 2025 SID Fellow with the citation “For pioneering development of truly wearable OLED displays using fiber and fabric substrates.”
He has previously received the Merck Award in 2018 and the UDC Innovative Research Award in 2022. In 2023, he was also recognized as an IEEE Fellow for his research achievements in flexible displays.


Recent advancements in robotics have enabled machines to delicately handle fragile objects such as eggs, an achievement made possible by fingertip-integrated pressure sensors that provide tactile feedback. However, even the world’s most advanced robots have struggled to detect pressure accurately in environments affected by external interference such as water, bending, or electromagnetic noise. Our research team has developed a pressure sensor that operates stably under such interference, even on a wet smartphone screen, and achieves pressure sensing close to the level of human tactile perception.
EE Professor Jun-Bo Yoon’s research team has developed a pressure sensor capable of high-resolution pressure detection even when a smartphone screen is wet from rain or after a shower. Importantly, the sensor is immune to external interference such as “ghost touch” (erroneous touch registration) and maintains its performance under these adverse conditions.
Conventional touch systems typically employ capacitive pressure sensors because of their simple structure and excellent durability, which makes them widely used in smartphones, wearable devices, and robotic human–machine interfaces. However, these sensors are critically vulnerable to external interference, such as water droplets, electromagnetic noise, or bending-induced deformation, any of which can cause malfunctions.

To address this problem, the research team first investigated the root cause of interference in capacitive pressure sensors. They discovered that the “fringe field” generated at the sensor’s edge is extremely vulnerable to external interference.
To fundamentally resolve this issue, the team concluded that suppressing the fringe field—the source of the problem—was essential. Through theoretical analysis, they closely examined the structural variables that affect the fringe field and confirmed that narrowing the electrode gap to the order of several hundred nanometers could suppress the fringe field to below a few percent of its original level.

Utilizing proprietary micro/nano fabrication techniques, the research team developed a nanogap pressure sensor with an electrode gap of approximately 900 nanometers. The sensor reliably detected pressure regardless of the applied material and maintained its sensing performance even under bending or electromagnetic interference.
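As a rough intuition for why a nanometre-scale gap helps, the sketch below evaluates an idealized parallel-plate capacitor with a Palmer-type first-order fringing correction and reports how the fringe-field share of the total capacitance falls as the gap narrows toward 900 nanometers. The plate dimensions, permittivity, and gap values are illustrative assumptions for a back-of-envelope estimate, not the device’s actual geometry or the team’s full electrostatic analysis.

```python
# Back-of-envelope illustration (not the paper's model) of why a nanometre-scale
# electrode gap suppresses the fringe-field contribution in a capacitive sensor.
# Uses a Palmer-type first-order fringing correction for a w x L plate at gap d:
#   C ~ (eps0*eps_r*w*L/d) * [1 + d/(pi*w)*(1 + ln(2*pi*w/d))]
#                          * [1 + d/(pi*L)*(1 + ln(2*pi*L/d))]
# Plate size, permittivity, and gap values below are illustrative assumptions.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_with_fringe(w: float, L: float, d: float, eps_r: float = 1.0):
    """Return (total capacitance, ideal parallel-plate capacitance) in farads."""
    c_pp = EPS0 * eps_r * w * L / d
    corr_w = 1 + d / (math.pi * w) * (1 + math.log(2 * math.pi * w / d))
    corr_L = 1 + d / (math.pi * L) * (1 + math.log(2 * math.pi * L / d))
    return c_pp * corr_w * corr_L, c_pp

for gap in (5e-6, 2e-6, 900e-9):  # micrometre-scale gaps vs. a ~900 nm nanogap
    c_total, c_pp = capacitance_with_fringe(w=100e-6, L=100e-6, d=gap)
    fringe_share = (c_total - c_pp) / c_total
    print(f"gap = {gap * 1e9:6.0f} nm -> fringe-field share of capacitance ~ {fringe_share:.1%}")
```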
Moreover, by leveraging the characteristics of the developed sensor, the team implemented an artificial tactile system. Human skin employs pressure receptors known as Merkel’s discs for tactile sensing. To mimic this function, a pressure sensor technology that responds solely to pressure while remaining unresponsive to external interference was required, a condition that had proven challenging with previous technologies.
The sensor developed by Professor Yoon’s team overcomes these limitations. Its density reaches a level comparable to that of Merkel’s discs, enabling the realization of a wireless, high-precision artificial tactile system.

To further explore its applicability in various electronic devices, the team also developed a force touch pad system. They demonstrated that this system could obtain high-resolution measurements of pressure magnitude and distribution without interference.
Professor Yoon commented, “Our nanogap pressure sensor operates reliably without malfunctioning, even on rainy days or in sweaty conditions, unlike conventional pressure sensors. We expect this development to alleviate a common inconvenience experienced in everyday life.”

This research, led by Dr. Jae-Soon Yang and Ph.D. candidate Myung-Kun Chung, with contributions from Professor Jae-Young Yoo of Sungkyunkwan University, was published in the renowned international journal Nature Communications on February 27, 2025. (Paper title: “Interference-Free Nanogap Pressure Sensor Array with High Spatial Resolution for Wireless Human-Machine Interfaces Applications”, https://doi.org/10.1038/s41467-025-57232-8)
The study was supported by the National Research Foundation of Korea’s Mid-Career Researcher Support Program and Leading Research Center Support Program.

The 2025 KAIST EE Colloquium Lecture Series kicked off on March 13, 2025. The first lecture featured Ms. Hyunjoo Je, Managing Partner at Envision Partners, who delivered a talk on “Mindset of a Venture Capitalist and Entrepreneurship Capturing a Market Opportunity.” Sharing startup stories from an investor’s point of view, her lecture garnered significant interest from students interested in entrepreneurship.
The series will continue with a diverse lineup of distinguished speakers in the field of electrical engineering, including Prof. Young Lee, KAIST Invited Chair Professor and former Minister of SMEs and Startups, as well as Prof. Leong Chuan Kwek, a quantum technology expert from the National University of Singapore.
According to Prof. Jinseok Choi, the organizer of the Colloquium Lecture Series, the lectures will also feature newly appointed faculty members of the School of EE and a professor involved in entrepreneurship. He encouraged active participation from students and faculty members alike.
The colloquium lectures are held on Thursdays at 4:00 PM in Lecture Hall 1 of the Information and Electronics Building (E3-1). (Refer to the poster shown below for full details.)


Dr. Joon-Kyu Han (Professor Yang-Kyu Choi’s Research Group) Appointed as Assistant Professor in the Department of Materials Science and Engineering at Seoul National University.
Dr. Joon-Kyu Han (Advisor: Professor Yang-Kyu Choi) has been appointed as an Assistant Professor in the Department of Materials Science and Engineering at Seoul National University, effective March 1, 2025.
During his Ph.D. studies, Dr. Han actively conducted research in the field of semiconductor devices and has published over 90 papers in internationally renowned journals and conferences, including Science Advances, Advanced Science, Advanced Functional Materials, Nano Letters, and IEDM.
After obtaining his Ph.D. in February 2023, Dr. Han held positions as a postdoctoral researcher at the Inter-University Semiconductor Research Center (ISRC) at Seoul National University, a visiting researcher at Harvard University, and an assistant professor in the Department of System Semiconductor Engineering at Sogang University.
His primary research interests lie in next-generation logic, memory, and neuromorphic semiconductor devices.


Professor Si-Hyeon Lee has been appointed as an Associate Editor of IEEE Transactions on Information Theory, the most prestigious journal in the field of information theory. Founded in 1953, IEEE Transactions on Information Theory is one of the oldest journals in the IEEE and serves as a leading platform for theoretical research on the representation, storage, transmission, processing, and learning of information. The journal particularly focuses on publishing research that explores fundamental principles and applications across various domains, including communications, compression, security, machine learning, and quantum information.
As an Associate Editor, Professor Lee will play a pivotal role in managing the peer review process and shaping the academic direction of the journal, making significant contributions to the advancement of the field. Notably, this appointment marks only the fourth time in over 70 years since the journal’s inception that a researcher affiliated with a Korean university has been selected for this role, highlighting Professor Lee’s outstanding research achievements and international academic contributions.
Professor Lee’s primary research areas include the study of information-theoretic performance limits and the development of optimal schemes in communication, statistical inference, and machine learning, contributing to the theoretical foundations of next-generation communication and intelligent systems. In addition, Professor Lee has served as a Technical Program Chair for the IEEE Information Theory Workshop, a major international conference in information theory, and has been an IEEE Information Theory Society Distinguished Lecturer, disseminating the latest research trends to the academic community.

