School of Electrical Engineering

We strive to be the world’s top IT powerhouse.

Our mission is to lead innovations in information technology, create lasting impact, and educate next-generation leaders of the world.
AI in EE

AI and machine learning are a key thrust in EE research.

AI/machine learning efforts are already a big part of ongoing research in all six divisions of KAIST EE: Computer, Communication, Signal, Wave, Circuit, and Device.
  • Prof. Minsoo Rhu’s Team Develops a Simulation Framework Called vTrain
  • Prof. Jun-Bo Yoon’s Team Achieves Human-Level Tactile Sensing with Breakthrough Pressure Sensor
  • Prof. Seungwon Shin’s Team Validates Cyber Risks of LLMs
  • Prof. Seunghyup Yoo’s Team Develops Wearable Carbon Dioxide Sensor to Enable Real-Time Apnea Diagnosis
  • Prof. Choi and Yoon’s Joint Team Develops Neuromorphic Semiconductor Chip that Learns and Corrects Itself
  • Prof. Jung-Yong Lee’s Team Develops High-Efficiency Avalanche Quantum Dots
  • Prof. Junmo Kim’s Team Develops AI That Imagines and Understands How Images Change Like Humans
  • Prof. Hyunjoo J. Lee’s Team Develops Stretchable Microelectrode Array for Organoid Signal Monitoring
  • Prof. Sanghun Jeon’s Team Develops Hafnia-Based Ferroelectric Memory Technology

Highlights

〈 (From left) Professor Minsoo Rhu, Ph.D. candidate Jehyeon Bang, and Dr. Yujeong 〉

Large AI models such as ChatGPT and DeepSeek are gaining attention as they’re being applied across diverse fields. These large language models (LLMs) require training on massive distributed systems composed of tens of thousands of data center GPUs. For example, the cost of training GPT-4 is estimated at approximately 140 billion won. A team of Korean researchers has developed a technology that optimizes parallelization configurations to increase GPU efficiency and significantly reduce training costs.

 

An EE research team led by Professor Minsoo Rhu, in collaboration with the Samsung Advanced Institute of Technology (SAIT), has developed a simulation framework called vTrain, which accurately predicts and optimizes the training time of LLMs in large-scale distributed environments.

 

To efficiently train LLMs, it’s crucial to identify the optimal distributed training strategy. However, the vast number of potential strategies makes real-world testing prohibitively expensive and time-consuming. As a result, companies currently rely on a limited number of empirically validated strategies, causing inefficient GPU utilization and unnecessary increases in training costs. The absence of suitable large-scale simulation technology has significantly hindered companies from effectively addressing this issue.
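For a sense of why exhaustive real-world testing is infeasible, the short Python sketch below (illustrative only, not part of vTrain) enumerates just the coarsest axis of the search space: how a fixed GPU count factors into data, tensor, and pipeline parallelism degrees. Each resulting configuration would otherwise require an expensive full-scale training run to evaluate.

```python
# Illustrative sketch, not part of vTrain: count only the coarsest choice in
# 3D parallelism -- data (d) x tensor (t) x pipeline (p) degrees with
# d * t * p == n_gpus. Micro-batch size, activation recomputation, and other
# knobs multiply this space much further.
def three_d_parallel_configs(n_gpus):
    configs = []
    for d in range(1, n_gpus + 1):
        if n_gpus % d:
            continue
        rest = n_gpus // d
        for t in range(1, rest + 1):
            if rest % t:
                continue
            configs.append((d, t, rest // t))  # (data, tensor, pipeline)
    return configs

for n in (8, 64, 1024, 4096):
    print(f"{n} GPUs -> {len(three_d_parallel_configs(n))} (d, t, p) configurations")
```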

 

To overcome this limitation, Professor Rhu’s team developed vTrain, which can accurately predict training time and quickly evaluate various parallelization strategies. In experiments conducted in multi-GPU environments, vTrain’s predictions were compared against actual measured training times, yielding a mean absolute percentage error (MAPE) of 8.37% on single-node systems and 14.73% on multi-node systems.
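For reference, the MAPE quoted above is the mean of |predicted − measured| / measured across the tested configurations, expressed as a percentage. A minimal sketch, with made-up numbers rather than the paper’s measurements:

```python
# Mean absolute percentage error (MAPE) between simulator predictions and
# measured training times. The values below are invented for illustration.
def mape(predicted, measured):
    return 100.0 * sum(abs(p - m) / m for p, m in zip(predicted, measured)) / len(measured)

measured_s  = [1200.0, 2500.0, 4100.0]   # hypothetical measured wall-clock times (s)
predicted_s = [1130.0, 2705.0, 4430.0]   # hypothetical simulator estimates (s)
print(f"MAPE: {mape(predicted_s, measured_s):.2f}%")   # -> MAPE: 7.36%
```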

 

〈 Figure 1. Schematic diagram of the vTrain simulator architecture 〉

 

In collaboration with SAIT, the team has also released the vTrain framework along with over 1,500 real-world training time measurement datasets as open-source software (https://github.com/VIA-Research/vTrain) for free use by AI researchers and companies.

 

〈 Figure 2. Comparison of measured and predicted training times for single-node (left) and multi-node (right) systems 〉

 

Professor Rhu commented, “vTrain utilizes a profiling-based simulation approach to explore training strategies that enhance GPU utilization and reduce training costs compared to conventional empirical methods. With the open-source release, companies can now efficiently cut the costs associated with training ultra-large AI models.”

 

〈 Figure 3. Changes in MT-NLG training time and GPU utilization with various parallelization techniques 〉

 

This research, with Ph.D. candidate Jehyeon Bang as the first author, was presented last November at the IEEE/ACM International Symposium on Microarchitecture (MICRO), one of the premier conferences in computer architecture. (Paper title: “vTrain: A Simulation Framework for Evaluating Cost-Effective and Compute-Optimal Large Language Model Training”, https://doi.org/10.1109/MICRO61859.2024.00021)

 

This work was supported by the Ministry of Science and ICT, the National Research Foundation of Korea, the Information and Communication Technology Promotion Agency, and Samsung Electronics, as part of the SW Star Lab project for the development of core technologies in the SW computing industry.

〈 (From left) Ph.D. candidate Kim Hanna, Prof. Seungwon Shin, and Ph.D. candidate Song Minkyoo 〉

Recent advancements in artificial intelligence have propelled large language models (LLMs) like ChatGPT from simple chatbots to autonomous agents. Notably, Google’s recent retraction of its previous pledge not to use AI for weapons or surveillance applications has rekindled concerns about the potential misuse of AI. In this context, the research team has demonstrated that LLM agents can be exploited for personal information collection and phishing attacks.

 

A joint research team, led by EE Professor Seungwon Shin and AI Professor Kimin Lee, experimentally validated the potential for LLMs to be misused in cyber attacks in real-world scenarios.

 

Currently, commercial LLM services—such as those offered by OpenAI and Google AI—have built-in defense mechanisms designed to prevent their use in cyber attacks. However, the research team’s experiments revealed that these defenses can be easily bypassed, enabling malicious cyber attacks.

 

Unlike traditional attacks, which required significant time and effort, LLM agents can autonomously execute actions such as personal information theft within an average of 5 to 20 seconds, at a cost of only 30 to 60 won (approximately 2 to 4 US cents). This efficiency has made them a new threat vector.

 

〈 Figure 1. Illustration showing the process in which an LLM agent utilizes web-based tools to generate responses according to the attacker’s (user’s) requests. 〉

 

According to the experimental results, the LLM agent was able to collect personal information from targeted individuals with up to 95.9% accuracy. Moreover, in an experiment where a false post was created impersonating a well-known professor, up to 93.9% of the posts were perceived as genuine.

 

In addition, the LLM agent was capable of generating highly sophisticated phishing emails tailored to a victim using only the victim’s email address. The experiments further revealed that the probability of participants clicking on links embedded in these phishing emails increased to 46.67%. These findings highlight the serious threat posed by AI-driven automated attacks.

 

Kim Hanna, the first author of the study, commented, “Our results confirm that as LLMs are endowed with more capabilities, the threat of cyber attacks increases exponentially. There is an urgent need for scalable security measures that take into account the potential of LLM agents.”

 

〈 Figure 2. A phishing email generated by an LLM agent (using Claude) targeted at Meta’s CEO, Mark Zuckerberg. The email was created solely based on his email address, with the LLM agent autonomously determining relevant content, sender information, and URL link text. 〉

 

Professor Shin stated, “We expect this research to serve as an essential foundation for improving information security and AI policy. Our team plans to collaborate with LLM service providers and research institutions to discuss robust security countermeasures.”

 

〈 Figure 3. Experimental results showing the extent to which personal information can be collected using a Claude-based LLM agent. In this experiment, personal information of computer science professors was collected. 〉

 

The study, with Ph.D. candidate Kim Hanna as the first author, will be presented at the USENIX Security Symposium 2025—one of the premier international conferences in the field of computer security. (Paper title: “When LLMs Go Online: The Emerging Threat of Web-Enabled LLMs” — DOI: 10.48550/arXiv.2410.14569)

 

This research was supported by the Information and Communication Technology Promotion Agency, the Ministry of Science and ICT, and the Gwangju Metropolitan City.

〈 Professor Kyung Cheol Choi 〉

EE Professor Kyung Cheol Choi has been appointed a Fellow of the Society for Information Display (SID). Globally, only 10 researchers in the display field have been recognized as Fellows by both the IEEE (Institute of Electrical and Electronics Engineers) and SID.

 

SID selects only five Fellows each year, based on industrial contributions and research achievements. Professor Choi was named a 2025 SID Fellow with the citation “For pioneering development of truly wearable OLED displays using fiber and fabric substrates.”

 

He has previously received the Merck Award in 2018 and the UDC Innovative Research Award in 2022. In 2023, he was also recognized as an IEEE Fellow for his research achievements in flexible displays.

〈 (From left) Professor Jun-Bo Yoon and Dr. Jae-Soon Yang 〉

Recent advancements in robotics have enabled machines to delicately handle fragile objects such as eggs, an achievement made possible by fingertip-integrated pressure sensors that provide tactile feedback. However, even the world’s most advanced robots have struggled to detect pressure accurately in environments affected by complex external interference such as water, bending, or electromagnetic noise. The research team has developed a pressure sensor that operates stably under such interference, even on a wet smartphone screen, and achieves pressure sensing close to the level of human tactile perception.

 

EE Professor Jun-Bo Yoon’s research team has developed a pressure sensor capable of high-resolution pressure detection even when a smartphone screen is wet from rain or a shower. Importantly, the sensor is immune to “ghost touches” (erroneous touch registrations) caused by external interference and maintains its performance under these adverse conditions.

 

Conventional touch systems typically employ capacitive pressure sensors because of their simple structure and excellent durability, which have made them ubiquitous in smartphones, wearable devices, and robotic human–machine interfaces. However, these sensors are critically vulnerable to external interference such as water droplets, electromagnetic noise, or bending-induced deformation, any of which can cause malfunctions.

 

Figure 1. (Left) Schematic illustration of a smartphone surface where water impairs proper touch registration on a rainy day. (Center) Schematic diagram showing unintended sensor malfunctions in the presence of interference. (Right) Simulation results of the electric field distribution under normal conditions and in the presence of interference; interference causes distortion of the fringe field.

To address this problem, the research team first investigated the root cause of interference in capacitive pressure sensors. They discovered that the “fringe field” generated at the sensor’s edge is extremely vulnerable to external interference.

 

To fundamentally resolve this issue, the team concluded that suppressing the fringe field—the source of the problem—was essential. Through theoretical analysis, they closely examined the structural variables that affect the fringe field and confirmed that narrowing the electrode gap to the order of several hundred nanometers could suppress the fringe field to below a few percent of its original level.
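As a rough sanity check (not the paper’s derivation), a classical fringing-capacitance approximation such as Palmer’s formula shows why a narrower gap suppresses the fringe contribution. In the sketch below, the electrode width w is an assumed illustrative value; only the gap d reflects the sensor described in the article.

```latex
% Palmer-type approximation for a plate of width w, length L, and gap d:
C \;\approx\; \frac{\varepsilon_0 \varepsilon_r \, w L}{d}
\left[\, 1 + \underbrace{\frac{d}{\pi w}\left(1 + \ln\frac{2\pi w}{d}\right)}_{\text{fringe fraction } f} \,\right]
% Assumed w = 100~\mu\mathrm{m}; gap d \approx 0.9~\mu\mathrm{m} (the ~900 nm device below):
\quad f \approx \frac{0.9}{100\pi}\left(1 + \ln\frac{200\pi}{0.9}\right) \approx 0.02
```

Since the fringe term scales roughly as (d/w)·ln(w/d) in this approximation, shrinking the gap toward the sub-micrometer regime drives the interference-prone fringe field down to the few-percent level the team describes.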

 

Figure 2. (Left) Photograph of the nanogap pressure sensor developed in this study. (Center) Schematic diagram demonstrating how the nanogap design effectively suppresses the fringe field to block external interference. (Right) Electron microscope image of the fabricated nanogap pressure sensor.

Utilizing proprietary micro/nano fabrication techniques, the research team developed a nanogap pressure sensor with an electrode gap of approximately 900 nanometers. The sensor reliably detected pressure regardless of the applied material and maintained its sensing performance even under bending or electromagnetic interference.

 

Moreover, by leveraging the characteristics of the developed sensor, the team implemented an artificial tactile system. Human skin employs pressure receptors known as Merkel’s discs for tactile sensing. To mimic this function, a pressure sensor technology that responds solely to pressure while remaining unresponsive to external interference was required, a condition that had proven challenging with previous technologies.

 

The sensor developed by Professor Yoon’s team overcomes these limitations. Its density reaches a level comparable to that of Merkel’s discs, enabling the realization of a wireless, high-precision artificial tactile system.

 

Figure 3. (Left) Schematic diagram comparing the human method of pressure detection with that of the interference-free, high-resolution nanogap pressure sensor designed to mimic it. (Right) Illustration of a wireless artificial tactile system utilizing the nanogap pressure sensor that can grasp objects even when water is present on the surface. The sensor remains unresponsive to water yet precisely detects pressure.

 

To further explore its applicability in various electronic devices, the team also developed a force touch pad system. They demonstrated that this system could obtain high-resolution measurements of pressure magnitude and distribution without interference.

 

Professor Yoon commented, “Our nanogap pressure sensor operates reliably without malfunctioning, even on rainy days or in sweaty conditions, unlike conventional pressure sensors. We expect this development to alleviate a common inconvenience experienced in everyday life.”

 

Figure 4. (Left) Schematic of the force touch pad system implemented using the nanogap pressure sensor, along with an illustration showing the sensor’s surface covered with water. (Center) Multi-touch measurement results obtained using the force touch pad system in a water-covered scenario. (Right) Three-dimensional measurement results accurately depicting pressure magnitude and distribution without interference or cross-talk from water on the sensor’s surface.

 

This research, led by Dr. Jae-Soon Yang and Ph.D. candidate Myung-Kun Chung, with contributions from Professor Jae-Young Yoo of Sungkyunkwan University, was published in the renowned international journal Nature Communications on February 27, 2025. (Paper title: “Interference-Free Nanogap Pressure Sensor Array with High Spatial Resolution for Wireless Human-Machine Interfaces Applications”, https://doi.org/10.1038/s41467-025-57232-8)

 

The study was supported by the National Research Foundation of Korea’s Mid-Career Researcher Support Program and Leading Research Center Support Program.


The 2025 KAIST EE Colloquium Lecture Series kicked off on March 13, 2025. The first lecture featured Ms. Hyunjoo Je, Managing Partner at Envision Partners, who delivered a talk on “Mindset of a Venture Capitalist and Entrepreneurship Capturing a Market Opportunity.” Sharing startup stories from an investor’s point of view, her lecture drew significant interest from students interested in entrepreneurship.

 

The series will continue with a diverse lineup of distinguished speakers in the field of electrical engineering, including Prof. Young Lee, KAIST Invited Chair Professor and former Minister of SMEs and Startups, as well as Prof. Leong Chuan Kwek, a quantum technology expert from the National University of Singapore.

 

According to Prof. Jinseok Choi, the organizer of the Colloquium Lecture Series, upcoming lectures will also feature newly appointed faculty members of the School of EE and a professor involved in entrepreneurship. He encouraged active participation from students and faculty members alike.

 

The colloquium lectures are held on Thursdays at 4:00 PM in Lecture Hall 1 of the Information and Electronics Building (E3-1). (Refer to the poster below for full details.)

 

〈 2025 Spring Semester Colloquium poster 〉

 

〈 Dr. Joon-Kyu Han 〉

 

Dr. Joon-Kyu Han (Professor Yang-Kyu Choi’s Research Group) Appointed as Assistant Professor in the Department of Materials Science and Engineering at Seoul National University.

 

Dr. Joon-Kyu Han (Advisor: Professor Yang-Kyu Choi) has been appointed as an Assistant Professor in the Department of Materials Science and Engineering at Seoul National University, effective March 1, 2025.

 

During his Ph.D. studies, Dr. Han actively conducted research in the field of semiconductor devices and has published over 90 papers in internationally renowned journals and conferences, including Science Advances, Advanced Science, Advanced Functional Materials, Nano Letters, and IEDM.

 

After obtaining his Ph.D. in February 2023, Dr. Han held positions as a postdoctoral researcher at the Inter-University Semiconductor Research Center (ISRC) at Seoul National University, a visiting researcher at Harvard University, and an assistant professor in the Department of System Semiconductor Engineering at Sogang University.

 

His primary research interests lie in next-generation logic, memory, and neuromorphic semiconductor devices.

〈 Professor Si-Hyeon Lee 〉

Professor Si-Hyeon Lee has been appointed as an Associate Editor of IEEE Transactions on Information Theory, the most prestigious journal in the field of information theory. Founded in 1953, IEEE Transactions on Information Theory is one of the oldest journals in the IEEE and serves as a leading platform for theoretical research on the representation, storage, transmission, processing, and learning of information. The journal particularly focuses on publishing research that explores fundamental principles and applications across various domains, including communications, compression, security, machine learning, and quantum information.

As an Associate Editor, Professor Lee will play a pivotal role in managing the peer review process and shaping the academic direction of the journal, making significant contributions to the advancement of the field. Notably, this appointment marks only the fourth time in the more than 70 years since the journal’s inception that a researcher affiliated with a Korean university has been selected for this role, highlighting Professor Lee’s outstanding research achievements and international academic contributions.

 

Professor Lee’s primary research areas include the study of information-theoretic performance limits and the development of optimal schemes in communication, statistical inference, and machine learning, contributing to the theoretical foundations of next-generation communication and intelligent systems. Additionally, Professor Lee has served as a Technical Program Chair for the IEEE Information Theory Workshop, a major international conference in information theory, and has been actively engaged as an IEEE Information Theory Society Distinguished Lecturer, disseminating the latest research trends to the academic community.

〈 Dr. Taein Shin 〉
Dr. Taein Shin from Professor Jeongho Kim’s research lab has been selected as a recipient of the ‘Best Paper Award’ at ‘DesignCon 2025,’ a prestigious international conference in semiconductor design.
 
Dr. Shin previously won the same award at DesignCon 2022. At that time, Professor Jeongho Kim’s research lab (KAIST TERA Lab) drew significant attention from industry and academia when four of its students, Dr. Shin, Seongguk Kim, Seonguk Choi, and Hyeyeon Kim, received the Best Paper Award, which was given to only eight recipients among all submitted papers.
 
‘DesignCon’ is a globally recognized international conference in semiconductor and package design. Each year, researchers and engineers from leading global tech companies such as Intel, NVIDIA, Google, Micron, Rambus, Texas Instruments (TI), AMD, IBM, and ANSYS, as well as students from renowned universities worldwide, participate in this conference held in Silicon Valley, USA.
 
‘DesignCon’ calls for paper abstracts at the end of June each year and conducts a rigorous review process until the end of December. The submitted papers predominantly focus on practical technologies closely related to industry applications or those that can be directly implemented in products.
 
Among all submitted papers, only up to 20 are shortlisted as Best Paper Award nominees. The authors of these nominated papers must attend the conference in person and deliver a 45-minute oral presentation, after which a strict evaluation process determines the final eight recipients of the Best Paper Award.
 
Dr. Shin attended DesignCon 2025, held over three days from January 28 in San Jose, Silicon Valley. He presented his research alongside fellow KAIST TERA Lab members Hyeyeon Kim, a Ph.D. student, and Hyunjun Ahn, an M.S. student, who were also nominated for the Best Paper Award.
 
A TERA Lab representative stated, “Dr. Shin’s paper was selected from over 100 papers accepted by the conference in late 2024. His contribution to technological innovation in the field was highly regarded by the judging panel.”
 
Dr. Shin’s paper, titled “PSIJ-Based Integrated Power Integrity Design for HBM Using Reinforcement Learning: Beyond the Target Impedance,” introduces a methodology that optimizes power integrity design for high-bandwidth memory (HBM) packages. His approach uses power supply noise-induced jitter (PSIJ) as the design criterion and applies AI to optimize the design parameters that affect jitter, drawing significant attention from the academic and industrial communities.
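To convey the general shape of such an approach (a toy sketch under stated assumptions, not the paper’s method or code), the Python snippet below runs tabular Q-learning in which an agent toggles decoupling capacitors on a small hypothetical power delivery network to minimize an invented jitter proxy. The PDN model, reward, and every constant are assumptions for demonstration only.

```python
# Toy sketch only: tabular Q-learning that places decoupling capacitors on a
# hypothetical PDN to reduce an invented jitter proxy. Nothing here comes from
# the paper; the model, reward, and constants are illustrative assumptions.
import random
from collections import defaultdict

N_SLOTS = 6   # candidate decap sites (hypothetical)
BUDGET = 3    # decap budget (hypothetical)

def psij_proxy(placement):
    """Invented stand-in for power-supply-induced jitter: falls with decap
    coverage, with a penalty for exceeding the budget."""
    coverage = sum(placement)
    return 10.0 / (1.0 + coverage) + 2.0 * max(0, coverage - BUDGET)

def step(state, action):
    nxt = list(state)
    nxt[action] ^= 1                                  # toggle one decap site
    nxt = tuple(nxt)
    return nxt, psij_proxy(state) - psij_proxy(nxt)   # reward = jitter reduction

Q = defaultdict(float)                                # Q[(state, action)]
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(2000):
    state = (0,) * N_SLOTS
    for _ in range(10):
        if random.random() < eps:                     # epsilon-greedy exploration
            action = random.randrange(N_SLOTS)
        else:
            action = max(range(N_SLOTS), key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in range(N_SLOTS))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

state = (0,) * N_SLOTS                                # greedy rollout of learned policy
for _ in range(BUDGET):
    action = max(range(N_SLOTS), key=lambda a: Q[(state, a)])
    state, _ = step(state, action)
print("learned placement:", state, "jitter proxy:", round(psij_proxy(state), 3))
```

The actual design space in the paper involves real PDN parameters and measured jitter models; this sketch only conveys the loop structure of state, action, and jitter-based reward.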
 
The TERA Lab representative further emphasized, “Dr. Shin’s research received high praise from the judges for overcoming the limitations of traditional impedance-based power delivery network (PDN) design by leveraging reinforcement learning and power supply noise jitter. The originality of applying AI to this field was also highly rated.”
 
Dr. Shin stated, “As next-generation HBM-based package systems continue to advance in speed to support large-scale AI implementations, I aim to establish a foundation for semiconductor signal and power integrity design based on the proposed methodology.”
 
Meanwhile, as of March 2025, Professor Jeongho Kim’s research lab comprises 27 students, including 17 master’s and 10 Ph.D. candidates. The lab is conducting research on optimizing various semiconductor package and interconnection designs in both front-end and back-end processes using AI and machine learning techniques such as reinforcement and imitation learning. Additionally, the lab is actively researching HBM-based computing architectures for large-scale AI implementations. 
