Research

Research Highlights

EE Prof. Seungwon Shin’s Team Validates Cyber Risks of LLMs

〈 (from left) Ph.D. candidate Kim Hanna, Prof. Seungwon Shin, and Ph.D. candidate Song Minkyoo 〉

Recent advancements in artificial intelligence have propelled large language models (LLMs) like ChatGPT from simple chatbots to autonomous agents. Notably, Google’s recent retraction of its previous pledge not to use AI for weapons or surveillance applications has rekindled concerns about the potential misuse of AI. Against this backdrop, a research team has demonstrated that LLM agents can be exploited to collect personal information and carry out phishing attacks.

 

A joint research team, led by EE Professor Seungwon Shin and AI Professor Kimin Lee, experimentally validated the potential for LLMs to be misused in cyber attacks in real-world scenarios.

 

Currently, commercial LLM services—such as those offered by OpenAI and Google AI—have built-in defense mechanisms designed to prevent their use in cyber attacks. However, the research team’s experiments revealed that these defenses can be easily bypassed, enabling malicious cyber attacks.

 

Whereas traditional attacks required significant time and effort from a human attacker, LLM agents can autonomously carry out tasks such as personal information theft in an average of 5 to 20 seconds, at a cost of only 30 to 60 won (approximately 2 to 4 U.S. cents) per attempt. This efficiency constitutes a new threat vector.

 

〈 Figure 1. Illustration showing the process in which an LLM agent utilizes web-based tools to generate responses according to the attacker’s (user’s) requests. 〉
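For readers unfamiliar with how such agents operate, the sketch below outlines, in simplified form, the kind of loop the figure depicts: a model that alternates between reasoning and web-tool calls until it has fulfilled the user’s request. This is a minimal conceptual illustration, not the research team’s code; the functions `call_llm` and `web_search` are mocked stand-ins for a commercial LLM API and a web-search tool.

```python
# Minimal conceptual sketch of a web-enabled LLM agent loop (ReAct-style).
# NOT the research team's implementation: `call_llm` and `web_search` are
# mocked stand-ins for a commercial LLM API and a web-search tool.

def call_llm(transcript: str) -> str:
    """Mock LLM call: requests one search, then answers (replace with a real API client)."""
    if "Search results" not in transcript:
        return "SEARCH: public profile of the target"
    return "ANSWER: summary compiled from the retrieved pages"

def web_search(query: str) -> str:
    """Mock web-search tool returning page snippets (replace with a real tool)."""
    return f"[snippets retrieved for: {query}]"

def run_agent(task: str, max_steps: int = 5) -> str:
    """Alternate between LLM reasoning and tool use until an answer is produced."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        # The model either requests a tool call ("SEARCH: ...") or finishes ("ANSWER: ...").
        reply = call_llm(transcript)
        if reply.startswith("SEARCH:"):
            query = reply.removeprefix("SEARCH:").strip()
            transcript += f"Search results for '{query}':\n{web_search(query)}\n"
        else:
            return reply.removeprefix("ANSWER:").strip()
    return "No answer produced within the step budget."

if __name__ == "__main__":
    print(run_agent("Summarize publicly available information about a person."))
```

The risk the study highlights lies precisely in this loop: once live web tools are attached, the same autonomy that serves benign requests can be steered by an attacker’s prompt, and the built-in refusals of commercial services did not reliably stop it.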

 

According to the experimental results, the LLM agent was able to collect personal information on targeted individuals with up to 95.9% accuracy. Moreover, in an experiment in which fake posts impersonating a well-known professor were generated, up to 93.9% of the posts were judged to be genuine.

 

In addition, the LLM agent was capable of generating highly sophisticated phishing emails tailored to a victim using only the victim’s email address. Further experiments showed that the rate at which participants clicked links embedded in these phishing emails reached 46.67%. These findings highlight the serious threat posed by AI-driven automated attacks.

 

Kim Hanna, the first author of the study, commented, “Our results confirm that as LLMs are endowed with more capabilities, the threat of cyber attacks increases exponentially. There is an urgent need for scalable security measures that take into account the potential of LLM agents.”

 

〈 Figure 2. A phishing email generated by an LLM agent (using Claude) targeted at Meta’s CEO, Mark Zuckerberg. The email was created solely based on his email address, with the LLM agent autonomously determining relevant content, sender information, and URL link text. 〉

 

Professor Shin stated, “We expect this research to serve as an essential foundation for improving information security and AI policy. Our team plans to collaborate with LLM service providers and research institutions to discuss robust security countermeasures.”

 

〈 Figure 3. Experimental results showing the extent to which personal information can be collected using a Claude-based LLM agent. In this experiment, personal information of computer science professors was collected. 〉

 

The study, with Ph.D. candidate Kim Hanna as the first author, will be presented at the USENIX Security Symposium 2025—one of the premier international conferences in the field of computer security. (Paper title: “When LLMs Go Online: The Emerging Threat of Web-Enabled LLMs” — DOI: 10.48550/arXiv.2410.14569)

 

This research was supported by the Information and Communication Technology Promotion Agency, the Ministry of Science and ICT, and Gwangju Metropolitan City.