
Recent advancements in artificial intelligence have propelled large language models (LLMs) like ChatGPT from simple chatbots to autonomous agents. Notably, Google's recent retraction of its earlier pledge not to use AI for weapons or surveillance applications has rekindled concerns about the potential misuse of AI. Against this backdrop, a research team has demonstrated that LLM agents can be exploited to collect personal information and mount phishing attacks.
The joint research team, led by EE Professor Seungwon Shin and AI Professor Kimin Lee, experimentally demonstrated that LLM agents can be misused for cyber attacks in real-world scenarios.
Commercial LLM services, such as those offered by OpenAI and Google AI, currently have built-in defense mechanisms designed to prevent their use in cyber attacks. However, the research team's experiments revealed that these defenses can be easily bypassed, allowing the models to be used in malicious cyber attacks.
Whereas such attacks traditionally demanded significant time and effort from human attackers, an LLM agent can autonomously carry out actions such as personal information theft in an average of 5 to 20 seconds at a cost of only 30 to 60 won (approximately 2 to 4 US cents). This efficiency has made LLM agents a new threat vector.

According to the experimental results, the LLM agent was able to collect personal information about targeted individuals with up to 95.9% accuracy. Moreover, in an experiment in which false posts were created impersonating a well-known professor, up to 93.9% of the posts were judged to be genuine.
In addition, the LLM agent was able to generate highly sophisticated phishing emails tailored to a victim using only the victim's email address. In the experiments, the rate at which participants clicked on links embedded in these phishing emails reached 46.67%. These findings highlight the serious threat posed by AI-driven automated attacks.
Kim Hanna, a Ph.D. candidate and the first author of the study, commented, “Our results confirm that as LLMs are endowed with more capabilities, the threat of cyber attacks increases exponentially. There is an urgent need for scalable security measures that take the potential of LLM agents into account.”

Professor Shin stated, “We expect this research to serve as an essential foundation for improving information security and AI policy. Our team plans to collaborate with LLM service providers and research institutions to discuss robust security countermeasures.”

The study will be presented at the USENIX Security Symposium 2025, one of the premier international conferences in the field of computer security. (Paper title: “When LLMs Go Online: The Emerging Threat of Web-Enabled LLMs,” DOI: 10.48550/arXiv.2410.14569)
This research was supported by the Information and Communication Technology Promotion Agency, the Ministry of Science and ICT, and the Gwangju Metropolitan City.