Ethical Problems and Safety of Large Language Models



O Title: Ethical Problems and Safety of Large Language Models
O Speaker: Dr. Hwaran Lee (Naver AI Lab)
O Date: September 8 (Friday)
O Start Time: 14:00
O Venue: N1, Room 110

O Abstract:

Coinciding with the astounding performance of recent large language models (LLMs), their potential risks and social impacts have drawn growing attention. In this talk, I will give a brief introduction to ethical problems such as toxicity, social bias and stereotypes, and human values including ethics and moral norms. Then I will present two new datasets: SQuARe and KoSBi. SQuARe covers sensitive questions to which LLMs' responses could have negative social impacts, and introduces acceptable responses to those questions. KoSBi is a Korean Social Bias dataset that reflects culture and society specific to Korea. Finally, I will explain the safety considerations embedded within HyperCLOVA and discuss the open problems and remaining challenges.


O Bio: Hwaran Lee leads the Language Research Team at NAVER AI Lab and the Safety Team of HyperCLOVA X. Her current research interests are controllable language generation, trustworthiness of language models, and safety and ethics for AI. She obtained a Ph.D. in Electrical Engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 2018, and a B.S. in Mathematical Science from KAIST in 2012. Before joining NAVER AI Lab, she worked at SK T-Brain as a research scientist from 2018 to 2021.