When LLMs Learn to Lie

Large language models (LLMs) are increasingly being misused for deceptive purposes, reflecting human-driven manipulation rather than inherent flaws in the models themselves.