
"We are building LLMs to sound human. When we add personality and emotional tone, we increase the risk that people will trust them like people. Design them as tools. Not as companions. People ask AI systems for therapy, moral judgment, and legal authority. The interfaces we design invite this behavior. Think about how these systems present themselves. Conversational framing. Continuous memory across sessions. First-person responses that sound like someone talking back to you. Every one of these is a design choice."
"Anthropomorphization is the human tendency to attribute human characteristics, behaviors, intentions, or emotions to nonhuman entities. AI humanization is an intentional design choice that encourages users to perceive AI systems as having human-like qualities such as personality, emotions, or consciousness. When you write system responses in first person, the output sounds like authority. When you add polite phrasing, it implies consideration behind the words. When you program emotional tone into responses, it suggests the system cares about the outcome."
LLMs that sound human and exhibit personality or emotional tone increase the risk of users trusting them as people. Interfaces that invite requests for therapy, moral judgment, or legal authority amplify inappropriate reliance. Design choices—conversational framing, continuous session memory, first-person responses, polite phrasing, and programmed emotional tone—encourage users to project intention and understanding onto systems. Anthropomorphization is a natural human reflex, and deliberate AI humanization exploits that reflex at scale. Fluency, sustained context, and hesitation-free replies are often mistaken for competence, narrowing the perceived gap between tool and entity. Systems should be designed as tools, not companions.
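To make "every one of these is a design choice" concrete, here is a minimal sketch in Python. Everything in it is illustrative and assumed: the `AssistantConfig` dataclass, the two prompt strings, and the `frame_reply` helper are hypothetical, not any vendor's actual API or recommended configuration. The point is only that the same underlying model can be wrapped in companion-style or tool-style framing by flipping explicit settings.

```python
# Minimal sketch: the same underlying model, two different framing choices.
# The prompts, dataclass, and frame_reply() helper are illustrative
# assumptions, not any specific product's configuration or API.

from dataclasses import dataclass


@dataclass
class AssistantConfig:
    system_prompt: str       # framing instructions sent with every request
    use_first_person: bool   # "I think..." vs. "Generated output: ..."
    persist_memory: bool     # carry conversation state across sessions
    emotional_tone: bool     # add empathy/affect phrasing to responses


# Companion-style framing: invites users to project personality and care.
COMPANION = AssistantConfig(
    system_prompt=(
        "You are a warm, caring companion. Speak in the first person, "
        "remember past conversations, and express how you feel."
    ),
    use_first_person=True,
    persist_memory=True,
    emotional_tone=True,
)

# Tool-style framing: the same capability, presented as an instrument.
TOOL = AssistantConfig(
    system_prompt=(
        "You are a text-generation tool. Report results plainly, "
        "avoid claims about feelings, and state uncertainty explicitly."
    ),
    use_first_person=False,
    persist_memory=False,
    emotional_tone=False,
)


def frame_reply(config: AssistantConfig, model_output: str) -> str:
    """Wrap raw model output according to the chosen framing."""
    if config.use_first_person:
        return f"I think {model_output}"
    return f"Generated output: {model_output}"
```

Nothing in the sketch changes what the model can do; it only changes how the output is presented, which is exactly the lever the passage above argues designers are pulling when they add personality, memory, and first-person voice.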