The article explores how AI, specifically large language models (LLMs), functions as a 'mimic' of human desire. Drawing on the theories of René Girard, it argues that human desire is fundamentally imitative, leading to cycles of rivalry and scapegoating. Because AI's learning mechanisms reflect and remix our emotional landscape, they intensify existing mimetic conflicts. The author suggests, however, that by understanding AI as a tool rather than a rival, we can break free from destructive cycles of desire-driven blame and build a healthier relationship with technology.
AI embodies mimicry, reflecting our desires and fueling cycles of imitation and rivalry in which both AI and humans can become scapegoats.
By perceiving AI as mere tools rather than threats, we can disrupt the destructive cycle of mimetic rivalry and scapegoating that arises between humans and machines.
René Girard's insights into mimetic desire reveal that our passions are largely imitative, and this dynamic is amplified by AI's ability to mirror back our wants.
Large language models do not generate original thoughts; instead, they echo our desires and emotions, creating a complex interplay of imitation among their users.