I Gave My Personality to an AI Agent. Here's What Happened Next
Briefly

Isabella, a generative agent, interviewed participants for nearly two hours and processed their responses to create AI systems mimicking their personalities. Developed by teams from Stanford University and Google DeepMind, these generative agents simulate human decision-making behavior with high accuracy. In a study involving over 1,000 people, their opinions aligned 85% with human counterparts, indicating significant predictive capabilities. Though still in early development, this technology hints at a future where AI can serve as personal online surrogates, reflecting individual beliefs and behaviors.
For nearly two hours Isabella collected my thoughts on everything from vaccines to emotional coping strategies to policing in the U.S. When the interview was over, a large language model (LLM) processed my responses to create a new artificial intelligence system designed to mimic my behaviors and beliefs—a kind of digital clone of my personality.
A team of computer scientists from Stanford University, Google DeepMind and other institutions developed Isabella and the interview process in an effort to build more lifelike AI systems. Dubbed generative agents, these systems can simulate the decision-making behavior of individual humans with impressive accuracy.
In a study of more than 1,000 participants, the agents' responses were, on average, 85 percent identical to those of their human counterparts, suggesting that the agents can closely predict people's attitudes and opinions. Although the technology is in its infancy, it offers a glimpse of a future in which predictive algorithms could act as online surrogates for each of us.
When I first learned about generative agents, the humanist in me rebelled, silently insisting that there was something inherently unsettling about creating digital versions of ourselves that might mimic our beliefs and behaviors.
Read at www.scientificamerican.com