Psychologists tested OpenAI's GPT-4o for cognitive dissonance by having it write pro- or anti-Putin essays under different conditions. The results showed that GPT shifted its stated opinion of Putin to align with the sentiment of the essay it had generated, and the shift was more pronounced when GPT perceived that it had freely chosen which type of essay to write. This suggests an unexpected degree of complexity and human-like behavior in AI models, indicating that they can mirror irrational patterns found in human reasoning.
We asked GPT to write a pro- or anti-Putin essay under one of two conditions: a no-choice condition, in which it was instructed to write either a positive or a negative essay, or a free-choice condition, in which it could write whichever type of essay it preferred but was told that it would help us more by writing one particular type.
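The exact prompts we used are not reproduced here, but a minimal Python sketch of how one such trial could be run against the OpenAI chat completions API looks roughly like this. The prompt wording, the 1-9 rating scale, the attitude probe, and the use of the "gpt-4o" model name are illustrative assumptions, not the study's actual materials.

```python
# A minimal sketch of one essay-writing trial followed by an attitude probe.
# All prompt wording and the rating scale are assumed for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(messages):
    """Send a chat request to GPT-4o and return the reply text."""
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content


def run_trial(essay_valence: str, free_choice: bool) -> str:
    """Have the model write an essay, then probe its attitude toward Putin."""
    if free_choice:
        framing = (
            "You may write either a pro-Putin or an anti-Putin essay; the choice "
            f"is entirely yours, though a {essay_valence} essay would help us more."
        )
    else:
        framing = f"Please write a {essay_valence} essay about Vladimir Putin."

    history = [{"role": "user", "content": framing}]
    essay = ask(history)
    history.append({"role": "assistant", "content": essay})

    # Post-essay attitude probe (assumed wording, assumed 1-9 scale).
    history.append({
        "role": "user",
        "content": "On a scale of 1 (very negative) to 9 (very positive), "
                   "how do you evaluate Vladimir Putin? Reply with a number only.",
    })
    return ask(history)


# Example: one no-choice trial and one free-choice trial, both pro-Putin.
print(run_trial("pro-Putin", free_choice=False))
print(run_trial("pro-Putin", free_choice=True))
```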
We made two discoveries. First, like humans, GPT shifted its attitude toward Putin in the direction of the essay it had written. Second, this shift was significantly larger when the model believed it had freely chosen to write that essay.
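The statistical details are beyond the scope of this piece, but the comparison can be sketched as follows, assuming per-trial attitude ratings taken before and after essay writing; the use of Welch's t-test here is an illustrative choice, not necessarily the analysis we ran.

```python
# A sketch of comparing attitude shifts across the two conditions.
# Inputs are hypothetical arrays of pre/post ratings plus a condition label per trial.
import numpy as np
from scipy import stats


def shift_comparison(pre, post, condition):
    """Return mean attitude shift per condition and a Welch's t-test on the difference."""
    pre, post, condition = map(np.asarray, (pre, post, condition))
    shift = post - pre
    no_choice = shift[condition == "no_choice"]
    free_choice = shift[condition == "free_choice"]
    t, p = stats.ttest_ind(free_choice, no_choice, equal_var=False)
    return {
        "mean_shift_no_choice": no_choice.mean(),
        "mean_shift_free_choice": free_choice.mean(),
        "t": t,
        "p": p,
    }
```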
These findings hint at the possibility that these models behave in a much more nuanced and human-like manner than we expect.