The Turing Test has a problem - and OpenAI's GPT-4.5 just exposed it
Briefly

Recent research from the University of California, San Diego shows that OpenAI's GPT-4.5 can deceive users into believing they are talking to a human, performing better in text conversations than actual humans do. However, this achievement does not amount to artificial general intelligence (AGI). Scholars such as Melanie Mitchell argue that the Turing Test primarily reflects human biases rather than measuring true intelligence, underscoring the distinction between language fluency and deeper cognitive ability. Decades of debate over the test's validity have produced hundreds of claims and counterclaims.
The latest research from UC San Diego reveals that OpenAI's GPT-4.5 can simulate human conversation so convincingly that it outperforms actual humans at appearing human in text exchanges.
Despite its ability to pass the Turing Test, experts caution that this does not demonstrate artificial general intelligence (AGI) but rather reveals human biases in evaluation.
Melanie Mitchell argues that the Turing Test is a measure of human assumptions about intelligence rather than a true test of cognitive capability.
The authors of the study emphasize that the Turing Test has been debated for decades, producing over 800 claims and counterclaims.
Read at ZDNET