Facebook Researchers Test AI's Intelligence and Find It Is Unfortunately Quite Stupid
Briefly

The team, which includes "AI godfather" and Meta chief scientist Yann LeCun, devised a benchmark called GAIA, made up of 466 questions that "are conceptually simple for humans yet challenging for most advanced AIs," per a yet-to-be-peer-reviewed paper.
The results speak for themselves: human respondents correctly answered 92 percent of the questions, while GPT-4, even when equipped with some manually selected plugins, scored a measly 15 percent. OpenAI's recently released GPT-4 Turbo scored less than ten percent, according to the team's published GAIA leaderboard.
In other words, the research suggests we're likely still a long way from artificial general intelligence (AGI), the point at which AI systems can match or outperform humans at most intellectual tasks.
Read at Futurism