Panels of peers are needed to gauge AI's trustworthiness - experts are not enough

"In his World View, Vinay Chaudhri proposes using an expert interview - a 'Sunstein test' - to gauge an AI model's true level of understanding (see )."

"This is a good way to check technical proficiency, but risks anointing a select group of elites as the arbiters of an AI tool's 'trustworthiness'. This would inadvertently reinforce power structures critiqued by Cathy O'Neil in her book review, which highlights that the objectives of AI systems reflect the goals of the select few people who build and control them (see Nature 646, 1048-1049; 2025)."
An expert interview or 'Sunstein test' can assess an AI model's technical understanding and proficiency. Relying on a narrow set of experts to judge trustworthiness can grant decision-making authority to a small elite. Concentrated authority can reinforce existing power structures that influence AI design and deployment. Objectives encoded into AI systems tend to reflect the priorities of those who build and control them, increasing the risk of misaligned values. Broader, more inclusive evaluation mechanisms are necessary to avoid amplifying biases and to ensure AI systems serve diverse societal interests rather than a select few.