"This might provide a powerful new way to prevent rogue nations or irresponsible companies from secretly developing dangerous AI."
"You could design protocols such that you can only deploy a model if you've run a particular evaluation and gotten a score above a certain threshold, let's say for safety," says Tim Fist, a fellow at CNAS.