'British DARPA' to build AI gatekeepers for 'safety guarantees'
Briefly

ARIA is introducing 'quantitative safety guarantees' for AI, comparable to the safety standards expected in nuclear power and aviation: probabilistic assurances that a system's actions will not cause harm. At the core is a 'gatekeeper' AI model that confines other AI agents within defined safety parameters, with the goal of making AI safe enough to deploy in critical domains such as infrastructure and clinical trials.
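To make the gatekeeper idea concrete, here is a minimal, purely illustrative sketch in Python: an outer check that lets an agent act only when the estimated probability of harm stays below a quantitative bound. Every function name, action, and risk figure below is invented for illustration; ARIA's actual programme envisions mathematically verified world models, not a hand-written lookup table.

```python
# Illustrative sketch only: a hypothetical "gatekeeper" that vets an AI
# agent's proposed actions against a quantitative safety bound, in the
# spirit of ARIA's probabilistic-guarantee approach.

RISK_THRESHOLD = 1e-4  # maximum tolerated probability of harm per action


def estimate_harm_probability(action: str) -> float:
    """Stand-in risk model; a real system would derive this bound from a
    formally verified model of the environment, not a lookup table."""
    known_risks = {
        "shed_load": 1e-6,
        "reroute_power": 5e-5,
        "shutdown_grid": 0.3,
    }
    # Actions the model cannot bound are treated as maximally unsafe.
    return known_risks.get(action, 1.0)


def gatekeeper(proposed_action: str) -> bool:
    """Approve an action only if its estimated harm probability is bounded."""
    return estimate_harm_probability(proposed_action) <= RISK_THRESHOLD


for action in ["shed_load", "reroute_power", "shutdown_grid"]:
    verdict = "approved" if gatekeeper(action) else "blocked"
    print(f"{action}: {verdict}")
```

The point of the pattern is that safety rests on the gatekeeper's quantitative bound rather than on the agent's own behaviour, which is what distinguishes it from approaches that rely on training alone.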
David Dalrymple, the researcher behind ARIA's AI safety scheme, combines his background in AI safety with the gatekeeper concept. The initiative, backed by £59 million in funding, aims for a scalable proof-of-concept with applications in areas such as electricity grid balancing and supply chain management.
Dalrymple stresses what sets ARIA's 'gatekeeper' strategy apart: it bridges commercial and academic approaches to AI. He criticizes prevalent AI development for lacking safety guarantees, warns against relying solely on traditional academic logic, and advocates a fusion of cutting-edge capabilities with rigorous mathematical reasoning.
Read at TNW | Deep-Tech