Google DeepMind director calls for clarity and consistency in AI regulations

"That's my hope for the field, is that we can get to consistency, so that we can see all of the benefits of this technology," said Terra Terwilliger, director of strategic initiatives at Google DeepMind. This call for a standardized understanding of safe AI is crucial in navigating the ongoing conversation surrounding AI safety and governance. Addressing AI safety with clearer guidelines could open up avenues for innovative applications of AI while ensuring public trust in these technologies.
"The thing that makes being a doctor scary is that you can get sued for medical malpractice," Madigan-Curtis noted, highlighting the importance of legal accountability in AI development. If companies like OpenAI claim their models are powerful, they should bear responsibility for ensuring they are built and implemented safely. This accountability could lead to a paradigm where ethical considerations become integral to the AI development process.
"If your model is being used to terrorize a certain population, shouldn't we be able to turn it off, or, you know, prevent the use?" asked Madigan-Curtis. This statement addresses the need for built-in safety mechanisms within AI systems. The suggestion of a 'kill-switch' serves as a relevant point in the discussion on preventative measures to mitigate potential harms caused by AI technologies.
Terwilliger also argues that regulation should distinguish between foundation models and the applications built on top of them. "It's really important that we all lean into helping regulators understand the nuances of AI," she emphasized. Rules tailored to each layer of the AI stack would assign responsibility where it actually lies, protecting both innovation and safety.
Read at Fortune