Balancing innovation and safety: Inside NIST's AI Safety Institute
Briefly

"Our mission is to advance the science of AI safety and at the same time advance the implementation and adoption of that science," Kelly said.
Whether the institute's work leads to regulations or other guidelines, Kelly said the intent is not to slow the use and adoption of generative AI. The institute merely wants the tech to advance through standards and guidance, she said.
"The Biden administration firmly believes that safety breeds trust. Trust breeds adoption, and adoption breeds innovation," Kelly said.
"This is going to be an entirely new U.S. government capacity to directly test frontier AI models and systems before deployment," she said.
Read at Nextgov.com