Using sound to spot an imminent lithium-ion battery failure
NIST developed a machine learning-based early warning system for lithium-ion battery fires that recognizes a distinctive sound signaling imminent failure.
Traditional smoke alarms are inadequate for detecting the rapid escalation of lithium-ion battery fires. Early warning is crucial for safety.
Chief Counsel for the National Institute of Standards and Technology - IPWatchdog.com
The Chief Counsel for NIST oversees a comprehensive legal program, providing essential advice on measurement, standards, and technology regulations.
AI firms and civil society groups plead for federal AI law
Establishment of the US AI Safety Institute is crucial for enhancing AI standards and safety amidst growing concerns.
OpenAI, Anthropic to collab with NIST on AI safety testing
The U.S. undertakes a safety-focused collaboration with OpenAI and Anthropic for AI model testing before public release.
NIST releases a tool for testing AI model risk | TechCrunch
Dioptra is a tool re-released by NIST to assess AI risks and test the effects of malicious attacks, aiding in benchmarking AI models and evaluating developers' claims.
OpenAI, Anthropic agree to get their models tested for safety before making them public
NIST formed the US AI Safety Institute Consortium to establish guidelines ensuring safe AI development and management by leveraging collaboration among key tech firms.
This tool tests AI's resilience to 'poisoned' data
Re-release of NIST tool Dioptra to test AI model susceptibility to malicious data, in response to President Biden's Executive Order on AI development.
Feds appoint "AI doomer" to run US AI safety institute
The US AI Safety Institute appointed Paul Christiano, who has expressed concerns about AI development leading to potential 'doom,' as head of AI safety.
Federal report on Surfside collapse won't be released until 2026. What's taking so long?
The Champlain Towers South investigation will now release a draft report in 2026, delayed by various challenges including interviews and testing.
Training NIST Privacy Framework
The NIST Privacy Framework helps organizations assess and implement privacy protections effectively.
NIST Launches Program to Gauge How Far Gen AI-Generated Summaries Are From "Human Quality"
NIST's public generative AI evaluation program covers text-to-text and text-to-image tasks; teams can participate as generators or discriminators, with submissions due in August.
NIST issues guidance on a mathematical approach to data privacy
The National Institute of Standards and Technology (NIST) has released new draft guidance on adopting differential privacy as part of security infrastructure.
The guidance aims to strike a balance between privacy and accuracy, allowing data to be released publicly without revealing individuals within the dataset.
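The core idea behind differential privacy can be illustrated with the Laplace mechanism, the textbook way to make a numeric query private. This is a minimal sketch of the general technique, not code from or endorsed by NIST's guidance; the dataset and query are made up for illustration.

```python
# Sketch of the Laplace mechanism for differential privacy.
# Illustrative only -- not drawn from NIST's draft guidance.
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample from Laplace(0, scale) via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1 (adding or removing one person
    # changes the count by at most 1), so Laplace noise with scale
    # 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: ages of individuals in a survey.
ages = [23, 35, 17, 52, 41, 16, 29, 60]

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
noisy = dp_count(ages, lambda a: a >= 18, epsilon=0.5)
print(f"true count: 6, noisy count: {noisy:.2f}")
```

The privacy/accuracy trade-off the guidance describes is the choice of epsilon: the noisy count can be published, and no single individual's presence in the dataset meaningfully changes its distribution.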
NIST's emerging tech work will be 'very difficult' without sustained funding, director says
NIST emphasizes research priorities in AI and quantum information sciences for FY25, highlighting the need for consistent federal funding.