Stanford AI Index 2024 Report: Growth of AI Regulations and Generative AI Investment
The 2024 AI Index report highlights a significant increase in Generative AI investment and a decrease in overall private AI investment since 2021. [ more ]
Trying to tame AI: Seoul summit flags hurdles to regulation
The global network of AI safety institutes emerging after the Bletchley Park summit signals progress toward actually implementing AI regulation. [ more ]
Researchers Develop New Technique to Wipe Dangerous Knowledge From AI Systems
A newly developed method for detecting and removing potentially dangerous knowledge from AI models has been introduced.
Experts from various fields collaborated to develop a set of questions to evaluate whether AI models could contribute to creating and deploying weapons of mass destruction. [ more ]
Senators studied AI for a year. Critics call the result 'pathetic.'
Senate Majority Leader Chuck Schumer and a bipartisan group unveiled a 31-page AI roadmap calling for increased funding, sparking debate over its lack of detail on protections. [ more ]
Stanford's Rob Reich to Serve as Senior Advisor to the U.S. AI Safety Institute (AISI) - Stanford Impact Labs
The U.S. AI Safety Institute (AISI) under the Department of Commerce is an important governmental initiative to address AI safety, following similar efforts in the EU and the UK. [ more ]
NIST's new AI safety institute to focus on synthetic content, international outreach
The U.S. AI Safety Institute focuses on standardizing AI tools, monitoring synthetically generated content, and international collaboration.
Three pillars guide the USAISI's work: testing and evaluation, safety and security protocols, and developing guidance for AI-generated content. [ more ]
Director of Research, Institute for Ethics in AI, University of Oxford - Minerva
The Accelerator Fellowship Programme at Oxford focuses on AI regulation through a multidisciplinary approach, requiring leadership with international experience. [ more ]
The international debate on controlling AI and autonomous weapons is divided between voluntary codes of conduct and a legally binding international ban. [ more ]
Bill Would Force AI Companies to Cite Data Sources
AI companies may have to disclose where they obtained data for their systems under a new bill introduced by Rep. Adam Schiff.
The bill aims to set rules on how AI systems are trained, potentially slowing the rapid advancement of technologies such as advanced chatbots and image generators. [ more ]
Exclusive | New AI law to secure rights of news publishers: Ashwini Vaishnaw
The government plans to introduce a new AI law to protect news publishers and content creators and to minimize harm to users.
The law will focus on balancing the interests of various stakeholders like news publishers, content creators, and AI technologies like large language models. [ more ]
Congressman Getting Master's Degree in Machine Learning So He Can Make Well-Informed Laws About AI
Representative Don Beyer, at 73, enrolled in a machine learning master's degree program to understand AI's impact, highlighting the necessity for lawmakers to acquire technological knowledge. [ more ]
MEPs approve landmark AI Act, look ahead to implementation
The EU Parliament passed the EU AI Act into law with a 523-46 vote, marking a significant milestone in AI regulation.
The AI Act establishes risk tiers for AI technologies, requiring human oversight, data governance, and technical documentation for transparency. [ more ]
The European Parliament approved the first framework for AI regulation, focusing on mitigating risks and making technology more 'human-centric'.
The AI Act categorizes AI products by risk level, with scrutiny scaled accordingly, placing the EU at the global forefront in addressing AI-related dangers. [ more ]
Centre working on draft AI regulation framework: Three things the government is focussing on - Times of India
India aims to unveil an AI regulatory framework by mid-year, with a focus on global participation and accountability for harms caused or enabled by platforms.
The government's three focus points for AI regulation include harnessing AI for economic growth, addressing potential risks and harms, and building strong talent in India for global competitiveness. [ more ]
'Behind the times': Washington tries to catch up with AI's use in health care
Regulators in Washington are grappling with how to regulate artificial intelligence in healthcare, amid concern about overregulation.
AI's impact on healthcare is already widespread: the FDA has approved over 692 AI products, and algorithms are in use for a range of purposes. [ more ]
Richard Branson Signs Open Letter Calling for AI Regulation | Entrepreneur
Richard Branson is calling for regulation and caution in the use and advancement of AI technology.
An open letter from the Elders and the Future of Life Institute warns that AI without governance could be a threat to humanity and world peace. [ more ]
Aim policies at 'hardware' to ensure AI safety, say experts
International regulation of nuclear supplies could serve as a model for regulating AI computing.
Policy ideas for AI regulation include increasing visibility of AI computing, allocating compute resources for societal benefit, and enforcing restrictions on computing power. [ more ]