University of Notre Dame Joins AI Safety Institute Consortium
The University of Notre Dame has joined the Artificial Intelligence Safety Institute Consortium (AISIC) to address the challenges and risks associated with AI.
AISIC aims to develop standards and measurement techniques that ensure the safety and trustworthiness of AI systems. [ more ]
AI Safety Summit Talks with Yoshua Bengio (remote) Luma
The AI Safety Summit Talks aim to address AI risks and mitigation strategies with leading experts from the field, fostering public engagement and awareness. [ more ]
Legendary Silicon Valley investor Vinod Khosla says the existential risk of sentient AI killing us is 'not worthy of conversation'
Silicon Valley is divided into two factions: 'doomers' who worry about the risks of AI and proponents of effective accelerationism who believe in its positive potential.
Venture capitalist Vinod Khosla dismisses the 'doomers' and believes the real risk to worry about is China, not sentient AI killing humanity. [ more ]
CISA unveils guidelines for AI and critical infrastructure
The Cybersecurity and Infrastructure Security Agency released safety guidelines for critical infrastructure, addressing AI risks and obligations under the Biden administration's executive order. [ more ]
To understand the risks posed by AI, follow the money
Predicting technological evolution is challenging, but economic risks from AI misalignment between profits and societal interests are generally knowable in advance. [ more ]
Act now on AI before it's too late, says UNESCO's AI lead
The second Global Forum on the Ethics of AI organized by UNESCO is focused on broadening the conversation around AI risks and considering AI's impacts beyond those discussed by first-world countries and business leaders.
UNESCO aims to move away from just having principles on AI ethics and focus on practical implementation through the Readiness Assessment Methodology (RAM) to measure countries' commitments. [ more ]
'World-First' Agreement on AI Reached - Data Matters Privacy Blog
The "Bletchley Declaration", endorsed by 28 countries and the EU, highlights the commitment to manage risks associated with highly capable general-purpose AI models.
The Global AI Safety Summit brought together policymakers, academics, and executives to address responsible development of AI and was seen as a diplomatic breakthrough. [ more ]
The frantic battle over OpenAI shows that money triumphs in the end | Robert Reich
OpenAI, originally a research-oriented non-profit, shifted to a capped profit structure in 2019 to attract investors.
The involvement of big money and profit-seeking investors is endangering OpenAI's non-profit safety mission.
Building an enterprise to gain the benefits of AI while avoiding risks involves a governance structure with ethicists, a for-profit commercial arm, and limitations on profit flow. [ more ]
Emmett Shear, the new head of OpenAI: A 'doomer' who wants to curb artificial intelligence
Emmett Shear, a self-proclaimed 'doomer' who sees AI as a threat, has been chosen as the new leader of OpenAI, a prominent AI company partnered with Microsoft.
Shear believes that AI has the potential to bring about the apocalypse and advocates for slowing down its development to minimize risks.
He considers the risk of AI leading to universal destruction as terrifying and estimates the probability of doom to be between 5% and 50%. [ more ]