AWS's Response to Generative AI in 2023 (and What IT Pros Should Expect in 2024)
Generative AI has dominated conversations with customers in Australia and New Zealand in 2023, prompting AWS to shift its agenda.
AWS expects generative AI use cases to graduate to production grade in 2024 and is focusing on integrating responsible AI and building data platforms.
John Snow Labs Introduces Automated Responsible AI Testing Capabilities
John Snow Labs launched a no-code testing tool to evaluate AI models for bias and fairness, empowering non-technical domain experts.
Google's new AI model is lighter, more efficient and even more intelligent
Google unveiled Gemini 1.5 Flash, its most efficient AI model yet, capable of multimodal reasoning and data extraction.
Crafting A Conscience For Generative AI In Marketing | AdExchanger
Generative AI tools can generate misinformation, break copyright law, and perpetuate stereotypes, but humans are responsible for the consequences.
Companies should establish frameworks for ethical and responsible AI use and prioritize fact-checking and obtaining proper rights and permissions for copyrighted material.
Google squashes AI teams together in push for fresh models
Google consolidates generative AI teams to boost development under DeepMind.
The UK's Agile, Sector-Specific Approach to AI Regulation Is Promising
The UK government plans to place greater responsibility on existing regulators to oversee AI development and may introduce binding requirements for highly capable AI systems.
The government has announced increased funding for AI, including funding for regulators, research hubs, and projects developing responsible AI solutions.
Exclusive: Public trust in AI is sinking across the board
Trust in AI companies has decreased globally, with a significant drop in the U.S.
Emphasis on responsible AI, transparency, and putting control back in users' hands to rebuild trust.
Kay Firth-Butterfield On Harnessing AI's Power Responsibly
Responsible AI expert Kay Firth-Butterfield expresses caution about the over-reliance on current AI models in healthcare.
Firth-Butterfield has worked on creating frameworks and playbooks to ensure responsible development and use of AI, and advises governments and organizations on implementing AI responsibly.
New York City Takes Aim at AI
Political leaders are taking notice of the impact of AI, with the US and EU both implementing measures to regulate the technology.
New York City has released a comprehensive AI action plan to guide the responsible use of AI within the city.
The plan addresses factors such as guiding principles, risk assessment standards, and ways to promote knowledge and AI skill development.
£100m boost in AI research will propel transformative innovations
Nine new AI research hubs in the UK will deliver innovative AI technologies.
The hubs will focus on applications of AI ranging from healthcare to power-efficient electronics.
An Alliance Calling For More Open AI Should Heed Their Own Call
Meta, IBM, and over 50 other founding members have formed an AI Alliance to promote open, safe, and responsible AI.
The alliance aims to reduce the risk of harm caused by advanced AI models and to establish standards and benchmarks.
Federal support for alternative AI pipelines and nonproprietary knowledge is crucial for diversifying the tech sector and ensuring democratic safeguards for AI.
Critical Futures Talks: a new event and podcast series from the Master in Design for Responsible AI by Elisava and IAM
The Master in Design for Responsible AI is organizing Critical Futures Talks, a series of hybrid events, interviews, and a podcast on the intersection of Responsible AI, media, and design.
The talks are open to the public and will feature a range of perspectives from faculty, program participants, and special guests.
MIT SMR's 10 AI Must-Reads for 2023
Artificial intelligence, especially OpenAI's ChatGPT, was a dominant topic in 2023.
Responsible AI (RAI) programs are struggling to keep pace with the ethical challenges raised by rapid AI advancements.