OpenAI on the defensive after multiple PR setbacks in one week
OpenAI faced negative attention over GPT-4o's flirty AI assistant, leading to defensive actions and key resignations in the Superalignment team. [ more ]
Who owns your voice? Scarlett Johansson OpenAI complaint raises questions
The use of generative artificial intelligence (genAI) raises questions about existing laws' ability to protect a person's appearance and voice. [ more ]
International Writing Guilds Set Out 'Ethical Framework' for Use of AI in Screenwriting
Professional screenwriters' guilds are working to create an ethical framework for AI use in screenwriting.
Key principles include maintaining writers' creative authority, transparency in AI use, consent for training AI on writers' work, and fair compensation. [ more ]
Moral AI And How We Get There with Prof Walter Sinnott-Armstrong | Practical Ethics
Building and using AI ethically is a crucial topic explored by Walter Sinnott-Armstrong and others in the book 'Moral AI and How We Get There'. [ more ]
Microsoft CEO Bashes Human-Like AI After OpenAI's Scarlett Johansson Scandal
Nadella believes AI should be viewed as a tool, not given human attributes like 'intelligence.' AI's human-like traits raise concerns about its role in society. [ more ]
The next AI winter could be caused by users' trust issues, and 'mindful friction' can keep it from happening
AI in the workplace should act as a co-pilot, not on autopilot, with a need for next-level controls and designing products that consider both AI capabilities and human judgment. [ more ]
UK data protection watchdog ends privacy probe of Snap's GenAI chatbot, but warns industry | TechCrunch
The UK's data protection watchdog closed its investigation into Snap's AI chatbot, satisfied with its privacy measures, but warns the industry to assess risks before launching generative AI tools. [ more ]
Breaking: EU Passes First Major Piece of AI Regulation
EU passed comprehensive AI regulation act to categorize AI technologies based on risk levels, aiming to align deployment with societal values and ethical standards. [ more ]
Inside Higher Ed | Higher Education News, Events and Jobs
AI-generated legal outputs often contain errors and falsehoods, leading to real-world consequences.
Hallucination, where AI models produce responses that don't align with reality, poses a significant challenge in the use of large language models. [ more ]
Mistral CEO Says AI Companies Are Trying to Build God
The CEO of Mistral, a new AI firm, rejects the notion of creating artificial general intelligence (AGI), equating it to a desire to create God. [ more ]
AI has opposing factions: doomers who fear disaster and accelerationists who predict abundance, neither representative of the whole industry.
AI ethicists work to mitigate harm from AI, critiquing racist biases in predictive policing and school algorithms. Doomers overlook these existing harms. [ more ]
Should artists be paid for training data? OpenAI VP wouldn't say | TechCrunch
Artists whose work was used to train AI like ChatGPT may not be compensated due to legal complexities and fair use arguments by companies like OpenAI. [ more ]
Arvida Byström's artwork explores womanhood in the context of the internet, challenging societal norms.
Her latest project involves creating pornographic self-portraits using an ethically questionable AI tool and selling them as real on a subscription site. [ more ]
A pink slime site used AI to rewrite our AI ethics article - Poynter
Shortly after Poynter released its AI ethics guide, a near-identical article, likely written by artificial intelligence, appeared on a sketchy website.
The Tech Gate article copied Poynter's content with minor changes, likely generated by AI, impacting the credibility of original news sources. [ more ]
Researchers Develop New Technique to Wipe Dangerous Knowledge From AI Systems
A newly developed method to detect and remove potentially dangerous knowledge from AI models has been introduced.
Experts from various fields collaborated to develop a set of questions to evaluate whether AI models could contribute to creating and deploying weapons of mass destruction. [ more ]