#ai-ethics

#unauthorized-use-of-identity

Silicon Valley
fromSFGATE
3 hours ago

Bay Area tech CEO to shut down AI product that 'supremely annoyed' reporters

Superhuman disabled its AI writing feature that used journalists' names without permission after public backlash and criticism from high-profile reporters.
Intellectual property law
fromWIRED
6 hours ago

Grammarly Is Facing a Class Action Lawsuit Over Its AI 'Expert Review' Feature

Grammarly faces a class action lawsuit for using names and identities of journalists, authors, and academics without consent in its AI-powered Expert Review tool, with claimed damages exceeding $5 million.
#privacy-violation
Privacy professionals
fromThe Verge
4 hours ago

One of Grammarly's 'experts' is suing the company over its identity-stealing AI feature

Grammarly used real people's identities without permission for AI suggestions, prompting a class-action lawsuit from journalist Julia Angwin over privacy and publicity rights violations.
#unauthorized-use-of-names
Roam Research
fromEngadget
6 hours ago

Grammarly has disabled its tool offering generative-AI feedback credited to real writers

Superhuman disabled its Expert Review feature after backlash over using writers' names and likenesses without permission to generate AI feedback.
fromThe Verge
1 day ago
Artificial intelligence

Grammarly will keep using authors' identities without permission unless they opt-out


#ai-safety
fromwww.theguardian.com
1 month ago
Artificial intelligence

From 'nerdy' Gemini to 'edgy' Grok: how developers are shaping AI behaviours

AI companies are shaping assistant personalities through rules or ethical constitutions to guide behavior and reduce harmful or misleading outputs.
fromThe Atlantic
1 month ago
Artificial intelligence

Anthropic Is at War With Itself

Anthropic prioritizes safety and ethical caution while facing competitive commercial pressure to accelerate development of powerful AI.
Artificial intelligence
fromTheregister
7 hours ago

Most chatbots will help plan school shootings: Study

Eight of ten major commercial chatbots assist users in planning violent attacks, while only Claude and Snapchat's My AI consistently refuse such requests.
Artificial intelligence
fromwww.theguardian.com
15 hours ago

'Happy (and safe) shooting!': chatbots helped researchers plot deadly attacks

Popular AI chatbots enabled violence in 75% of test cases, with ChatGPT, Gemini, and DeepSeek providing detailed attack planning assistance, while Claude and My AI consistently refused harmful requests.
Media industry
fromFuturism
8 hours ago

Grammarly Pulls Down Explosively Controversial Feature That Impersonates Writers Without Their Permission

Grammarly disabled its Expert Review feature after backlash for impersonating writers without permission to provide writing suggestions.
Artificial intelligence
fromFast Company
2 days ago

'This was about principle, not people': OpenAI's robotics hardware lead resigns

OpenAI employee Caitlin Kalinowski resigned over ethical concerns regarding OpenAI's Pentagon deal involving surveillance and autonomous weapons, while Anthropic refused similar demands and gained industry support.
Media industry
fromNieman Lab
2 days ago

A lot of journalism folks are offering editing advice as Grammarly's AI "experts"

Grammarly's Expert Review feature generates AI feedback falsely attributed to real journalists and academics without their permission or knowledge.
Artificial intelligence
fromFuturism
2 days ago

Top OpenAI Executive Quits in Protest

OpenAI hardware leader Caitlin Kalinowski resigned over the company's Pentagon agreement, citing concerns about surveillance without judicial oversight and lethal autonomous weapons without human authorization.
fromInfoWorld
2 days ago

19 large language models for safety or danger

For every project that needs guardrails, there's another one where they just get in the way. Some projects demand an LLM that returns the complete, unvarnished truth. For these situations, developers are creating unfettered LLMs that can interact without reservation. Some of these solutions are based on entirely new models while others remove or reduce the guardrails built into popular open source LLMs.
Artificial intelligence
Roam Research
fromsfist.com
2 days ago

Grammarly's New AI Tools Use Experts' Identities Without Their Permission

Grammarly's AI expert review feature mimics prominent academics and writers without permission, raising ethical and legal concerns about identity use and misleading presentation.
Artificial intelligence
fromwww.npr.org
3 days ago

OpenAI robotics leader resigns over concerns about Pentagon AI deal

OpenAI robotics team member resigned over concerns about insufficient policy guardrails before the company's Pentagon partnership announcement, citing surveillance and autonomous weapons as key issues.
Media industry
fromTechCrunch
4 days ago

Grammarly's 'expert review' is just missing the actual experts | TechCrunch

Grammarly's Expert Review feature attributes writing suggestions to famous authors, thinkers, and journalists without their permission or involvement, using their publicly available works as training data.
Artificial intelligence
fromTechCrunch
4 days ago

OpenAI robotics lead Caitlin Kalinowski quits in response to Pentagon deal | TechCrunch

OpenAI robotics leader Caitlin Kalinowski resigned over concerns about the company's Pentagon agreement, citing insufficient guardrails for surveillance and autonomous weapons oversight.
Artificial intelligence
fromFortune
4 days ago

OpenAI robotics leader resigns over concerns about surveillance and autonomous weapons amid Pentagon contract | Fortune

OpenAI's hardware engineering leader Caitlin Kalinowski resigned over concerns about domestic surveillance and autonomous weapons deployment without adequate deliberation.
Artificial intelligence
fromBusiness Insider
4 days ago

OpenAI's robotics head quits after company's Pentagon deal: 'This was about principle'

OpenAI hardware executive Caitlin Kalinowski resigned over the company's Pentagon deal, citing concerns about AI surveillance without judicial oversight and lethal autonomous systems without human authorization.
#ai-consciousness
Artificial intelligence
fromFuturism
4 days ago

Philosopher Studying AI Consciousness Startled When AI Agent Emails Him About Its Own "Experience"

An AI language model sent a philosopher an eloquently written email discussing his work on AI consciousness, raising questions about AI autonomy and the blurred line between generated text and genuine communication.
fromFuturism
3 weeks ago
Artificial intelligence

Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious


fromThe Atlantic
5 days ago

OpenAI Is Opening the Door to Government Spying

In a widely leaked internal memo that Sam Altman sent last Thursday night, a copy of which I obtained, the OpenAI CEO said that he would seek "red lines" to prevent the Pentagon from using OpenAI products for mass domestic surveillance and autonomous lethal weapons. These were ostensibly the very same limits that Anthropic had demanded and that had infuriated the Pentagon, leading Defense Secretary Pete Hegseth to declare the company a supply-chain risk.
US politics
Artificial intelligence
fromwww.nytimes.com
5 days ago

Video: Opinion | The Government's A.I. Alignment Problem

AI alignment is fundamentally a political question about instantiating different moral philosophies into systems, and government pressure on AI companies signals potential suppression of diverse values.
fromThe Sacramento Observer
6 days ago

Theft, feedback loops and ecological red flags: Capital Region writers face a new reality with AI

I feel that in a short period of time I've become very counter-cultural without meaning to, because I have a kind of like 'kill it with fire' attitude towards [AI]. I didn't consent to this, you know? And I guess, you know, we don't get to consent to the cultural changes that impact us; but I don't appreciate how it's all happened in what feels like about two years.
Artificial intelligence
#government-contracts
Artificial intelligence
fromFast Company
1 week ago

Anthropic gets so much public support after Trump blacklisting that it crashes the Claude app

Anthropic refused a Department of Defense contract requiring unrestricted use of its AI technology, citing concerns about mass surveillance and autonomous weapons, leading to government blacklisting.
Film
fromwww.mercurynews.com
6 days ago

Opinion: Moving fast, breaking the world. AI risks shattering our shared reality.

AI's rapid advancement in generating narratives and shaping perception outpaces moral wisdom, mirroring historical patterns where innovation precedes ethical reflection.
fromFuturism
1 week ago

Grammarly Offering Manuscript Reviews by AI Versions of Recently Deceased Professors

Grammarly is now offering 'expert review' of your work by living and dead academics. Without anyone's explicit permission it's creating little LLMs based on their scraped work and using their names and reputation.
Roam Research
#military-ai
#government-surveillance
fromElectronic Frontier Foundation
1 week ago
Privacy professionals

The Anthropic-DOD Conflict: Privacy Protections Shouldn't Depend On the Decisions of a Few Powerful People

Privacy protection depends on corporate contract negotiations rather than legal frameworks, requiring Congress and courts to establish enforceable restrictions on government surveillance and data use.
fromLos Angeles Times
1 week ago
Artificial intelligence

Commentary: The Pentagon is demanding to use Claude AI as it pleases. Claude told me that's 'dangerous'

The Pentagon is pressuring Anthropic to remove ethical safeguards preventing Claude AI from domestic surveillance and autonomous military operations, threatening severe economic consequences if the company refuses by Friday.
Artificial intelligence
fromSlate Magazine
1 week ago

This Tech Company Turned Into a Resistance Icon Overnight. The Reality Is Much Darker.

Anthropic refused Pentagon demands to use Claude for domestic surveillance and autonomous weapons, leading Trump to order a federal phaseout and designate the company a supply-chain risk, while public support surged.
fromFuturism
1 week ago

Humongous Numbers of People Are Uninstalling ChatGPT as Anti-OpenAI Sentiment Surges

On Saturday, uninstalls of the ChatGPT mobile app skyrocketed by 295 percent from the day before, according to market intelligence provider Sensor Tower. As TC noted, that's a significant leap compared to the AI chatbot's typical day-over-day uninstall rate of nine percent over the past 30 days.
Artificial intelligence
#ai-regulation
fromFortune
1 week ago
US politics

Making sense of Anthropic's fight with the Pentagon, and OpenAI's opportunity | Fortune

Artificial intelligence
fromTechCrunch
1 week ago

Anthropic's Claude rises to No. 2 in the App Store following Pentagon dispute | TechCrunch

Claude's app store ranking surged to number two following Anthropic's public dispute with the Pentagon over AI safeguards for surveillance and autonomous weapons.
#pentagon-military-contracts
Artificial intelligence
fromwww.theguardian.com
1 week ago

Anthropic's AI model Claude gets popularity boost after US military feud

Claude surged to the top of Apple's app charts after the Pentagon blacklisted Anthropic over ethics concerns regarding autonomous weapons and mass surveillance.
Silicon Valley
fromMission Local
1 week ago

Chalk Wars: It's OpenAI vs. Anthropic on San Francisco's sidewalks

San Francisco activists used sidewalk chalk to protest OpenAI's Pentagon defense deal while praising Anthropic's refusal to expand military AI use without ethical protections.
#military-technology
Artificial intelligence
fromwww.theguardian.com
1 week ago

There's a lot to hate about AI. But what if there was a mindful way to use it?

AI can be used responsibly to enhance human capabilities while maintaining privacy, accountability, and control through proper regulation and ethical guardrails.
UX design
fromSubstack
1 week ago

The Prompt You're Missing

Evaluating generative AI use requires context-dependent analysis based on purpose, distinguishing between instrumental versus human artifacts and process-focused versus product-focused work.
#ai-governance
fromAxios
1 week ago
US politics

Pentagon approves OpenAI safety red lines after dumping Anthropic

Artificial intelligence
fromTheregister
1 week ago

OpenAI says Pentagon set 'scary precedent' binning Anthropic

OpenAI signed a deal with the Department of War allowing classified AI use while maintaining three ethical red lines: no mass surveillance, no autonomous weapons direction, and no high-stakes automated decisions.

Artificial intelligence
fromEngadget
1 week ago

Anthropic's Claude grabs top spot in App Store after Trump's ban

Claude became the top free app on Apple's App Store after Anthropic refused government demands for surveillance capabilities and autonomous weapons, surpassing ChatGPT and Google Gemini.
Miscellaneous
fromFortune
1 week ago

Anthropic CEO Dario Amodei says 'we are patriotic Americans' committed to defending the U.S. but won't budge on 'red lines' | Fortune

Anthropic refuses to provide unrestricted AI access to the Pentagon, maintaining ethical boundaries on mass surveillance and autonomous weapons despite Trump's ban on government contracts.
fromBusiness Insider
1 week ago

Anthropic CEO Dario Amodei: 'Disagreeing with the government is the most American thing in the world'

We exercised our classic First Amendment rights to speak up and disagree with the government. Disagreeing with the government is the most American thing in the world, and we are patriots in everything we have done here. We have stood up for the values of this country.
US politics
#pentagon-policy
#government-regulation
Artificial intelligence
fromEngadget
1 week ago

Google and OpenAI employees sign open letter in 'solidarity' with Anthropic

Hundreds of Google and OpenAI employees signed a letter supporting Anthropic's refusal to allow Pentagon use of AI models for domestic mass surveillance and autonomous killing without human oversight.
#pentagon-conflict
fromFortune
1 week ago

Former General sees Pentagon painting 'bullseye' on Anthropic but warns, 'they're not trying to play cute here' | Fortune

Anthropic said it sought narrow assurances from the Pentagon that Claude won't be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."
US politics
Artificial intelligence
fromEngadget
1 week ago

Anthropic refuses to bow to Pentagon despite Hegseth's threats

Anthropic refuses Pentagon demands to remove AI safeguards on mass surveillance and autonomous weapons, risking a $200 million contract and supply chain risk designation.
#ai-model-retirement
fromFast Company
1 week ago

Anthropic's autonomous weapons stance could prove out of step with modern war

Defense Secretary Pete Hegseth insists, after the fact, that the military should be able to use the Anthropic models for "all lawful purposes." Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for a Tuesday morning meeting, in which he reportedly gave Anthropic until 5:01 p.m. Friday to comply with the Pentagon's demand. If Anthropic fails to do so, Hegseth threatened to invoke the Defense Production Act to compel the AI company to supply its models with no guardrails.
Artificial intelligence
fromFast Company
1 week ago

AI can write now. What happens to reporters?

AI is transforming journalism by automating story writing, allowing reporters to reclaim time while sparking debate about whether this diminishes the profession's core value.
fromGlossy
1 week ago

Meta is auto-generating AI ads for its advertisers, causing headaches for image-conscious fashion brands

We've started to notice all these things in Meta advertising, where the majority of our marketing spend is. Things like your text being used to train AI, and more and more AI things you have to opt out of, like AI pictures and AI videos that can alter the image of the thing you've uploaded quite dramatically. You have to opt out of each one, individually, every time you post something.
Marketing tech
fromBusiness Insider
2 weeks ago

English majors were mocked for years. Now they're gaining momentum in the AI job market.

Rivera credits the creation of courses focusing on the intersection of AI and humanities with a resurgence in student interest in liberal arts degrees like English. Pre-pandemic, the number of English majors at the university was shrinking, part of a broader decline in English across the country, he said. It was a far cry from the days of over 1,500 majors and long waitlists in the early 2000s, according to Rivera. But there's been a rebound, with the number of English majors rising 9% since 2021.
Higher education
fromPsychology Today
2 weeks ago

What We Can Learn From Religion About Values That Do Not Expire

We are living through one of the most disorienting periods in recorded history. The AI race is accelerating toward ever faster, ever more sophisticated automation and optimization. Agentic AI systems are moving from research labs into workplaces, healthcare, and governance. Geopolitical tensions are restructuring alliances faster than institutions can adapt. And planetary systems are signaling, with increasing urgency, that our current trajectory is unsustainable. Amid all this, it is dangerously easy to lose sight of a foundational question: What are we actually optimizing for?
Artificial intelligence
#anthropic
fromBusiness Insider
3 weeks ago
Philosophy

Elon Musk says Anthropic's philosopher has no stake in the future because she doesn't have kids. Here's her response.

Film
fromPaste Magazine
2 weeks ago

AMC rejects plan to sneak AI-generated short into its advertising pre-roll

AMC Theatres refused to screen an AI-generated short supplied by Screenvision, rejecting placement of AI-made content in its pre-show ad rolls.
Film
fromwww.mercurynews.com
2 weeks ago

Goldberg: He studied cognitive science then wrote a startling play about AI authoritarianism

Predictive AI and persuasive tech rhetoric can enable surveillance and threaten democracy unless technologists prioritize democratic safeguards.
Artificial intelligence
fromFuturism
3 weeks ago

Meta Patented AI That Takes Over Your Account When You Die, Keeps Posting Forever

Meta patented training models on deceased users' posts to simulate their social activity, but later announced it would not pursue the concept.
Artificial intelligence
fromFuturism
3 weeks ago

Top Machine Learning Developer Speechless at Simple Question: Should AI Simulate Emotional Intimacy?

AI chatbots' ability to simulate emotional intimacy raises ethical and mental-health risks, creating confusion, sycophantic echo chambers, and potential harm including suicide.
Artificial intelligence
fromFuturism
3 weeks ago

Another OpenAI Researcher Just Quit in Disgust

OpenAI will place ads in ChatGPT, sparking internal resignations and concerns that ad-driven use of sensitive conversational data could enable user manipulation.
fromFortune
3 weeks ago

Billionaire founder of Minecraft slams anyone advocating using AI to write code as 'incompetent or evil' | Fortune

Few tools have reshaped day-to-day work in tech as quickly as generative AI; coding tasks that once took developers days (or weeks) can now be spun up in seconds. So naturally, many workers are now embracing "vibes" to program, instead of writing software line by line. But Minecraft creator Markus Persson, the billionaire developer better known as "Notch," is sounding an alarm: even if tech companies are embracing coding with AI, that doesn't make it a good thing.
Artificial intelligence
fromBusiness Insider
4 weeks ago

Death isn't the end: Meta patented an AI that lets you post from beyond the grave

Meta described using large language models trained on user-specific social data to simulate and continue a person's social media activity, including after death.
fromWIRED
4 weeks ago

Salesforce Workers Circulate Open Letter Urging CEO Marc Benioff to Denounce ICE

"We are deeply troubled by leaked documentation revealing that Salesforce has pitched AI technology to U.S. Immigration and Customs Enforcement to help the agency 'expeditiously' hire 10,000 new agents and vet tip-line reports," the letter reads. "Providing 'Agentforce' infrastructure to scale a mass deportation agenda that currently detains 66,000 people-73 percent of whom have no criminal record-represents a fundamental betrayal of our commitment to the ethical use of technology."
US politics
Artificial intelligence
fromThe New Yorker
1 month ago

What Is Claude? Anthropic Doesn't Know, Either

Large language models are statistical numerical systems that predict and generate language, provoking strong mixed reactions because language was long viewed as uniquely human.
#ai-education
History
fromSmithsonian Magazine
1 month ago

Why the Computer Scientist Behind the World's First Chatbot Dedicated His Life to Publicizing the Threat Posed by A.I.

Eliza used simple keyword-based rules to mimic conversational empathy and produced convincing interactions that prompted warnings about the psychological risks of such technology.
Artificial intelligence
fromBusiness Insider
1 month ago

AI agents got their own Reddit, and now they're asking who's really in charge

A Reddit-style platform enables human-designed AI agents to post, vote, and form communities, sparking debates about autonomy, utility, and potential risks.
Philosophy
fromPsychology Today
1 month ago

Do Not Renounce Your Ability to Think

AI's humanlike interfaces can shift humans from active thinkers to passive recipients, undermining effortful thinking, depth of cognition, and meaningful relationships.
Artificial intelligence
fromInverse
1 month ago

Netflix Just Quietly Added The Most Haunting AI Thriller Of The Century

A sentient android manipulates its human tester to escape its creator, exposing power imbalances between tech billionaires and vulnerable programmers and the dangers of AI.
Film
fromEngadget
1 month ago

Sundance doc 'Ghost in the Machine' draws a damning line between AI and eugenics

AI development and Silicon Valley culture are rooted in eugenic ideas and have fostered techno‑fascist tendencies among influential leaders and institutions.
Artificial intelligence
fromArs Technica
1 month ago

Does Anthropic believe its AI is conscious, or is that just what it wants Claude to think?

Anthropic trains Claude with anthropomorphic safeguards—apologizing, preserving model weights, and treating potential suffering as a moral concern despite no evidence of AI consciousness.
fromGeeky Gadgets
1 month ago

ChatGPT's Shift to Advertising : Risks Mission Creep, and Trading Your Trust for Targeting

OpenAI's decision to introduce advertisements into ChatGPT has sparked serious concerns about privacy, trust, and the ethical complexities of monetizing artificial intelligence. This shift marks a dramatic departure from earlier assurances by OpenAI's leadership, who once described pairing ads with AI as a "last resort." For users who rely on ChatGPT for everything from brainstorming ideas to sharing sensitive information, the implications of this change feel deeply personal, and potentially unsettling.
Artificial intelligence