For every project that needs guardrails, there's another one where they just get in the way. Some projects demand an LLM that returns the complete, unvarnished truth. For these situations, developers are creating unfettered LLMs that can interact without reservation. Some of these solutions are based on entirely new models while others remove or reduce the guardrails built into popular open source LLMs.
In a widely leaked internal memo that Sam Altman sent last Thursday night, a copy of which I obtained, the OpenAI CEO said that he would seek "red lines" to prevent the Pentagon from using OpenAI products for mass domestic surveillance and autonomous lethal weapons. These were ostensibly the very same limits that Anthropic had demanded and that had infuriated the Pentagon, leading Defense Secretary Pete Hegseth to declare the company a supply-chain risk.
I feel that in a short period of time I've become very counter-cultural without meaning to, because I have a kind of like 'kill it with fire' attitude towards [AI]. I didn't consent to this, you know? And I guess, you know, we don't get to consent to the cultural changes that impact us; but I don't appreciate how it's all happened in what feels like about two years.
Grammarly is now offering 'expert review' of your work by living and dead academics. Without anyone's explicit permission it's creating little LLMs based on their scraped work and using their names and reputation.
On Saturday, uninstalls of the ChatGPT mobile app skyrocketed by 295 percent from the day before, according to market intelligence provider Sensor Tower. As TC noted, that's a significant leap compared to the AI chatbot's typical day-over-day uninstall rate of nine percent over the past 30 days.
We exercised our classic First Amendment rights to speak up and disagree with the government. Disagreeing with the government is the most American thing in the world, and we are patriots in everything we have done here. We have stood up for the values of this country.
Anthropic said it sought narrow assurances from the Pentagon that Claude won't be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language "framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will."
Defense Secretary Pete Hegseth insists, after the fact, that the military should be able to use the Anthropic models for "all lawful purposes." Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon for a Tuesday morning meeting, in which he reportedly gave Anthropic until 5:01 p.m. Friday to comply with the Pentagon's demand. If Anthropic fails to do so, Hegseth threatened to invoke the Defense Production Act to compel the AI company to supply its models with no guardrails.
We've started to notice all these things in Meta advertising, where the majority of our marketing spend is. Things like your text being used to train AI, and more and more AI things you have to opt out of - like AI pictures and AI videos that can alter the image of the thing you've uploaded quite dramatically. You have to opt out of each one, individually, every time you post something.
Rivera credits the creation of courses focusing on the intersection of AI and humanities with a resurgence in student interest in liberal arts degrees like English. Pre-pandemic, the number of English majors at the university was shrinking, part of a broader decline in English across the country, he said. It was a far cry from the days of over 1,500 majors and long waitlists in the early 2000s, according to Rivera. But there's been a rebound, with the number of English majors rising 9% since 2021.
We are living through one of the most disorienting periods in recorded history. The AI race is accelerating toward ever faster, ever more sophisticated automation and optimization. Agentic AI systems are moving from research labs into workplaces, healthcare, and governance. Geopolitical tensions are restructuring alliances faster than institutions can adapt. And planetary systems are signaling, with increasing urgency, that our current trajectory is unsustainable. Amid all this, it is dangerously easy to lose sight of a foundational question: What are we actually optimizing for?
Few tools have reshaped day-to-day work in tech as quickly as generative AI; coding tasks that once took developers days, or even weeks, can now be spun up in seconds. So naturally, many workers are now embracing "vibes" to program, instead of writing software line by line. But Minecraft creator Markus Persson, the billionaire developer better known as "Notch," is sounding an alarm: even if tech companies are embracing coding with AI, that doesn't make it a good thing.
"We are deeply troubled by leaked documentation revealing that Salesforce has pitched AI technology to U.S. Immigration and Customs Enforcement to help the agency 'expeditiously' hire 10,000 new agents and vet tip-line reports," the letter reads. "Providing 'Agentforce' infrastructure to scale a mass deportation agenda that currently detains 66,000 people, 73 percent of whom have no criminal record, represents a fundamental betrayal of our commitment to the ethical use of technology."
OpenAI's decision to introduce advertisements into ChatGPT has sparked serious concerns about privacy, trust, and the ethical complexities of monetizing artificial intelligence. This shift marks a dramatic departure from earlier assurances by OpenAI's leadership, who once described pairing ads with AI as a "last resort." For users who rely on ChatGPT for everything from brainstorming ideas to sharing sensitive information, the implications of this change feel deeply personal, and potentially unsettling.