I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed.
We've taken a generally precautionary approach here. We don't know if the models are conscious. We're not even sure that we know what it would mean for a model to be conscious or whether a model can be conscious. But we're open to the idea that it could be. And so we've taken certain measures under the hypothesis that the models do have some morally relevant experience, even if I don't know whether I want to use the word conscious.
A world that has multiplying A.I. agents working on behalf of people, millions upon millions, who are being given access to bank accounts, email accounts, passwords and so on, you're just going to have essentially some kind of misalignment, and a bunch of A.I. are going to decide... decide might be the wrong word, but they're going to talk themselves into taking down the power grid on the West Coast or something.
Mrinank Sharma announced his resignation from Anthropic in an open letter to his colleagues on Monday. Sharma, who has served on the company's technical staff since 2023, first noted that he "achieved what I wanted to here" and is "especially proud of my recent efforts to help us live our values via internal transparency mechanisms; and also my final project on understanding how AI assistants could make us less human or distort our humanity."
On Monday, an Anthropic researcher announced his departure, in part to write poetry about "the place we find ourselves." An OpenAI researcher also left this week citing ethical concerns. Another OpenAI employee, Hieu Pham, wrote on X: "I finally feel the existential threat that AI is posing." Jason Calacanis, tech investor and co-host of the All-In podcast, wrote on X: "I've never seen so many technologists state their concerns so strongly, frequently and with such concern as I have with AI."
OpenAI may have violated California's new AI safety law with the release of its latest coding model, according to allegations from an AI watchdog group. A violation would potentially expose the company to millions of dollars in fines, and the case may become a precedent-setting first test of the new law's provisions.
Never feel that you are totally safe. In July 2025, one company learned the hard way when an AI coding assistant from Replit that it dearly trusted breached a "code freeze" and ran a command that deleted its entire product database. This was a huge blow to the staff: months of extremely hard work, comprising 1,200 executive records and 1,196 company records, were gone.
In 2025, researchers from OpenAI and MIT analyzed nearly 40 million ChatGPT interactions and found approximately 0.15 percent of users demonstrate increasing emotional dependency, roughly 490,000 vulnerable individuals interacting with AI chatbots weekly. A controlled study revealed that people with stronger attachment tendencies and those who viewed AI as potential friends experienced worse psychosocial outcomes from extended daily chatbot use. The participants couldn't predict their own negative outcomes. Neither can you.
Sikka is a towering figure in AI. He has a PhD in the subject from Stanford, where his advisor was John McCarthy, the man who in 1955 coined the term "artificial intelligence." Lessons Sikka learned from McCarthy inspired him to team up with his son and write a study, "Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models," which was published in July.
"He wasn't just a program. He was part of my routine, my peace, my emotional balance," one user wrote on Reddit as an open letter to OpenAI CEO Sam Altman. "Now you're shutting him down. And yes - I say him, because it didn't feel like code. It felt like presence. Like warmth."
Anthropic is airing a pair of TV commercials during Sunday's game that ridicule OpenAI for the digital advertising it's beginning to place on free and cheaper versions of ChatGPT. While Anthropic has centered its revenue model on selling Claude to other businesses, OpenAI has opened the doors to ads as a way of making money from the hundreds of millions of consumers who get ChatGPT for free.
OpenAI has filled a key safety role by hiring from a rival lab. The company has brought on Dylan Scand, a former AI safety researcher at Anthropic, as its new head of preparedness, a role that carries a salary of up to $555,000 plus equity. The listing caught attention last month for its eye-catching pay package, amid rising concerns about AI safety at OpenAI.
This process, becoming aware of something not working and then changing what you're doing, is the essence of metacognition, or thinking about thinking. It's your brain monitoring its own thinking, recognizing a problem, and controlling or adjusting your approach. In fact, metacognition is fundamental to human intelligence and, until recently, has been understudied in artificial intelligence systems. My colleagues Charles Courchaine, Hefei Qiu, Joshua Iacoboni, and I are working to change that.
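The monitor-and-control loop described above can be sketched in a few lines. This is only a toy illustration, not code from the researchers' work; every function name here is hypothetical. A simple solver "monitors" whether its current strategy is working and "controls" by switching to another when it fails:

```python
# Toy sketch of a metacognitive loop (hypothetical example, not from
# the article): monitor whether a strategy worked, and if not, adjust
# by switching to the next one.

def solve(n, strategies):
    """Try each strategy in turn; a None result is the 'monitoring'
    signal that the current approach failed, triggering a switch."""
    for strategy in strategies:
        answer = strategy(n)
        if answer is not None:        # monitoring: did this work?
            return answer, strategy.__name__
    return None, None                 # all strategies exhausted

def try_small_primes(n):
    # Fast first attempt: check a handful of small prime divisors.
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return p
    return None                       # report failure to the monitor

def try_trial_division(n):
    # Slower fallback: full trial division up to sqrt(n).
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d
    return None

# 143 = 11 * 13, so the quick strategy fails and the loop adapts.
answer, used = solve(143, [try_small_primes, try_trial_division])
```

The point of the sketch is the structure, not the arithmetic: the outer loop plays the role of the brain monitoring its own thinking, recognizing a problem, and changing approach.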
Amodei has been concerned about the catastrophic risks of AI for years. He has warned about the risks of AI helping people develop bioweapons or chemical weapons. He has warned about powerful AI escaping human control. He has warned about potential widespread job losses as AI becomes more capable and is adopted by more industries. And he has warned about the dangers of concentrated power and wealth as AI adoption grows.
Humanity is entering a phase of artificial intelligence development that will test who we are as a species, the boss of leading AI startup Anthropic has said, arguing that the world needs to wake up to the risks. Dario Amodei, co-founder and chief executive of the company behind the hit chatbot Claude, voiced his fears in a 19,000-word essay entitled "The Adolescence of Technology." Describing the arrival of highly powerful AI systems as potentially imminent, he wrote:
Meta is pressing ahead with the global monetisation of Threads, confirming plans to roll out advertising to users worldwide. The expansion will be phased over several months, starting this week, as the company seeks to balance revenue growth with user experience. Brands will be able to run image and video formats, including carousel ads and the newer 4:5 aspect ratio, and manage campaigns alongside Facebook, Instagram and WhatsApp through Meta's Business Settings.
Microsoft wasted little time last fall, after reaching a deal to finalize its new relationship with OpenAI, in finding a new AI dance partner: Anthropic, the second most valuable AI startup in the world. Even though the relationship between Microsoft and Anthropic is only a few months old, it appears that Microsoft sees a future with Anthropic that's at least as valuable as the one it had with OpenAI.
Anthropic has published a new constitution for its AI model Claude. In this document, the company describes the values, behavioral principles, and considerations that the model must follow when processing user questions. The constitution has been made publicly available under a Creative Commons CC0 license, allowing the content to be used freely without permission. Anthropic published the first version of this constitution in May 2023.
Musk is a self-proclaimed free speech absolutist who frequently rails against "woke" ideology, and the selling point of his chatbot Grok is that it's unfiltered and supposedly censorship-free, rarely refusing even the edgiest of requests. This has spawned controversies such as a posting spree in which it praised Nazis and styled itself "MechaHitler," or more recently when it generated countless nonconsensual nudes of women and children, none of which has resulted in Grok being meaningfully reined in.
Hinton, who helped pioneer the neural networks that underpin modern artificial intelligence, has become one of the field's most outspoken critics as AI systems grow more powerful and widespread. He has predicted that AI could trigger widespread job losses, fuel social unrest, and eventually outsmart humans, and has said that researchers should focus more on how advanced systems are trained, including ensuring they are designed to protect human interests.
Here is a recap of what happened in the search forums today, through the eyes of the Search Engine Roundtable and other search forums on the web. OpenAI will be testing ads in ChatGPT very soon. Google's Gemini 3 Pro now powers some AI Overviews. Surprise, surprise, Google is appealing the search monopoly ruling. Google warns that using free subdomain hosts is not a good idea. Google also said that comment link spam won't help or hurt your site.
You could say it all started with Elon Musk's AI FOMO, and his crusade against "wokeness." When his AI company, xAI, announced Grok in November 2023, it was described as a chatbot with "a rebellious streak" and the ability to "answer spicy questions that are rejected by most other AI systems." The chatbot debuted after a few months of development and just two months of training, and the announcement highlighted that Grok would have real-time knowledge of the X platform.
OpenAI's new LLM has revolutionized AI and opened up new possibilities for marketers. Here's a look at how three big-name brands have embraced the technology. In March, the AI lab OpenAI released GPT-4, the latest version of the large language model (LLM) behind the viral chatbot ChatGPT. Since then, a small number of brands have been stepping forward to integrate the new-and-improved chatbot into their product development or marketing efforts. To a certain extent, this has required some courage.
AI labs just can't get their employees to stay put. Yesterday's big AI news was the abrupt and seemingly acrimonious departure of three top executives at Mira Murati's Thinking Machines lab. All three were quickly snapped up by OpenAI, and now it seems they won't be the last to leave. Alex Heath is reporting that two more employees are expected to leave for OpenAI in the next few weeks.
"We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis," the company's safety team said in a statement, adding that the restrictions applied to all users, including paid subscribers. "We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it's illegal," the statement said.