The contract language we received overnight from the Department of War made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons. New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will. Despite DOW's recent public statements, these narrow safeguards have been the crux of our negotiations for months.
A senior Pentagon official told Business Insider that Anthropic has until 5:01 p.m. Eastern Time on Friday to agree to the Defense Department's terms; otherwise, it will find other levers to compel the AI startup to cooperate with the military. The official said Hegseth is prepared for the use of the Defense Production Act (DPA) - a decades-old wartime law that gives the president broad authority over private companies in the interest of national security - on top of designating Anthropic a supply chain risk.
The transformation of the Santa Clara Valley from a bucolic fruit-growing region into the technological powerhouse of Silicon Valley, thanks largely to Stanford University's presence, fueled a dramatic evolution of California's economy, which would be the fourth largest in the world were it a nation. Technology isn't just a linchpin of the economy; the immense personal wealth of its creators has, perhaps unfortunately, become a crucial source of revenue for the state.
"It's one thing if ... we're fighting China and you're developing your model, but once you start selling sexualized chatbots to kids in my state, now I have a problem with that, and I'm going to get involved there, and the Supreme Court is going to back me up on that," Cox said.
San Francisco public school teachers and their union celebrated Friday after negotiating a tentative agreement for a new contract with higher pay and fully funded family healthcare, ending a four-day walkout that was the city's first educator strike in nearly half a century. United Educators of San Francisco (UESF) said its bargaining team reached a two-year tentative deal with the San Francisco Unified School District (SFUSD) at around 5:30 am local time Friday.
"With this law, we are implementing European requirements in a maximally innovation-friendly way and creating lean AI supervision with a clear focus on the needs of the economy," Federal Digital Minister Karsten Wildberger said in a statement.
The ads are funded by a pro-AI political action committee that supports the expansion of artificial intelligence, yet they aim to weaken Bores's candidacy by tying him to his past work in tech. They accuse Bores, who has recently called for abolishing Immigration and Customs Enforcement (ICE), of hypocrisy because he previously worked at Palantir, a data analytics company whose contracts with ICE have made it a frequent target of activists.
Closer cooperation between regulators and increased funding are needed for the UK to deal effectively with the human rights harms associated with the proliferation of artificial intelligence (AI) systems. On 4 February 2026, the Joint Committee on Human Rights met to discuss whether the UK's regulators have the resources, expertise and powers to ensure that human rights are protected from new and emerging harms caused by AI. While there are at least 13 regulators in the UK with remits relating to AI, there is no single regulator dedicated to the technology.
Scope3 makes second round of layoffs

Scope3 has implemented another round of redundancies, its second in less than half a year, as the adtech firm continues to reshape its business around agentic media capabilities. The company, headed by programmatic advertising pioneer Brian O'Kelley, would not confirm the number of positions impacted but said it had made additional changes across its commercial and engineering functions in response to evolving market needs.
The briefing paper pointedly cites Newsom's veto of last year's Senate Bill 7, a union-backed bill to bar employers from using AI to make employee discipline and termination decisions. In rejecting it, Newsom said the measure was overly broad and would prevent even innocuous uses of AI. Newsom's veto exemplifies his efforts, as the AI industry explodes, to satisfy both the tech industry, with which he has decades-long political ties, and those who worry about AI's societal and economic impacts.
Not me, thanks: children need the human connection and love that give life meaning. As he works towards launching SpaceX on to the stock market, in perhaps the biggest ever such share sale, the world's richest man has every incentive to talk big. Yet as Musk waxed eccentric about this robotic utopia, it was a reminder that major decisions about the direction of technological progress are being taken by a small number of very powerful men (and they are mainly men).
South Korea has launched a landmark set of laws to regulate AI before any other country or bloc (the EU's regulations are set to go into effect in stages through next year). Under Korea's AI Basic Act, companies must ensure there is human oversight for "high-impact" AI in fields like nuclear safety, drinking water, transport, healthcare, and financial uses like credit evaluation and loan screening.
Silicon Valley is already pouring tens of millions of dollars into the midterm elections taking place across the US in 2026, as the tech industry's war over AI regulation moves decisively into American politics. Technology executives, investors, and companies tied to the AI boom are funding a new network of AI-focused super PACs, which is poised to make AI a major issue in this year's state and federal election races.
One year ago today, Donald Trump was inaugurated as president of the United States. Standing alongside him that day were the leaders of the tech industry's most powerful companies, who had donated to him in an unprecedented bending of the knee. In the ensuing year, the companies have reaped enormous rewards from their alliance with Trump, which my colleague Nick Robins-Early and I wrote about last month after Trump signed an executive order prohibiting states from passing laws regulating AI.
In a few short years, artificial intelligence has transformed from what many viewed as a moonshot to the source of countless real-world benefits. At Pinterest, for instance, we're deploying AI to flip the script on social media, using it to more aggressively promote user well-being rather than the alternative formula of triggering engagement by enragement. I believe AI can benefit our 600 million users for years to come, and at a fraction of the cost that many associate with the technology.
Identifying the best global expansion strategies isn't the only step AI companies should take to accelerate business growth and reach new audiences. It may be easier than ever to reach buyers on the other side of the world, but doing so brings its own set of challenges and hiccups. For starters, AI regulations differ by region, meaning you must know and abide by each region's rules.
The AI gold rush has put new pressure on governments and other public agencies. As enterprises look to gain a competitive advantage from emerging technologies, governing bodies are eager to implement rules and regulations that protect individuals and their data. The most high-profile AI legislation is the EU's AI Act. However, global law firm Bird & Bird has developed an AI Horizon Tracker that analyzes 22 jurisdictions and presents a broad spectrum of regional approaches.
AMY GOODMAN: In The Wall Street Journal, you recently revealed that ventures launched since Trump's reelection have generated at least $4 billion in proceeds and paper wealth for the Trump family, that figure based on company statements and securities filings. In addition, you've reported how one of the family businesses, Trump Media & Technology, recently announced a $6 billion merger with a firm aiming to build the world's first viable nuclear fusion plant to power AI projects and data centers,