"Broad preemption of state protections is particularly ill-advised because constantly evolving emerging technologies, like AI, require agile regulatory responses that can protect our citizens," they write in a Tuesday memo. "This regulatory innovation is best left to the 50 states so we can all learn from what works and what does not. New applications for AI are regularly being found for healthcare, hiring, housing markets, customer service, law enforcement and public safety, transportation, banking, education, and social media."
"I want to thank [the PAC] for their partnership in raising up the issue of how we regulate an incredibly powerful technology so that the future is one that benefits all of us."
The Trump administration and congressional Republicans are trying again to eliminate state-level AI regulations in favor of a federal standard. The plan faces opposition from many state governments and civil-society organizations, while AI vendors have welcomed it.
* "If you kill this witness, the case will be dismissed," an attorney advised. Man, these MPRE hypos are getting super easy. [Toronto Star]
* Trump signs bill to release the Epstein files. Unclear if he drew a woman's curves around it before signing this time. [Reuters]
* Texas governor demands action on Sharia Law, so you know it's a bad news cycle for him. [KXAN]
The speed of AI development gets most of the headlines, but the law is running a race of its own. Legislators and regulators are releasing new rules at a pace that can surprise even the most seasoned compliance teams. For in-house counsel, this creates a constant challenge: how to give sound, forward-looking advice when the ground under your feet is shifting.
Speaking during the inquiry's second evidence session on 29 October 2025, expert witnesses told Parliament's Joint Committee on Human Rights that, as it stands, the UK's "uncritical and deregulatory" approach to AI will fail to deal with the clear human rights harms presented by the technology. These include harms related to surveillance and automated decision-making, which can impact both collective and individual rights to privacy, non-discrimination, and freedom of assembly, especially given the speed and scale at which the technology operates.
Once a fringe curiosity, the deepfake economy has grown to become a $7.5 billion market, with some predictions projecting that it will hit $38.5 billion by 2032. Deepfakes are now everywhere, and the stock market is not the only part of the economy that is vulnerable to their impact. Those responsible for the creation of deepfakes are also targeting individual businesses, sometimes with the goal of extracting money and sometimes simply to cause damage.
"Regulation is going to have to be self-regulation," said Sir Martin Sorrell, founder and executive chairman of S4 Capital, this week at the Fortune Global Forum in Riyadh. "The cat is out of the bag. We've missed the Oppenheimer moment. Many people compare it to the control of nuclear weapons."
We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.
Moving forward, California will require any companion bot platforms (including ChatGPT, Grok, Character.AI, and the like) to create and make public "protocols to identify and address users' suicidal ideation or expressions of self-harm." They must also share "statistics regarding how often they provided users with crisis center prevention notifications to the Department of Public Health," the governor's office said. Those stats will also be posted on the platforms' websites, potentially helping lawmakers and parents track any disturbing trends.
A union between human and machine? Not on this Ohio Republican's watch. A bill introduced last month by Buckeye state representative Thaddeus Claggett, from Licking County, would block AI systems from having legal personhood by declaring them to be "nonsentient entities," NBC4 News reports. It would also mean that AIs wouldn't be able to marry a human or another AI.
This week in Other Barks & Bites: USPTO Acting Commissioner for Patents Valencia Martin Wallace sends an internal email to staff indicating that 1% of the agency's workforce will be laid off; U.S. sales of electric vehicles hit a record during the third quarter of 2025 just as federal subsidies for EV purchases ended; the Federal Circuit nixes US Inventor's pursuit of associational standing to sue the USPTO for denying its petition for rulemaking on discretionary denial criteria for AIA trials;
The Transparency in Frontier Artificial Intelligence Act, or S.B. 53, requires the most advanced A.I. companies to report safety protocols used in building their technologies and forces the companies to report the greatest risks posed by their technologies. The bill also strengthens whistle-blower protections for employees who warn the public about potential dangers the technology poses. State Senator Scott Wiener, a Democrat from San Francisco, who proposed the legislation, said the law was crucial to fill a vacuum to protect consumers from potential harms from A.I.