AI is already shaping the future. So why do so few of us get to decide what that future will be?
Briefly

A small group of Silicon Valley executives are making decisions that will shape billions of lives with limited public awareness. The White House published America's AI Action Plan calling for revising the NIST AI Risk Management Framework to remove references to misinformation, diversity, equity, inclusion, and climate change. The plan seeks to roll back prior AI orders, loosen oversight, and fast-track infrastructure and energy for data centers, recasting AI as a geopolitical race to win. When policy prioritizes speed and dominance, accountability and stewardship are deprioritized. Europe sets guardrails first, with the EU AI Act phasing in risk-based obligations through 2026. Meanwhile, compute, models, and distribution remain concentrated among a handful of firms, which will largely determine who benefits from AI.
In July, the White House published "America's AI Action Plan," a 28-page document that reads like an industrial policy for a new arms race. Buried in Pillar I is a line that tells you exactly where U.S. policy is headed: Revise the NIST AI Risk Management Framework to eliminate references to misinformation, diversity, equity, inclusion, and climate change. When governments start crossing out those words by design, it's fair to ask who is setting the terms of our technological future, and for whose benefit.
This is more than rhetoric. The same plan boasts of rolling back the prior administration's AI order, loosening oversight, and fast-tracking infrastructure and energy for data centers. It recasts artificial intelligence primarily as a geopolitical race to "win," not as a societal system to govern. It's a perspective less about stewardship and more about deal-making, a style of governance that treats public policy like a term sheet. That framing matters: When the policy goal is speed and dominance, accountability becomes a "nice-to-have."
Europe has chosen a completely different sequence: Set guardrails first, then scale. The EU AI Act entered into force in August 2024 and phases in obligations through 2026, with enforcement designed around risk. Imperfect? Sure. But the message is unambiguous: Democratic institutions, not just corporate PR, should define acceptable uses, disclosures, and liabilities before the technology is everywhere. Meanwhile, the center of gravity in AI sits with a handful of firms that control compute, models, and distribution.
Read at Fast Company