AI may never be as cheap as it is today

"These LLM companies are going to go public and they're going to raise prices because they have to. New models from OpenAI, Google and Anthropic are generally getting faster and cheaper, yet margins remain negative for AI labs. OpenAI is projected to burn $14 billion in 2026, up from $8 to $9 billion in 2025, demonstrating the unsustainable economics of current pricing strategies."
"Fierce competition has pushed labs to price aggressively, squeezing profits. In February, 90% of VC funding dollars went to AI startups, with OpenAI and Anthropic alone capturing 74% of VC dollars. Labs also get discounted compute through strategic partnerships sometimes described on Wall Street as circular financing, yet even with those discounts, OpenAI and Anthropic are still losing money."
"Every time you send a complex query, the AI lab is effectively losing money on the transaction. Free accounts have limited token use, which is expanded when you sign up for a standard consumer subscription. But those low-cost subscriptions are among the most heavily subsidized, indicating the unsustainable nature of current pricing models."
AI companies are seeing downward pressure on token prices thanks to efficiency gains in inference computing, yet most remain unprofitable. OpenAI projects $14 billion in losses for 2026, while Anthropic recently swung to positive margins but still faces pressure from inference costs. Fierce competition, plus strategic partnerships built on discounted compute, has pushed labs toward aggressive pricing that squeezes profits. As IPO timelines approach, these companies face mounting pressure to demonstrate profitability, which will likely mean price increases: current consumer subscriptions are heavily subsidized, and labs effectively lose money on complex queries. Rising usage, growing corporate spending, and the need to show profits before going public all suggest the era of cheap AI services may be ending.
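The subsidy claim comes down to simple arithmetic: a flat monthly subscription versus per-token inference costs that scale with usage. A minimal sketch of that back-of-envelope calculation, where every per-user figure (query volume, tokens per query, cost per million tokens, the $20 price point) is a hypothetical placeholder and not a number from the article:

```python
# Back-of-envelope unit economics for a flat-rate AI subscription.
# All inputs below are hypothetical illustration values, not figures
# reported in the article.

def monthly_margin(subscription_price: float,
                   queries_per_month: int,
                   tokens_per_query: int,
                   cost_per_million_tokens: float) -> float:
    """Revenue minus inference cost for one subscriber in one month."""
    tokens = queries_per_month * tokens_per_query
    inference_cost = tokens / 1_000_000 * cost_per_million_tokens
    return subscription_price - inference_cost

# Hypothetical heavy user: 600 complex queries a month at ~50k tokens
# each (long context plus reasoning), assuming $1.50 per million tokens.
heavy = monthly_margin(20.00, 600, 50_000, 1.50)

# Hypothetical light user: 50 short queries at ~5k tokens each.
light = monthly_margin(20.00, 50, 5_000, 1.50)

print(f"heavy user margin: ${heavy:.2f}")   # negative: the lab subsidizes
print(f"light user margin: ${light:.2f}")   # positive: light use is profitable
```

Under these assumed numbers the heavy user costs the lab more in inference than the subscription brings in, which is the sense in which complex queries lose money at a flat price; raising prices or metering usage closes the gap.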
Read at Axios