
"The research paper written by Jaime Sevilla, Hannah Petrovic and Anson Ho, suggests that while running an AI model may generate enough revenue to cover its own R&D costs, any profit is outweighed by the cost of developing the next big model. So, it said, "despite making money on each model, companies can lose money each year." The paper seeks to answer three questions: How profitable is running AI models? Are models profitable over their lifecycle? Will AI models become profitable?"
"To answer question one, researchers created a case study they called the GPT-5 bundle, which they said included all of OpenAI's offerings available during GPT-5's lifetime as the flagship model, including GPT-5 and GPT-5.1, GPT-4o, ChatGPT, and the API, and estimated the revenue from and costs of running the bundle. All numbers gathered were based on sources of information that included claims by OpenAI and its staff, and reporting by media outlets, primarily The Information, CNBC, and the Wall Street Journal."
Any revenue surplus from a deployed AI model can be outweighed by the expense of developing the next one. Revenue from the GPT-5-era bundle of GPT-5, GPT-5.1, GPT-4o, ChatGPT, and the API totaled $6.1 billion for August through December 2025, while the bundle's inference compute costs were estimated at $3.2 billion, based on public 2025 spending estimates and an assumed compute allocation. The paper identifies four main cost categories, with inference compute listed first.
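A back-of-the-envelope check of those two figures, shown below; the $6.1 billion revenue and $3.2 billion inference-cost numbers come from the summary above, while the gross-margin calculation is an illustration added here, not a result stated by the paper.

```python
# Back-of-the-envelope check of the bundle figures cited above. The revenue and
# inference-cost inputs are from the summary; the margin math is illustrative.

bundle_revenue_b = 6.1   # GPT-5 bundle revenue, Aug-Dec 2025, in $ billions
inference_cost_b = 3.2   # estimated inference compute cost, in $ billions

gross_profit_b = bundle_revenue_b - inference_cost_b
gross_margin = gross_profit_b / bundle_revenue_b

print(f"Gross profit on inference: ${gross_profit_b:.1f}B")  # roughly $2.9B
print(f"Gross margin on inference: {gross_margin:.0%}")      # roughly 48%
```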
Read at InfoWorld