
GPT-5.3-Codex-Spark is described by the company as a "smaller version" of that model, one that is designed for faster inference. To power that inference, OpenAI has brought in a dedicated chip from its hardware partner Cerebras, marking a new level of integration in the company's physical infrastructure. The partnership between Cerebras and OpenAI was announced last month, when OpenAI said that it had reached a multi-year agreement with the firm worth over $10 billion.
Spark, which OpenAI says is designed for swift, real-time collaboration and "rapid iteration," will be powered by Cerebras' Wafer Scale Engine 3. The WSE-3 is Cerebras' third-generation wafer-scale megachip, decked out with 4 trillion transistors. OpenAI describes the new lightweight tool as a "daily productivity driver, helping users with rapid prototyping" rather than the longer, heavier tasks that the original 5.3 is designed for. Spark is currently available as a research preview for ChatGPT Pro users in the Codex app.
Read at TechCrunch