#cerebras-wse-3

Artificial intelligence
from Techzine Global
16 hours ago

OpenAI swaps Nvidia for Cerebras with GPT-5.3-Codex-Spark

GPT-5.3-Codex-Spark is a Cerebras-optimized, low-latency coding model that generates over 1,000 tokens/sec, enabling immediate, real-time code adjustments for developers.
Artificial intelligence
from ZDNET
1 day ago

OpenAI's new Spark model codes 15x faster than GPT-5.3-Codex - but there's a catch

Codex-Spark enables conversational, real-time coding on Cerebras WSE-3, with major latency improvements: 15x faster code generation, an 80% improvement in roundtrip latency, and a 50% improvement in time-to-first-token.
Artificial intelligence
from TechCrunch
1 day ago

A new version of OpenAI's Codex is powered by a new dedicated chip

OpenAI released GPT-5.3-Codex-Spark, a lightweight, low-latency Codex model using Cerebras WSE-3 hardware to accelerate inference and enable real-time collaboration and rapid prototyping.
Artificial intelligence
from The Register
4 weeks ago

OpenAI to serve ChatGPT on Cerebras' AI dinner plates

OpenAI will deploy 750 megawatts of Cerebras wafer-scale accelerators through 2028 in a deal worth more than $10 billion, using the chips' massive on-chip SRAM bandwidth to accelerate inference.