OpenAI · Feature Updates · Official Blog
OpenAI launches Codex-Spark for ultra-low latency
The model is aimed at faster iteration, with less waiting between small code edits.
Key Points
- Positioned as a smaller GPT-5.3-Codex variant
- Claims 1000+ tokens/sec inference speed
- Research preview for ChatGPT Pro users
- 128k context window
OpenAI released GPT‑5.3‑Codex‑Spark as a research preview. It targets near-instant interactive coding, served from a low-latency tier built on Cerebras hardware. The emphasis shifts coding assistance toward rapid, iterative edits and tight feedback loops.
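To get a feel for what 1000+ tokens/sec means in practice, here is a rough back-of-envelope sketch. The edit sizes and the fixed overhead figure are illustrative assumptions, not published benchmarks, and `turnaround_seconds` is a hypothetical helper, not part of any OpenAI API.

```python
def turnaround_seconds(output_tokens: int,
                       tokens_per_sec: float = 1000.0,
                       overhead_sec: float = 0.2) -> float:
    """Rough response time: assumed fixed round-trip overhead
    plus generation time at the claimed decode speed."""
    return overhead_sec + output_tokens / tokens_per_sec

# Illustrative edit sizes (assumed, not measured):
for label, tokens in [("one-line fix", 50),
                      ("small diff", 300),
                      ("full function", 1200)]:
    print(f"{label:>13}: ~{turnaround_seconds(tokens):.2f}s")
```

At these speeds even a 1200-token rewrite comes back in well under two seconds, which is the regime where an assistant starts to feel interactive rather than batch-like.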