Google Announces TPU 8t for Large-Scale Pre-Training
Accelerates large-scale AI training at lower cost, speeding up R&D.
Key Points
- SparseCore accelerates irregular memory access
- FP4 doubles MXU throughput (see the sketch after this list)
- Virgo Network delivers 4x DCN bandwidth
- TPUDirect speeds up data access
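The FP4 claim is worth unpacking: halving the bits per operand lets the matrix units retire roughly twice the multiply-accumulates per cycle while the accumulator stays wide. Below is a minimal JAX sketch of the idea, using a linear 16-level grid as a stand-in for FP4's nonuniform e2m1 format; `fake_quantize_fp4` and the per-tensor scaling scheme are illustrative assumptions, not the TPU 8t API.

```python
import jax
import jax.numpy as jnp

def fake_quantize_fp4(x, levels=16):
    """Round x onto a symmetric 16-level grid (a linear stand-in for FP4)."""
    scale = jnp.max(jnp.abs(x)) / (levels // 2 - 1)  # per-tensor scale
    q = jnp.clip(jnp.round(x / scale), -(levels // 2), levels // 2 - 1)
    return q * scale  # dequantize back to float for the reference matmul

def low_precision_matmul(a, b):
    # Quantize both operands to 4-bit resolution; accumulate in FP32,
    # mirroring how narrow-precision MXUs keep a wide accumulator.
    return jnp.dot(fake_quantize_fp4(a), fake_quantize_fp4(b),
                   preferred_element_type=jnp.float32)

a = jax.random.normal(jax.random.key(0), (128, 256))
b = jax.random.normal(jax.random.key(1), (256, 64))
print(low_precision_matmul(a, b).shape)  # (128, 64)
```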
Google Cloud announced TPU 8t, an accelerator optimized for pre-training that combines SparseCore, FP4 matrix units, and the 4x-faster Virgo Network. It scales to 9,600-chip superpods for training world models such as Genie 3, letting AI developers train at massive scale more efficiently.
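For a sense of how a training job targets a pod of that size, here is a minimal data-parallel sketch using `jax.sharding`; the mesh shape, axis name, and batch size are placeholders, since a real superpod job would build its mesh from the actual 9,600-chip topology rather than whatever devices are visible locally.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = np.array(jax.devices())            # every chip visible to this process
mesh = Mesh(devices, axis_names=("data",))   # 1-D mesh as a stand-in for pod topology

# Shard the global batch along the "data" axis, one slice per chip.
batch = jnp.ones((len(devices) * 8, 512))
batch = jax.device_put(batch, NamedSharding(mesh, P("data", None)))

@jax.jit
def step(x):
    # Each chip computes on its shard; XLA inserts the cross-chip collectives.
    return jnp.mean(x ** 2)

print(step(batch))  # a scalar reduced across all shards
```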