Discussion about this post

JC · Apr 8 (Edited)

Helpful analysis. Regarding "The plain number [of compute for remote access] does not properly explicate the compute’s usefulness; due to latency requirements, Chinese actors likely cannot utilize this compute pool for large-scale training purposes" -- I wouldn't think latency between China and overseas datacenters is a barrier for training, and even for inference it shouldn't be the main blocker for most use cases. Regulatory restrictions on exporting Chinese user data overseas are likely the bigger barrier to training models abroad.

JC

This government speech (https://www.nda.gov.cn/sjj/jgsz/jld/llh/llhldhd/0323/20260323202204680553721_pc.html) claims that China's AI compute reached 1,590,000 PFlops by the end of 2025. Does this affect your estimate?
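For scale, the claimed figure can be converted to FLOP/s directly (a minimal sketch; the 1,590,000 PFlops number is taken at face value from the speech, and PFlops is read in its usual sense of 10^15 floating-point operations per second):

```python
# Convert the claimed aggregate AI compute from PFlops to FLOP/s.
# 1 PFlops = 1e15 floating-point operations per second.
claimed_pflops = 1_590_000
flops_per_second = claimed_pflops * 1e15
print(f"{flops_per_second:.2e} FLOP/s")  # 1.59e+21 FLOP/s, i.e. ~1.59 zettaFLOPS
```

Note the speech does not say which numeric precision (FP16, FP8, etc.) the figure is quoted in, which matters when comparing against other compute estimates.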

