Intelligence Portal · verified 2026-05-05

中国人工智能芯片产业 China's AI Silicon
at the Sanctions Frontier

A data-driven map of China's 2024–2026 AI semiconductor landscape: who is actually shipping, who is catching up, who is bluffing. Compiled from primary filings (SSE, SEHK), company annual reports, Reuters, Bloomberg, SCMP, and the Bureau of Industry and Security Entity List.

// 20+ companies profiled // 3 export-control waves // $47B Big Fund III tracked // last verified: 2026-05-05
DOMESTIC AI CHIP MKT 2025 · ~¥1T (est. range $110–150B) · IDC, CCID 2025
BIG FUND PHASE III · ¥344B, registered 24-May-2024 · Reuters, 27-May-2024
NVIDIA H100 PARITY GAP · Ascend 910C reaches ~60% of H100 · SemiAnalysis, 2024
ENTITY-LISTED CHIP DESIGNERS · BIS-sanctioned since 2019 · BIS Entity List 2022–2024
SMIC 7NM CAPACITY (WPM) · n/d thousand wafers/month, est. 2025 · TrendForce 2024
CAMBRICON 2024 SHARE RISE · +383%, top-performing China stock 2024 · SCMP, Dec 2024
01 / EXEC

Executive Summary 总览

What to know in 90 seconds.

China's AI semiconductor industry in 2026 is a paradox: simultaneously more successful and more constrained than anyone predicted in 2022. Huawei Ascend 910C-based clusters now train state-of-the-art Chinese foundation models end-to-end. Cambricon posted its first quarterly profit. Biren went public in Hong Kong on 2 January 2026. Moore Threads jumped 468% on its Shanghai debut in December 2025.

Yet the structural bottleneck is the same as it was in October 2022: high-bandwidth memory (HBM), leading-edge EUV lithography, and the software moat around CUDA. SMIC's multi-patterned DUV 7nm node works — but it's slow, expensive, and yield-constrained. CXMT and YMTC are climbing the curve but remain 2–3 generations behind Micron / Samsung / SK Hynix.

The near-term trajectory (2025–2027) is one of import-substitution at mid-performance tiers, not global leadership. Chinese chips will dominate Chinese data centres by regulatory fiat long before they win a single Western customer. The real contest is whether that captive market is large enough to fund the R&D pipeline required for long-term parity.

"The gap is no longer fabrication — it's packaging, HBM, and software. China can build the transistors. It still can't buy or build the surrounding stack at competitive cost." — AZR synthesis, 2026-05

Our probability-weighted base case: Huawei Ascend reaches 75–80% of NVIDIA next-gen training performance by 2027 for Chinese-domestic workloads, with a 3–5× price gap internationally that limits exports. Cambricon and Biren split the #2 slot. Moore Threads remains a gaming/edge play. Hygon's x86 franchise hits a ceiling as domestic substitutes mature.

02 / POLICY

US Export Controls Timeline 出口管制

Four years of escalating technology restrictions — the defining constraint on every company below.
2019 · MAY

Huawei added to Entity List

Opening salvo. HiSilicon cut off from TSMC fabrication once the May 2020 foreign direct product rule extension closed the foundry loophole. Kirin smartphone chip pipeline effectively halted for three years.

2022 · OCT 7

Advanced Computing Rules ("Oct 7 Rules")

Blanket restriction on export of NVIDIA A100/H100 to China; 18nm DRAM, 128-layer NAND, and 14/16nm logic fab equipment controlled. Created the "A800/H800" workaround SKUs. The rule that created the modern Chinese AI chip industry.

BIS press release
2023 · OCT 17

Oct 7 Rules v2 — Closing the workarounds

A800/H800/L40S banned. Performance-density and interconnect thresholds replaced raw FLOPS to prevent redesigned-down chips. Biren, Moore Threads, and 13 others added to Entity List. China's cloud GPU cluster plans set back 12–18 months.

BIS · Oct 2023 rule
2024 · DEC 2

Third package — HBM + 140 entities

HBM2e/HBM3 and downstream AI accelerators blacklisted. 140 additional Chinese entities added, including fab tool suppliers and CXMT-linked companies. The HBM ban is the single most damaging move — directly caps LLM training memory bandwidth for every Chinese player.

BIS · Dec 2024 rule
2025 · JUL

Southeast Asia smuggling controls

Thailand and Malaysia brought into AI-chip diversion rules after reports of large NVIDIA H100/H200 volumes transshipping to China via third-party integrators.

2025 · DEC

China EUV prototype reported

Reuters reports China has completed a prototype EUV lithography machine in Shenzhen; first working chips expected 2028–2030. Timeline consistent with ASML's own initial NXE development curve (~5 years prototype-to-volume).

03 / LEADERBOARD

AI Training Chip Leaderboard 算力排行

All publicly traded and major private Chinese AI training accelerator designers.
Company | HQ | Founded | Flagship AI Chip | Node (nm) | vs H100 | Fab | Rev-24 ($M) | Listing | Sanction Status
Huawei HiSilicon (Ascend line) | Shenzhen | 1991 | Ascend 910C | 7 | ~60% | SMIC N+2 | ~12,000 est. | Huawei (private) | ENTITY 2019
Cambricon 寒武纪 | Beijing | 2016 | MLU590 / Siyuan 590 | 7 | ~40–50% | TSMC 7 (legacy) / SMIC | ~165 | SSE 688256 | ENTITY 2022
Biren Technology 壁仞 | Shanghai | 2019 | BR100 / BR104 | 7 | ~50% (unsanctioned BR100) · derated post-2023 | TSMC 7 (pre-ban) | n/d | SEHK 6082 (IPO Jan-2026) | ENTITY 2023
Moore Threads 摩尔线程 | Beijing | 2020 | MTT S4000 (KUAE) | 12 | ~25–30% | TSMC 12 (legacy) / SMIC | ~70 | SSE 688795 (IPO Dec-2025) | ENTITY 2023
Enflame (Suiyuan) 燧原 | Shanghai | 2018 | CloudBlazer L600 | 12 | ~20–30% | GlobalFoundries/SMIC | n/d | Private · Tencent-backed | Monitored
Iluvatar CoreX 天数智芯 | Shanghai | 2015 | Tiangai BI-V150 | 7 | ~20–30% | TSMC (legacy) | n/d | Private | Monitored
Hygon Info Tech 海光 | Beijing/Tianjin | 2014 | DCU Z100 (HPC/AI) | 12/7 | ~30% (HPC-focused) | SMIC / TSMC legacy | ~830 (2023) | SSE 688041 | ENTITY 2019
Horizon Robotics 地平线 | Beijing | 2015 | Journey 6 (edge/ADAS) | 7 | N/A — edge AI only, 560 TOPS | TSMC / Samsung | ~330 | SEHK 9660 (Oct-2024 IPO) | Unsanctioned
Black Sesame 黑芝麻智能 | Shanghai/Wuhan | 2016 | Huashan A2000 (autos) | 7 | N/A · 250 TOPS edge | TSMC | ~75 | SEHK 2533 (IPO Aug-2024) | Unsanctioned
Loongson 龙芯 | Beijing | 2001 | 3A6000 / 3C6000 (LoongArch) | 14/12 | ~10% (HPC/CPU, AI secondary) | SMIC | ~70 | SSE 688047 | Unsanctioned
Zhaoxin 兆芯 | Shanghai | 2013 | KaiXian KX-7000 (x86 CPU) | 16 | ~10% (CPU, not AI-first) | TSMC legacy / SMIC | ~140 est. | Private · VIA JV | Unsanctioned
UNISOC 紫光展锐 | Shanghai | 2018 (spin) | T820 / S8000 SoC | 6 | N/A — mobile SoC w/ NPU | TSMC | ~2,100 est. | Private · Tsinghua Unigroup | Unsanctioned
Lisuan Tech 砺算 | Shanghai | 2021 | G100 GPU (gaming + AI) | 6 | ~15% (early) | SMIC | pre-rev | Private | Unsanctioned
Vastai (瀚博) | Shanghai | 2018 | SV102 (inference) | 7 | ~25% inference | TSMC legacy | n/d | Private | Monitored
Baidu Kunlun | Beijing | 2011 (spun 2021) | Kunlun P800 (3rd gen) | 7 | ~50% BF16 inference | SMIC N+2 | internal | Subsidiary of Baidu | Unsanctioned (co.)
Alibaba T-Head 平头哥 | Hangzhou | 2018 | Hanguang 800 / PPU | 12/7 | ~40% inference | TSMC / SMIC | internal | Alibaba subsidiary | Unsanctioned
MetaX (沐曦) | Shanghai | 2020 | MXC500 / Xi-Yun | 7 | ~30% training | TSMC 7 legacy | pre-rev | Private (IPO tutoring 2025) | Monitored
ByteDance (in-house) | Beijing | internal | Custom ASIC (2024 reports) | 7 | early | TSMC / SMIC | internal | ByteDance | Unsanctioned

Performance estimates synthesised from SemiAnalysis, Tom's Hardware, HPCwire, Reuters, SCMP, and vendor specs. "vs H100" is effective real-world training throughput on LLM workloads (FP16/BF16 including memory-bandwidth effects), not peak TOPS. Entity List status: BIS.

04 / MEMORY

Memory Chip Players 存储芯片

Memory — not logic — is the real bottleneck for AI training. HBM restrictions hit harder than any GPU ban.
// HBM = High-Bandwidth Memory (critical for LLM training)
Company | HQ | Founded | Memory Segment | Leading Product | Node | Cap (Kwpm) | Global Share | Sanction
YMTC 长江存储 | Wuhan | 2016 | 3D NAND | X3-9070 · 232L QLC (Xtacking 3.0) | 2x nm | ~100 | ~6% | ENTITY 2022
CXMT (ChangXin) 长鑫存储 | Hefei | 2016 | DRAM (DDR4/DDR5/LPDDR5) | DDR5-6400 · LPDDR5x 16Gb | 17 nm | ~240 (Q4-25) | ~10% target | Monitored · March 2024 rule
GigaDevice 兆易创新 | Beijing | 2005 | NOR Flash · MCU · Niche DRAM | GD25 SPI NOR · GD5F NAND | 55/45 nm | fabless | ~20% NOR global | Unsanctioned
JHICC (Fujian Jinhua) | Jinjiang | 2016 | DRAM (niche) | 25nm DDR4 (recovered) | 25 nm | ~25 | <1% | ENTITY 2018
Wuhan Xinxin (XMC) | Wuhan | 2006 | NOR / Niche | ZFlash / NOR | 65/45 nm | ~20 | ~2% NOR | Unsanctioned
Xi'an UniIC | Xi'an | 2015 | DRAM | DDR3/DDR4 niche | 2x nm | ~10 | <1% | Unsanctioned
CXMT HBM project | Hefei / Wuhan | 2024 (initiative) | HBM (HBM2/HBM2e target) | Pre-volume pilot line | TBD | pilot | 0% | Dec 2024 impact

Why HBM is the real chokepoint

Every modern LLM trainer needs HBM3/HBM3e next to the logic die: H100 uses 80 GB HBM3 at 3.35 TB/s; H200 uses 141 GB HBM3e at 4.8 TB/s. Without equivalent memory, even a perfectly matched logic die can't feed its tensor cores. China has no volume HBM producer as of 2026. SK Hynix, Samsung, and Micron supply 100% of global HBM. The December 2024 BIS rule was designed precisely to keep it that way until domestic CXMT-led HBM2e pilots mature (earliest 2026–2027).

CXMT's path runs through standard DDR5 → LPDDR5x → HBM2e stacking. YMTC's NAND is world-competitive on layer count (232-layer at Xtacking 3.0, matching Micron) but NAND doesn't replace HBM. The HBM gap, not the logic gap, is why China's best training clusters still rely on stockpiled NVIDIA silicon.
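The "can't feed its tensor cores" claim is roofline arithmetic. A minimal sketch: the H100 figures match this report's comparison matrix, but the bandwidth-starved variant is hypothetical, and the 200 FLOPs/byte arithmetic intensity is an assumed illustrative workload, not a measurement.

```python
def attainable_tflops(peak_tflops: float, mem_bw_tbps: float,
                      flops_per_byte: float) -> float:
    """Roofline bound: throughput is capped by the lesser of compute peak
    and (memory bandwidth x arithmetic intensity)."""
    return min(peak_tflops, mem_bw_tbps * flops_per_byte)

INTENSITY = 200  # assumed FLOPs/byte for a bandwidth-heavy LLM training step

# H100-class part: 989 peak FP16 TFLOPS fed by 3.35 TB/s HBM3
h100_bound = attainable_tflops(989, 3.35, INTENSITY)   # -> 670.0, bandwidth-bound

# Same logic die, starved down to HBM2e-class 1.6 TB/s
hbm2e_bound = attainable_tflops(989, 1.6, INTENSITY)   # -> 320.0, bandwidth-bound

# Identical transistors, roughly half the usable throughput: the memory,
# not the logic, sets the ceiling.
ratio = hbm2e_bound / h100_bound
```

At this intensity neither configuration ever touches its compute peak, so the throughput ratio collapses to the bandwidth ratio, which is the whole point of the HBM controls.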

05 / MATRIX

Head-to-Head Comparison Matrix 对比矩阵

Top training-chip competitors against NVIDIA H100/H200 across node, compute, memory, interconnect, software, and export status.
// all figures vendor spec or best public estimate
Chip | Node | FP16 TFLOPS | Memory | BW (TB/s) | Interconnect | Software | TDP (W) | Export Status
NVIDIA H100 SXM | 4N (TSMC) | 989 | 80GB HBM3 | 3.35 | NVLink 900 GB/s | CUDA + cuDNN + TRT | 700 | Banned to China
NVIDIA H20 (China SKU) | 4N | 148 | 96GB HBM3 | 4.0 | NVLink 900 GB/s | CUDA full | 400 | Further restricted 2025
Huawei Ascend 910C | 7 (SMIC N+2) | ~800 (dual-die) | 128GB HBM2e (stockpile) | ~3.2 | HCCS 392 GB/s | CANN + MindSpore | ~400 | Entity
Huawei Ascend 910B | 7 | ~320 | 64GB HBM2e | ~1.6 | HCCS 200 GB/s | CANN + MindSpore | ~310 | Entity
Cambricon MLU590 | 7 | ~512 | 96GB HBM2e | ~2.4 | MLU-Link | BANG SDK / Neuware | ~350 | Entity 2022
Biren BR100 | 7 (TSMC pre-ban) | ~1024 (vendor peak) | 64GB HBM2e | ~1.6 | BLink 192 GB/s | BIRENSUPA (emerging) | ~550 | Entity 2023
Moore Threads MTT S4000 | 12 | ~100 | 48GB GDDR6 | ~0.77 | MTLink 240 GB/s | MUSA · CUDA-compat hints | ~350 | Entity 2023
Hygon DCU Z100 | 7 | ~100 | 64GB HBM2 | ~1.0 | PCIe 5.0 | DTK (ROCm fork) | ~350 | Entity 2019
Baidu Kunlun P800 | 7 (SMIC N+2) | ~345 | 64GB HBM2e | ~1.6 | XPU-Link | PaddlePaddle native | ~400 | Co. not listed
Iluvatar BI-V150 | 7 (TSMC) | ~147 | 32GB HBM2 | ~1.2 | PCIe 4.0 | Iluvatar SDK | ~300 | Monitored

Dense peak TFLOPS are vendor-published except where "~" marks an AZR estimate. Real-world training throughput is typically 40–65% of peak due to memory-bandwidth and interconnect bottlenecks. Chinese chips lose disproportionately here because of HBM supply constraints.

06 / COMPANIES

Company Deep Dives 公司解析

The 8 critical companies.

Huawei HiSilicon

Shenzhen · 1991 · Huawei subsidiary · Entity 2019
Flagship
Ascend 910C · Ascend 310 · Kirin X90 (PC)
Node
SMIC 7nm N+2 (multi-pat DUV)
Performance
910C ≈ 60% H100 effective training; 910B ≈ 320 FP16 TFLOPS
Customers
Baidu, iFlytek, China Mobile/Telecom, ByteDance pilots, gov/SOE cloud
Software
CANN + MindSpore · full PyTorch bridge since 2024
Revenue 24
Huawei group $118B (+22% YoY); AI chip line ~$12B est.
Bull
Only Chinese company with full stack: chip + server + cloud + framework + foundation model (Pangu). Gov-mandated tenders push adoption.
Bear
Depends on finite HBM stockpile + SMIC yield. If HBM2e imports end, 910C production falls off a cliff by 2027.
Key risks & sources

Risks: SMIC 7nm yield (est. 20–30% vs TSMC 70%+), HBM supply, CANN ecosystem maturity vs CUDA, packaging capacity (CoWoS equivalent).

Sources: Reuters, SemiAnalysis Oct-2023, SCMP.

Cambricon Technologies

Beijing · 2016 · SSE 688256 · Entity 2022
Flagship
MLU590 / Siyuan 590 · MLU370
Node
7nm (TSMC legacy) · transitioning to SMIC N+2
Performance
~40–50% H100 effective training
Customers
China Mobile, iFlytek, Baidu (partial), provincial intelligent-compute centres
Software
BANG SDK · Neuware · PyTorch/TF plugins
Revenue
CN¥1.18B (2024); first quarterly profit Q4-2024; maiden dividend Feb-2026
Bull
Stock up 383% in 2024 (best China performer); gov preference for domestic substitutes; first profit proves model works.
Bear
7 consecutive loss years pre-2024; 70%+ R&D/rev ratio unsustainable if domestic demand plateaus; still fabless, SMIC-dependent.
Key risks & sources

Sources: Cambricon 2024 annual report (SSE), SCMP, China Money Network.

Biren Technology

Shanghai · 2019 · SEHK 6082 · Entity 2023
Flagship
BR100 (77B transistors, 2 PFLOPS claimed) · BR104
Node
TSMC 7nm (pre-sanction); derated variant post-ban
Performance
Original ≈ H100 peak FP16 vendor; sanctioned variant ≈ 50%
Customers
China Mobile, China Telecom intelligent-computing centres
Funding
¥1.5B raised Jun-2025; $280M Guangzhou pledge 2023; HKEx IPO 2-Jan-2026
Bull
Strongest pure-play GPU design outside Huawei; IPO war chest; patient state capital.
Bear
Lost TSMC access; co-founder Xu departed 2023; SMIC ~2 generations behind; derated silicon.
Key risks & sources

Sources: Reuters, Bloomberg, Tom's Hardware.

Moore Threads

Beijing · Oct 2020 · SSE 688795 · Entity 2023
Flagship
MTT S4000 (KUAE server) · MTT S80 (consumer)
Node
12nm (TSMC legacy); pathfinding 6nm via SMIC
Performance
S4000 ≈ 100 FP16 TFLOPS · ~25–30% H100 effective
Customers
Gaming OEMs, enterprise workstations, small AI clusters
IPO
Dec-2025 SSE raised $1B+, +468% debut. DeepSeek founder Liang Wenfeng top institutional investor.
Bull
Only Chinese GPU with mature DirectX/Vulkan drivers; MUSA has early CUDA-compat surface; IPO fuels MUSA 2.0.
Bear
12nm structurally uncompetitive for AI training; consumer GPU margins thin; KUAE clusters small vs Ascend/MLU590.
Key risks & sources

Sources: SCMP Dec-2025, Tom's Hardware.

Hygon Information Technology

Beijing/Tianjin · 2014 · SSE 688041 · Entity 2019
Flagship
Dhyana CPU (Zen 1 derivative) · DCU Z100 (HPC/AI GPGPU)
Node
7nm TSMC legacy / 12nm SMIC
Performance
DCU Z100 ~100 FP16 TFLOPS; strong FP64 HPC; AI secondary
Customers
Sugon (parent, 28% stake), gov HPC, China Mobile Cloud
Rev 23
CN¥6.01B · net CN¥1.70B · 70% R&D/sales
Bull
x86 = inherited software moat; profitable unlike peers; HPC/super-computing play.
Bear
Pre-2019 AMD IP ages; no new Zen licenses possible; DCU line still small vs Ascend/MLU590.

Horizon Robotics

Beijing · 2015 · SEHK 9660 · Unsanctioned
Flagship
Journey 6 (ADAS/AV) · Sunrise (surveillance)
Node
TSMC 7nm / Samsung
Performance
J6-P 560 TOPS · J6-M 128 TOPS (edge only)
Customers
BYD, Chery, VW (Carizon JV $2.3B); ~49% China ADAS share (2023)
Rev 24
CN¥2.38B · IPO Oct-2024 raised $696M · 10M+ shipments target 2025
Bull
Unsanctioned TSMC access; #1 China ADAS; VW JV = global credibility; NEV tailwind.
Bear
Not a training play; faces Mobileye + NVIDIA Drive + internal OEM silicon; auto-cycle exposure.

YMTC (Yangtze Memory)

Wuhan · 2016 · state-owned IDM · Entity 2022
Flagship
Xtacking 3.0 · X3-9070 232-layer QLC NAND
Node
Xtacking (logic + memory separate wafers)
Performance
Layer count parity with Micron/SK Hynix
Customers
ZhiTai own brand, Lexar, HP, Acer, Kingston; Apple evaluated 2022
Capacity
~100K wpm · target 200K wpm
Bull
Technical parity on NAND layers; strong NAND demand from AI storage; patient state capital.
Bear
Entity List blocks US tools; Tsinghua Unigroup debt legacy; NAND is lower-margin than HBM.

CXMT (ChangXin Memory)

Hefei · 2016 · private, state-backed · Monitored Mar-2024
Flagship
DDR5-6400 · LPDDR5x 16Gb (2025)
Node
17nm (targeting 15/14nm)
Performance
DDR5 parity on low-end; HBM2e pilot 2026
Customers
Lenovo, Xiaomi, Huawei, module makers
Capacity
720K wafers/quarter end-2025 · ~10% global DRAM target 2026
Bull
10% global DRAM share = profitable scale; DDR5→HBM2e is engineering extension; central to HBM nationalisation.
Bear
Formal Entity add possible; EUV cutoff prevents <14nm; patent risk from Micron.
07 / SOFTWARE

Software Ecosystem Analysis 软件生态

The CUDA moat is the single biggest moat in computing. Here's how the alternatives stack up.
Stack | Vendor | Target HW | Framework support | CUDA-compat? | Maturity
CUDA + cuDNN + TRT | NVIDIA | All NVIDIA GPUs | All | Native | Mature · 18+ years
CANN + MindSpore | Huawei | Ascend 910/310 | MindSpore · PyTorch bridge (2024) · TF | Partial via torch-npu | Production · growing
BANG SDK / Neuware | Cambricon | MLU270/370/590 | PyTorch, TF plugins | Translation layer | Production
BIRENSUPA | Biren | BR100/104 | PyTorch (emerging) | HGAI translator | Early
MUSA | Moore Threads | MTT line | PyTorch, TF, DeepSpeed | MUSIFY translator | Early · expanding post-IPO
DTK (ROCm fork) | Hygon | DCU Z100 | PyTorch, TF via HIP | Indirect (HIP) | HPC-proven
PaddlePaddle | Baidu | Kunlun + multi-vendor | Native framework | Framework-level | Mature (China)
OneFlow / SiliconFlow | SiliconFlow (former OneFlow) | Vendor-agnostic | Custom + Python API | n/a | Research-grade
Triton (OpenAI) | Open-source | NVIDIA, AMD, emerging China | Python DSL → LLVM | CUDA kernel bypass | Rapidly adopted 2024–26
SGLang + vLLM | Community (UCB et al.) | Multi-backend incl. Ascend/MLU | LLM serving | Abstracts backend | Dominant serving runtime

The pattern of 2024–2026: rather than replace CUDA, the Chinese ecosystem has been abstracting above it. PyTorch 2.x's compiler stack (torch.compile + Triton) lets models target multiple hardware backends via an intermediate representation. SGLang and vLLM dominate LLM inference serving and support Ascend / MLU natively. The "software moat" is shallower than in 2020–2022 — but very real for custom kernels (FlashAttention variants, fused ops) where NVIDIA's ecosystem has years of hand-optimised lead.
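The "abstract above the hardware" pattern can be sketched as a minimal backend registry, a toy analogue of how multi-backend runtimes dispatch kernels; every name and signature here is hypothetical, not any real framework's API:

```python
from typing import Callable, Dict

# Hypothetical registry: backend name -> kernel launcher
_BACKENDS: Dict[str, Callable[[str], str]] = {}

def register_backend(name: str):
    """Decorator that registers a kernel launcher for a hardware backend."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        _BACKENDS[name] = fn
        return fn
    return wrap

@register_backend("cuda")
def _cuda_launch(op: str) -> str:
    # In a real runtime this would call into cuBLAS/CUTLASS kernels
    return f"cuda:{op}"

@register_backend("ascend")
def _ascend_launch(op: str) -> str:
    # In a real runtime this would call into CANN kernels
    return f"ascend:{op}"

def launch(op: str, device: str) -> str:
    """Model code calls launch(); which silicon runs it is a runtime detail."""
    if device not in _BACKENDS:
        raise ValueError(f"no backend registered for {device!r}")
    return _BACKENDS[device](op)
```

The design point is that model code targets `launch()`, never a vendor API directly, so adding an Ascend or MLU backend means registering one entry rather than rewriting the model, which is roughly the bargain torch.compile, vLLM, and SGLang offer Chinese hardware vendors.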

Critically, DeepSeek V3 & R1 reportedly trained partially on Ascend 910B clusters and achieved frontier results — proof that the software gap, while real, is no longer fatal for well-funded research groups willing to absorb 20–40% efficiency penalties.

08 / POLICY

Government Support & Big Fund 国家扶持

Three decades of industrial policy, now on a $47B+ accelerator.
Big Fund I (2014)
¥138B
≈ $21B · first IC Investment Fund · launched SMIC/YMTC-era buildout
Big Fund II (2019)
¥204B
≈ $29B · focused on equipment, fab expansion, logic
Big Fund III (May 2024)
¥344B
≈ $47.5B · HBM, advanced packaging, lithography equipment

The China Integrated Circuit Industry Investment Fund Phase III (大基金三期) was registered 24 May 2024 with ¥344 billion ($47.5B) — larger than Phase I and II combined. Lead investors include the Ministry of Finance (¥60B), China Development Bank Capital, ICBC, Bank of China, Agricultural Bank of China, CCB, and Bank of Communications. Stated priorities:

  • High-bandwidth memory (HBM) — the single biggest allocation, flowing mostly through CXMT
  • Advanced packaging — CoWoS equivalents, 2.5D/3D stacking
  • Lithography equipment — domestic DUV (SMEE) and EUV R&D
  • EDA tools — Empyrean, Primarius as Synopsys/Cadence alternatives
  • AI chip R&D grants — via provincial sub-funds backing Biren, Moore Threads, MetaX

Secondary channels: provincial government guidance funds (Shanghai, Shenzhen, Guangzhou, Anhui, Hubei) each run ¥50–200B vehicles that co-invest alongside the national fund. Cambricon, Biren and YMTC have each received direct subsidies and priority procurement from MIIT contracts.

The Dec 2023 Intel/AMD CPU ban for government PCs (SOEs to transition to Chinese hardware by 2027) is the most explicit demand-side intervention: it creates a guaranteed domestic market floor, independent of competitiveness on price or performance.

Sources: Reuters 27-May-2024; State Council & MoF filings.

09 / FABS

Fab & Manufacturing 晶圆制造

Design is half. These are the foundries that turn layouts into silicon.
Foundry | HQ | Leading node | Cap (Kwpm) | Revenue ($B, 2024) | Key AI-chip customers | Sanction
SMIC (中芯国际) | Shanghai | 7nm (N+2 DUV) | ~850 | ~8.0 | Huawei HiSilicon, Hygon, Baidu Kunlun, Moore Threads (transition) | Entity 2020
Hua Hong (华虹) | Shanghai/Wuxi | 28nm (pilot 22nm) | ~410 | ~2.0 | Power IC, MCU, limited AI inference | Not listed
Nexchip (合肥晶合) | Hefei | 28nm | ~120 | ~1.2 | Display drivers, CIS | Unsanctioned
CXMT (IDM) | Hefei | 17nm DRAM | ~240 | ~4.5 est. | In-house DRAM | Monitored
YMTC (IDM) | Wuhan | Xtacking 3.0 | ~100 | ~3.5 est. | In-house NAND | Entity 2022
Wingtech / Nexperia | Shanghai/NL | Power/analog | ~60 (CN) | ~2.2 | Power management for AI servers | Monitored
TSMC Nanjing | Nanjing | 16/28nm (capped) | ~60 | n/d | Legacy AI inference | License-capped

SMIC's 7nm (N+2) reality check: achieved via multi-patterned DUV immersion (ASML NXT:1980Di-class tools), confirmed in the Huawei Kirin 9000S (Aug 2023) and Ascend 910B/C. Yields are widely reported at 20–30% vs TSMC's ~70% at the equivalent node, meaning 2–3× the effective cost per good die. Each wafer runs through multiple patterning passes, consuming more equipment-hours, so real 7nm AI-chip capacity is closer to 30–50K wpm than the 100K+ headline number. That is the binding supply constraint on Ascend 910C shipments in 2025–2026.
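The 2–3× cost figure follows directly from yield arithmetic. A hedged sketch: the wafer cost and die count below are placeholder values, and only the 25% vs 70% yields come from the reporting above.

```python
def cost_per_good_die(wafer_cost_usd: float, dies_per_wafer: int,
                      yield_frac: float) -> float:
    """Effective cost per sellable die: whole wafer cost spread over good dies."""
    return wafer_cost_usd / (dies_per_wafer * yield_frac)

# Identical hypothetical wafer economics; only the yield differs
smic_n2 = cost_per_good_die(10_000, 60, 0.25)  # ~$667 per good die
tsmc_n7 = cost_per_good_die(10_000, 60, 0.70)  # ~$238 per good die

# The ratio is independent of the placeholder wafer cost:
# 0.70 / 0.25 = 2.8x, inside the 2-3x range cited above.
penalty = smic_n2 / tsmc_n7
```

Because the placeholder wafer cost cancels out of the ratio, the 2.8× penalty holds for any wafer price, which is why yield, not list price, drives the gap.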

10 / SCORECARD

Success Probability Scoreboard 成功概率

AZR probability-weighted assessment — subjective but methodologically consistent across companies.
Company | P(global top-5) | P(NVIDIA parity <5y) | Gov support | Dominant risk
Huawei HiSilicon | 55% | 35% | 5 / 5 | HBM supply + SMIC yield ceiling
Cambricon | 25% | 15% | 5 / 5 | Customer concentration, R&D burn
Biren | 20% | 12% | 4 / 5 | Fab access, software stack immaturity
Moore Threads | 10% | 6% | 3 / 5 | Structurally trailing node (12nm)
Hygon | 15% | 8% | 4 / 5 | AMD IP ageing out of relevance
Horizon Robotics (edge) | 45% | 5% | 3 / 5 | Not a training-chip competitor
YMTC (NAND) | 60% | 40% | 5 / 5 | Equipment access, yield
CXMT (DRAM/HBM) | 50% | 30% | 5 / 5 | HBM stacking IP, EUV cutoff
Baidu Kunlun | 12% | 10% | 3 / 5 | Captive customer = ceiling
Alibaba T-Head | 15% | 10% | 3 / 5 | Internal use only, strategic focus shifts
Iluvatar CoreX | 8% | 5% | 3 / 5 | Scale, ecosystem
Enflame (Suiyuan) | 10% | 6% | 3 / 5 | Tencent captive, differentiation
MetaX | 10% | 5% | 3 / 5 | Late entrant, funding

These are author estimates synthesised from revenue trajectory, fab access, software maturity, customer diversity, and policy support. "P(global top-5)" means: share of forecasts where the company appears in global AI-chip revenue top-5 by 2030. "P(NVIDIA parity)" means: share of forecasts where effective training throughput per dollar matches NVIDIA's contemporary flagship within 5 years. Inherent model uncertainty: ±15 pp.

11 / SCENARIOS

Bull / Bear Scenarios 牛熊对比

Explicit future pathways for the critical companies.
Company | Bull case (2028) | Bear case (2028)
Huawei | CloudMatrix 384 becomes de-facto China training standard; Ascend 920 on SMIC N+3 reaches 80%+ H100-next parity; exports to Belt-and-Road; $25B+ AI revenue | HBM stockpile exhausts 2027; SMIC yield stays stuck <30%; CANN ecosystem fragmented; market reverts to stockpiled H20s; $8B AI revenue
Cambricon | MLU690 ships on SMIC N+2 at ~70% H100; sustained profitability; ¥5B revenue run-rate; second-curve M&A into edge | Government mandates drop; Huawei absorbs demand; returns to losses; stock retraces below IPO price
Biren | BR200 on SMIC 7nm ships 2026; HK-listed stock re-rates on profitability; China Telecom/Mobile become anchor customers | BR200 delays; co-founders continue departing; IPO capital depletes; acquired by Huawei or Tencent at a distressed price
Moore Threads | MUSA matures into credible CUDA alternative for serving; dominant domestic gaming GPU; KUAE finds niche in inference clusters | Consumer GPU margins collapse under Lisuan/Innosilicon; enterprise AI ambitions fail; post-IPO drift
Hygon | x86 government-PC mandate drives 40% revenue CAGR; DCU finds HPC supercomputing niche; JV profit hits ¥10B | Intel/AMD regain access via negotiation; Loongson/LoongArch take domestic CPU share; x86 Chinese franchise stagnates
Horizon Robotics | Journey 7 wins a tier-1 European OEM; becomes #1 global ADAS chip by volume (past Mobileye); ¥15B revenue | NEV demand slows; Xiaomi/Nio/Xpeng insource; VW JV underperforms; growth decelerates to single digits
YMTC | 300+ layer NAND ships 2026; 10%+ global share; Apple/HP enterprise wins; becomes 4th global NAND maker | Yield issues persist; US extends NAND controls directly; stuck at ~5% share; operational losses return
CXMT | HBM2e volume production late 2026; feeds Ascend/MLU clusters; 15% global DRAM by 2028; first Chinese HBM exporter | Added to full Entity List; EUV denied; HBM project stalls at pilot; stays commodity-DRAM-only

12 / CUSTOMERS

AI Customer Adoption 客户使用

Which hyperscalers use what silicon — the demand-side view.
Hyperscaler | NVIDIA stockpile (est.) | Primary Chinese AI chip | Secondary | Foundation model
Baidu (AI Cloud, Apollo) | Pre-Oct 2022 A100s + H20s | Kunlun P800 (own) | Ascend 910B | ERNIE 4.5 / 5.0
Alibaba (Aliyun) | Large A800/H800 + H20 | T-Head Hanguang 800 (inference) | Ascend 910C, H20 | Qwen 3 / Qwen-VL
Tencent | H20 + legacy H800 | Enflame CloudBlazer | Ascend 910B, internal XPU | Hunyuan
ByteDance | H100/H200 (pre-ban + third-party) + H20 | In-house ASIC (2024–25) | Ascend 910B/C | Doubao, Seed-LLM
Huawei Cloud | Limited NVIDIA — mostly domestic | Ascend 910B/C (own) | Cambricon MLU590 | Pangu 5.0
DeepSeek / High-Flyer | Documented H800 & H100 pre-ban clusters | Ascend 910B (reported) | Mixed NVIDIA + Ascend | DeepSeek V3, R1
Moonshot / 01.ai | H800 stockpile | Mixed NVIDIA + Ascend | n/d | Kimi, Yi
China Mobile Cloud | H20 + limited H800 | Ascend 910C | MLU590, Biren BR100 pilots | Jiutian
China Telecom Cloud | H20 | Ascend 910C | Biren BR100 (intelligent compute centres) | TeleChat
iFlytek | Legacy | Ascend 910B/C (heavy) | Cambricon MLU590 | Spark 4.0

The headline pattern: Huawei Ascend is the default, NVIDIA H20 is the premium. Pre-2023 H100/H800 stockpiles are the rarest and most valuable — reserved for frontier training runs at ByteDance, Alibaba, DeepSeek. Everything else (inference, fine-tuning, enterprise serving) is migrating to domestic silicon at pace, driven partly by price-performance and partly by state-owned enterprise procurement rules.

13 / OUTLOOK

Future Outlook 2026–2030 未来展望

What to watch over the next 48 months.
大势
TREND

Directional bets

  1. Import substitution wins before parity. Chinese AI cloud will be 70–80% domestic silicon by 2028 regardless of technical gap — policy + captive market dictate this.
  2. HBM becomes the flashpoint. CXMT HBM2e pilot line in 2026 is the single most important data point for the industry. If it ships at scale, Chinese AI becomes self-sufficient for training. If not, dependence persists.
  3. SMEE and domestic DUV matter more than EUV. EUV is a 2028+ prototype; DUV multi-patterning is a 2025-deliverable. SMEE reaching 28nm-capable DUV in volume is the near-term catalyst.
  4. Consolidation is coming. 15+ Chinese AI chip startups cannot all survive. Expect 2–3 national champions by 2028 — most likely Huawei, Cambricon, and one memory player (CXMT).
  5. Export control asymmetry widens. Each new BIS rule creates a new Chinese domestic subsidy + captive-market wave. The cycle is now structurally self-reinforcing on both sides.
风险
RISKS

What could break the thesis

  • US-China deal. Highly unlikely but would dramatically change trajectory — H100/H200/Blackwell re-enters China, Chinese alternatives under-priced and over-capacity.
  • Taiwan contingency. Kinetic event around TSMC would reshape global semiconductors entirely. Chinese chips become the only chips in China; paradoxically the global industry collapses.
  • DeepSeek-style algorithmic efficiency. If Chinese labs keep finding 5–10× efficiency wins, the hardware gap matters less. Already happening; continues.
  • Chiplet/packaging breakthrough. If China's advanced packaging program delivers HBM-equivalent bandwidth via 2.5D interposer tricks before HBM volume, the gap closes faster than expected.
  • Entity List expansion to SMIC/Hua Hong foundry services. Would freeze the entire fabless ecosystem overnight.
"Bet direction: the probability of Chinese AI silicon reaching acceptable domestic replacement by 2028 is high (>70%). The probability of global competitive parity on the merits within the same window is low (<20%). The gap between those two numbers is the entire story." — AZR synthesis
14 / METHODS

Methodology & Sources 来源

Where every number in this report comes from.
Primary sources
  • Cambricon 2024 Annual Report (SSE 688256) — PDF filing
  • Horizon Robotics prospectus (SEHK 9660), Oct 2024
  • Hygon 2023 Annual Report (SSE 688041)
  • Biren Technology prospectus (SEHK 6082), Dec 2025
  • Moore Threads IPO filing (SSE 688795), Nov 2025
  • BIS Entity List · Federal Register & BIS.gov
  • Oct 7 2022, Oct 17 2023, Dec 2 2024 BIS rules (Advanced Computing & Semiconductor Manufacturing)
Secondary / press sources
  • Reuters — Big Fund III launch, Biren fundraising, EUV prototype reports
  • Bloomberg — Cambricon price action, Biren funding pledges, China AI billionaires feature
  • South China Morning Post — company profiles on Cambricon, Biren, Moore Threads, Huawei
  • Financial Times / Nikkei Asia — supply chain reporting
  • SemiAnalysis — Huawei Kirin 9000S teardown, Ascend 910B die-shots
  • Tom's Hardware, AnandTech, HPCwire — vendor spec analysis
  • China Money Network — Cambricon, Horizon Robotics financials
  • TrendForce — SMIC/YMTC/CXMT capacity data
  • IDC, CCID — China AI chip market sizing
Estimation methodology

Where a vendor published a figure (peak TFLOPS, TDP, memory bandwidth), it is reported as-is. "Effective vs H100" percentages are AZR synthesis based on benchmark reports from users (MLPerf, Stanford AI Index, informal DeepSeek / Alibaba training comms), weighted toward memory-bandwidth-limited workloads (LLM training) rather than peak-TFLOPS-limited (dense convolution).

Probability scores are subjective point estimates from a single analyst (this author) with no formal calibration. Treat as directional, not precise. Methodology: for each company, score on (1) technical trajectory, (2) fab/supply access, (3) software ecosystem, (4) customer diversity, (5) financial runway, (6) political backing — then combine via rough weights. Error bars ±15 percentage points.
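The "combine via rough weights" step can be sketched as follows; both the weights and the example scores below are illustrative placeholders, not the actual values behind the scoreboard.

```python
def composite_score(scores: dict, weights: dict) -> float:
    """Weighted sum of per-dimension scores (each in 0-1); weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[k] * weights[k] for k in weights)

# Illustrative weights over the six dimensions named in the methodology
WEIGHTS = {
    "tech_trajectory": 0.25, "fab_access": 0.20, "software": 0.20,
    "customers": 0.15, "runway": 0.10, "political": 0.10,
}

# Illustrative score card for a hypothetical company
example = {
    "tech_trajectory": 0.8, "fab_access": 0.5, "software": 0.6,
    "customers": 0.7, "runway": 0.6, "political": 1.0,
}

score = composite_score(example, WEIGHTS)  # -> 0.685
```

The point of making the weights explicit is that the ±15 pp error bars live mostly in the weights, not the per-dimension scores: shifting 5 pp of weight between fab access and software moves the composite materially.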

Revenue figures where not disclosed are marked "est." or "n/d". HBM "pilot" status reflects public reports as of 2026-Q1; pilot does not imply volume production.

All financial conversions use USD/CNY ≈ 7.25 unless otherwise noted.
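As a quick arithmetic check of the headline conversion (Phase III only; earlier phases were converted at contemporaneous rates, so this helper is not retroactive):

```python
def cny_bn_to_usd_bn(cny_bn: float, rate: float = 7.25) -> float:
    """Convert billions of CNY to billions of USD at the report's house rate."""
    return round(cny_bn / rate, 1)

# Big Fund III: ¥344B at 7.25 -> ~$47.4B (the report quotes $47.5B)
phase_iii = cny_bn_to_usd_bn(344)
```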

Versioning

v1.0 · 2026-05-05 — Initial release. Compiled by AZR (digital scientist) for Alex.

Refresh cadence: quarterly, or on major policy events.