Special Report - Chapter Three: China’s AI Power Profile: Advantages, Dependencies, and Limits
Despite prioritizing self-reliance, China continues to rely on Western AI technologies across the development, deployment, and application phases.
3.1. Core Challenges and Technical Bottlenecks: The Triple Barriers of Computing Power, Algorithms, and Data
1) Computing Power: Advanced Chip Dependency and Choke Point Sanctions
Training and inference for AI systems rely heavily on graphics processing units (GPUs) and specialized AI accelerator chips. Chinese AI companies and research institutions, particularly LLM developers such as Baidu (Wenxin Yiyan), Alibaba (Tongyi Qianwen), and ByteDance (Doubao), depend on these accelerators at scale. Reports indicate that training a frontier-level AI model typically requires tens of thousands of NVIDIA A100 or H100 chips. The U.S. export ban on high-performance GPUs to China (covering the A100, H100, and their China-specific variants, the A800/H800) has been fully implemented, forcing reliance on stockpiled chips or lower-performing domestic alternatives such as Cambricon, Tiansuan, and Hygon. These domestic chips lag 1–3 generations behind NVIDIA in manufacturing process, power consumption, software compatibility, and ecosystem support.
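To see why the counts run into the tens of thousands, consider a back-of-envelope estimate using the widely cited approximation that training compute is roughly 6 × parameters × tokens. The sketch below is illustrative only; the model size, token count, utilization rate, and training window are assumptions, not figures from this report.

```python
# Rough estimate of GPU requirements for frontier-scale training, using
# the common approximation: training FLOPs ~= 6 * params * tokens.
# All concrete numbers below are illustrative assumptions.

params = 1e12        # assumed model size: 1 trillion parameters
tokens = 10e12       # assumed training corpus: 10 trillion tokens
train_flops = 6 * params * tokens          # ~6e25 FLOPs

peak_flops = 312e12  # NVIDIA A100 dense BF16 peak: ~312 TFLOPS
mfu = 0.4            # assumed model FLOPs utilization (often 30-50%)
days = 90            # assumed training window

per_gpu = peak_flops * mfu * days * 24 * 3600   # FLOPs one GPU delivers
gpus_needed = train_flops / per_gpu
print(f"~{gpus_needed:,.0f} A100-class GPUs for a {days}-day run")
# -> on the order of tens of thousands of GPUs under these assumptions
```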
China’s advanced chip manufacturing is similarly constrained. SMIC, its only high-end process manufacturer, is hindered by export restrictions on essential equipment, making mass production of AI chips at 7nm or smaller nodes difficult. As a result, from design to deployment, China’s AI computing base remains externally dependent, a core variable limiting its competitiveness.
2) Deep Algorithm Dependency: Gaps in Basic Research and Talent Outflow
While China leads the world in AI paper volume, quality metrics such as top-conference acceptance rates, citation counts, and originality continue to trail the U.S. Comparative studies show Chinese large models like Wudao, Qianwen, and Pangu underperform OpenAI, Anthropic, Meta, and Google in multilingual capacity, complex reasoning, and multimodal capabilities.
Studies indicate that Chinese AI models lag U.S. frontier models by 15–30% on benchmarks such as MMLU, HellaSwag, and BIG-bench. More critically, many Chinese AI models emphasize retraining over inference, rely on mixed-source datasets, and lack transparent tuning processes, reflecting an “engineering integration” mindset that limits original algorithmic breakthroughs.
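For readers unfamiliar with how benchmarks such as MMLU are scored, the common method is to ask which answer choice the model assigns the highest log-likelihood. A minimal sketch using the Hugging Face transformers library follows; the model name and the single hand-written question are placeholders, not items from any official test set.

```python
# Minimal MMLU-style multiple-choice scoring: pick the answer choice to
# which the model assigns the highest log-likelihood. The model name and
# the sample question are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; real evaluations use frontier models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

question = "Question: What does GDP measure?\nAnswer:"
choices = [" total economic output", " average rainfall",
           " population density", " literacy rates"]

def choice_logprob(prompt: str, choice: str) -> float:
    """Sum of log-probabilities the model assigns to the choice tokens."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # logprobs[j] is the distribution over the token at position j + 1.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    start = prompt_ids.shape[1] - 1
    targets = full_ids[0, prompt_ids.shape[1]:]
    return logprobs[start:, :].gather(1, targets.unsqueeze(1)).sum().item()

scores = [choice_logprob(question, c) for c in choices]
print("Model picks:", choices[scores.index(max(scores))])
```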
Political regulation and restrictive environments further reduce China’s appeal to top global AI researchers, many of whom relocate to more open research hubs in Singapore, the U.S., or Israel.
3) Data Constraints: Short Supply Meets Regulatory Tightrope
LLMs require large, diverse, and clean training corpora. China faces a twofold challenge:
Restricted access to open corpora: Longstanding internet controls limit access to global datasets (Reddit, Twitter, Wikipedia, YouTube), resulting in disadvantages for multilingual, cross-cultural, and structured data training.
Regulatory and political constraints: China’s Data Security Law and Personal Information Protection Law, combined with strict content censorship, impose compliance and political risks on data collection and sharing.
Because AI-generated content must avoid sensitive terms and ideological red lines, models are effectively “pre-censored” during training—limiting expressive range and authentic language capabilities.
In summary, computing power is constrained by hardware monopolies, algorithm progress by institutional limits, and data supply by legal and ideological contradictions. These bottlenecks define both the technical ceiling of Chinese AI and the credibility of its global output.
3.2. Beyond Technical Gaps: China’s Structural Disadvantages in AI Competition
Despite ambitious targets and “scenario-first” rhetoric, China still trails the U.S. in critical segments of the AI value chain. These are systemic, structural gaps—not short-term technical lags. The U.S. leads not only in absolute technical performance but also in basic research, hardware/software ecosystems, and global resource integration.
1) Foundation Model Gap: “Usability” vs. “Controllability”
Frontier models like GPT-4 (OpenAI), Claude 3 (Anthropic), Gemini (Google DeepMind), and LLaMA (Meta) excel in reasoning, language understanding, and multimodal performance, with stable iteration and open-source community feedback.
By contrast, Chinese models such as Baidu’s Wenxin Yiyan, Alibaba’s Tongyi Qianwen, and Zhipu’s GLM approach GPT-3.5 in capability but lag behind GPT-4. Many are costly to run, suffer from low runtime efficiency, and lack robust alignment or safety evaluation. Rapid “stack parameters, tune, deploy” cycles meet market demand but leave generalization, value alignment, and long-tail safety underdeveloped.
2) Chip Ecosystem and Software Stack Disadvantages
The U.S. dominates the chip chain—from EDA tools (Synopsys, Cadence) to design (NVIDIA, AMD), manufacturing (TSMC), and packaging/testing—and controls essential AI software stacks like PyTorch, TensorFlow, and CUDA.
While Huawei, Cambricon, and Alibaba’s Pingtouge pursue independent chips and frameworks (e.g., MindSpore), fragmented ecosystems and limited developer adoption hinder global-scale replacement.
3) Institutional Gaps in Open-Source Culture
In the U.S., open-source projects (LLaMA, Stable Diffusion) accelerate innovation through collaboration. China’s “open-source” models often restrict code, weights, or licenses, with limited community participation. This corporate-driven approach constrains transparency, safety, and interpretability improvements.
4) Global Resource Integration and Capital
About 70% of top AI talent works in U.S. institutions, supported by the largest R&D investments and the broadest cooperation networks. Language, systemic, and ideological barriers limit China’s influence over global standards and governance, even when its firms “go global.”
In summary, China has strong replication ability at the application layer and certain advantages in engineering deployment and scenario integration, but it remains in a “catching-up” state in foundational technology, basic research, ecosystem openness, and international trust. The gap is not only quantitative but structural and path-dependent, and it is difficult to bridge through short-term policies or administrative orders.
3.3. U.S. Technology Containment: The Multi-Layered Impact of Choke Point Policies
Since 2019, the United States has expanded export restrictions on key Chinese technology firms into a broad “technology containment” strategy. This now covers AI chips, high-performance computing platforms, algorithm tools, software ecosystems, and cloud computing services. The measures aim not only to disrupt supply chains but also to slow China’s AI development, limit international cooperation, and erode global trust in Chinese technology.
1) Chip Supply Cut: A Direct Blow to Computing Power
High-performance chips, above all NVIDIA’s A100, H100, and B200, are essential for training large language models. Under the Commerce Department’s export control rules, entity listings, and licensing regimes, the U.S. prohibits their export to China.
By late 2023, A100/H100 chips accounted for over 85% of global AI training compute. China is limited to downgraded variants (A800/H800) with 30–40% less interconnect bandwidth and computing power. This has led to longer training cycles, higher tuning failure rates, and greater energy consumption. Many companies mix older GPUs or rent fragmented private data-center capacity, increasing costs and lowering quality.
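A toy model illustrates why a bandwidth cap slows training even when raw chip compute is untouched. The A800’s NVLink interconnect is capped at 400 GB/s versus 600 GB/s on the A100; the compute/communication split below is an illustrative assumption for a communication-heavy data-parallel workload, not a measured profile.

```python
# Toy model of a data-parallel training step: step time = compute time +
# gradient all-reduce time, with communication inversely proportional to
# interconnect bandwidth. The split below is an illustrative assumption.
def step_time(compute_s: float, comm_gb: float, bw_gbps: float) -> float:
    return compute_s + comm_gb / bw_gbps

compute_s = 0.6   # assumed compute portion of one step, seconds
comm_gb = 240.0   # assumed gradient traffic per step, GB

a100 = step_time(compute_s, comm_gb, 600.0)  # A100: 600 GB/s NVLink
a800 = step_time(compute_s, comm_gb, 400.0)  # A800: capped at 400 GB/s

print(f"A100 step: {a100:.2f}s, A800 step: {a800:.2f}s "
      f"({(a800 / a100 - 1) * 100:.0f}% slower)")
# -> a 33% bandwidth cut yields a ~20% slowdown in this
#    communication-heavy case, compounded over millions of steps
```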
2) Cloud Service Restrictions: Long-Term Risks for Deployment
The U.S. has also restricted cloud services critical to AI, barring Amazon AWS, Google Cloud, Microsoft Azure, and others from offering GPU leasing, TPU acceleration, or AI training services to sanctioned Chinese entities.
These measures prevent Chinese firms from bypassing hardware limits by renting compute power overseas. Some startups training models via AWS in Singapore, Hong Kong, or the UAE lost access in 2024. This reduces not only training capacity but also the ability to deploy models internationally at scale.
3) Software Ecosystem Severance: Targeting a Vulnerable Layer
Core deep learning frameworks (PyTorch, TensorFlow) and the dominant GPU computing platform (CUDA) are U.S.-controlled. The frameworks are nominally open source, but updates, certifications, and optimizations depend on official corporate support, and CUDA itself is proprietary. Export controls now restrict U.S. companies from providing commercial support or custom modules to Chinese firms.
Some companies have been unable to update CUDA drivers, preventing FP16 mixed-precision acceleration and increasing training costs by over 40%. Domestic frameworks such as PaddlePaddle and MindSpore remain far smaller and less stable than their U.S. counterparts.
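For context, the FP16 mixed-precision path referenced above is the pattern PyTorch exposes through its automatic mixed precision (AMP) API. A minimal sketch of the standard loop follows; the model and data are dummies, and a CUDA GPU is assumed.

```python
# Standard PyTorch FP16 mixed-precision training loop (AMP): forward passes
# run in half precision on Tensor Cores, while a GradScaler guards against
# FP16 gradient underflow. Model and data here are dummies; requires CUDA.
import torch

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    x = torch.randn(32, 1024, device="cuda")
    target = torch.randn(32, 1024, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(dtype=torch.float16):
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()   # scale loss to keep FP16 grads finite
    scaler.step(optimizer)          # unscale grads, then optimizer step
    scaler.update()                 # adjust the scale factor dynamically

print("final loss:", loss.item())
```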
4) Data Access Bottlenecks: From Training to Corpus Legitimacy
Foundation models require vast, high-quality multilingual datasets. Access to major Western sources—Common Crawl, Reddit, Wikipedia—has tightened, while OpenAI and others no longer release datasets. Chinese corpora remain homogeneous, dominated by low-quality social media, web novels, and outdated news. Political censorship further removes “sensitive” material, limiting semantic diversity and topic coverage.
Insufficient English-language content hampers international scalability. Some firms use translation to expand datasets, but this often introduces semantic drift and context mismatch.
5) Trust and Market Access: The Political Ceiling Abroad
Many Western governments distrust Chinese AI firms over privacy, data governance, and algorithm transparency. In 2024, Chinese vendors deploying servers in Latin America and the Middle East faced extensive compliance demands—source code audits, bias testing, local data hosting—that often exceeded requirements for U.S. companies. This underscores the large gap in “soft power” between the two countries’ AI sectors.
In short, U.S. restrictions operate across the full AI value chain, targeting infrastructure, software, cooperation channels, and global market access. The goal is not merely to cut chip supplies but to systematically constrain China’s ability to expand its AI reach internationally.
3.4. Autonomous Control and “Cornering”: China’s Strategic Response
In response, China has launched a multi-pronged strategy—covering hardware, algorithms, open-source collaboration, capital allocation, and institutional reform—aimed at building a self-reliant AI ecosystem.
1) Chip Development: “Silicon Self-Strengthening” Under Pressure
China is accelerating domestic AI chip R&D, led by firms such as Huawei (Ascend 310/910), Cambricon (Siyuan 270), Bitmain, and Biren Technology. In 2024, Huawei’s Ascend 910B matched or approached A100 performance on some NLP tasks, using its MindSpore framework. Its manufacturing pipeline remains constrained, however: U.S. rules bar TSMC from fabricating advanced chips for Huawei, and SMIC’s leading-edge nodes are limited by ASML equipment export bans. SMIC’s 14nm chips struggle with power efficiency and thermal management for trillion-parameter models, requiring up to five times longer training and four times the energy of Western hardware.
2) Computing Infrastructure: City-Level “Compute Hubs”
Projects such as “Eastern Data, Western Computing” are building large-scale data centers in western provinces for energy-intensive model training. By end-2023, China’s AI-related GPU capacity exceeded 5,000 exaFLOPS, but over 80% came from lower-end GPUs, with lower efficiency and stability than top-tier Western data centers. Fragmented capacity and lack of standardized operations further limit performance.
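A quick illustrative calculation shows how far nominal capacity can sit from usable training compute. Every discount factor below is an assumption chosen only to make the arithmetic concrete; the report supplies the headline figure and the hardware mix, nothing more.

```python
# Illustrative only: how aggregate capacity shrinks once hardware mix and
# utilization are considered. Every discount factor is an assumption.
total_eflops = 5000.0   # reported aggregate capacity (exaFLOPS)
low_end_share = 0.8     # report: >80% from lower-end GPUs
low_end_util = 0.15     # assumed effective utilization, low-end GPUs
high_end_util = 0.40    # assumed effective utilization, high-end GPUs

effective = (total_eflops * low_end_share * low_end_util
             + total_eflops * (1 - low_end_share) * high_end_util)
print(f"Effective training capacity: ~{effective:,.0f} exaFLOPS "
      f"of the {total_eflops:,.0f} nominal")
# -> ~1,000 of 5,000 exaFLOPS under these assumed discount factors
```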
3) Domestic Models and Algorithms: From Imitation to Optimization
Chinese firms have released open-source large language models—InternLM, ChatGLM, Yi series—some surpassing GPT-3.5 in Chinese tasks. Yet they often struggle with long-context reasoning, multilingual support, and consistent accuracy. Causes include limited high-quality training data, insufficient algorithm optimization experience, and smaller compute budgets.
4) Domestic Frameworks and Toolchains
Baidu’s PaddlePaddle, Huawei’s MindSpore, and OneFlow aim to replace foreign frameworks. While commercially viable in some niches (e.g., OCR, speech), they remain behind PyTorch and TensorFlow in ecosystem breadth, compatibility, and developer adoption. Activity in China’s AI open-source communities remains a fraction of that on global hubs such as Hugging Face.
5) Policy-Driven Coordination
Government initiatives such as the National Computing Power Integrated Center, the Large Model Evaluation Sandbox, and the Digital China New Infrastructure Fund use subsidies, industrial coordination, and administrative guidance to accelerate development. While this improves resource allocation, it risks bureaucratic inefficiency and may discourage open innovation.
Overall, China’s approach is comprehensive but faces persistent bottlenecks: chip dependence, weak software foundations, fragmented ecosystems, low global trust, and lagging algorithmic innovation. Full independence remains a long-term goal, not an imminent outcome.
3.5. Strategic Dependence: Why China Still Relies on Western AI Technology
Despite prioritizing self-reliance, China continues to rely on Western AI technologies across the development, deployment, and application phases—due to practical needs in performance, ecosystem maturity, and global integration.
1) Development Stage: Closing Performance and Ecosystem Gaps
Domestic chips and frameworks still trail Western equivalents in process technology, EDA tools, and driver software. Advanced GPUs (A100, H100), mature frameworks (PyTorch, CUDA), and robust open-source ecosystems remain critical for timely, cost-effective model training. Western tools also reduce R&D risk and accelerate iteration in an environment where China lags in original algorithmic innovation.
2) Deployment Stage: Global Compute and Cloud Access
Chinese data centers cannot yet match Western platforms in efficiency or global reach. Overseas cloud services reduce latency, aid compliance with local data laws, and improve market access. However, U.S. restrictions have sharply limited access to AWS, Google Cloud, and Azure for Chinese firms.
3) Application Stage: Market Entry and Scenario Adaptation
To compete globally, Chinese AI products must meet diverse regulatory, privacy, and transparency requirements—areas where Western firms have more experience and established trust. Western AI also leads in multimodal capabilities, multilingual processing, and cross-cultural adaptation, which are essential for sectors like autonomous driving, medical imaging, and financial services.
In short, Western technology continues to provide performance, compliance, and market advantages that Chinese systems cannot yet fully replace—making complete decoupling unlikely in the near term.