Challenging Giants: Can DeepSeek’s Huawei-Driven V4 Take on Nvidia’s $5 Trillion Empire?

DeepSeek Launches New Language Models to Challenge Nvidia

Just as Nvidia reached a $5 trillion market capitalisation, China’s DeepSeek unveiled a new family of large language models that could once again test the chip giant’s supremacy, this time by moving away from Nvidia hardware altogether.

DeepSeek’s Ambitious V4 Family Release

DeepSeek’s V4 family, introduced on the same day Nvidia hit a $5 trillion market cap, represents the company’s most audacious launch since its R1 model shook global markets in early 2025, erasing almost $600 billion from Nvidia’s valuation.

On a recent podcast, Nvidia CEO Jensen Huang warned that if DeepSeek optimised its new models for Huawei Technologies chips, it would be “a dreadful outcome” for the US.

Huang stated that if “future AI models are optimised differently than the American technology framework,” and as “AI spreads globally” with China’s standards and technology, China “will surpass” the US.

Must read: Competitive Edge: China Closes AI Gap with US, Stanford Reports

Cheaper, Faster, and Not Powered by Nvidia

At the heart of this release is V4-Pro, a model containing 1.6 trillion parameters, crafted for coding and intricate agentic tasks, along with the smaller V4-Flash variant designed for speed and economic efficiency.

However, the significant advancements are happening beneath the surface.

For the first time, DeepSeek has tailored its flagship model for domestic chips from Huawei Technologies instead of Nvidia GPUs, marking a pivotal step in China’s ambition to lessen its dependency on US technology.

Previous DeepSeek models, such as V3 and R1, relied on Nvidia hardware for training. In stark contrast, V4 aims to test whether China’s AI ecosystem can thrive with its own chip technology.

Frontier Performance at Reduced Costs

DeepSeek is again focusing heavily on disrupting costs. V4-Pro activates only 49 billion parameters per token despite having a total of 1.6 trillion, enabling it to deliver nearly frontier-level performance at much lower computing costs. The company asserts this allows outputs similar to leading models while being considerably less expensive.
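The figures above describe a sparse mixture-of-experts (MoE) design: the model holds a very large total parameter count, but each token is routed to only a few expert blocks, so the compute per token tracks the activated parameters rather than the total. DeepSeek has not published V4’s internals, so the following is only a toy sketch of how top-k expert routing works in general; all sizes and names are illustrative, not V4’s actual architecture.

```python
import numpy as np

# Toy sparse MoE routing: a token activates only TOP_K of NUM_EXPERTS
# expert blocks, so the active parameter count per token is a small
# fraction of the total. Sizes here are illustrative only.
rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # total expert blocks
TOP_K = 2         # experts activated per token
DIM = 4           # toy hidden dimension

# Each expert is a simple linear map (DIM x DIM weight matrix).
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))  # routing weights

def moe_forward(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    scores = x @ router                # one routing score per expert
    top = np.argsort(scores)[-TOP_K:]  # indices of the top-k experts
    gates = np.exp(scores[top])
    gates /= gates.sum()               # softmax over the chosen experts
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

x = rng.standard_normal(DIM)
y = moe_forward(x)

total_params = NUM_EXPERTS * DIM * DIM
active_params = TOP_K * DIM * DIM
print(f"active fraction per token: {active_params / total_params:.0%}")
# With 2 of 8 experts active, 25% of expert parameters are used per token.
```

The same principle scales up: activating 49 billion of 1.6 trillion parameters means only about 3% of the expert weights participate in each token’s forward pass, which is what makes near-frontier quality at far lower compute cost plausible.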

Its API pricing emphasises this strategy. V4-Pro is priced at $1.74 for each million input tokens and $3.48 for every million output tokens, approximately 50 times more affordable than models like Claude Opus. The V4-Flash model is even more competitive, with rates beginning at $0.14 per million input tokens.
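Per-million-token rates only become concrete once applied to a request. The back-of-envelope calculation below uses the V4-Pro and V4-Flash prices quoted above; the request sizes are hypothetical, chosen only to show how input- and output-token charges combine.

```python
# Per-token rates from the quoted API pricing (USD per token).
V4_PRO_INPUT = 1.74 / 1_000_000
V4_PRO_OUTPUT = 3.48 / 1_000_000
V4_FLASH_INPUT = 0.14 / 1_000_000

# Hypothetical request: a large-context prompt with a short completion.
input_tokens = 200_000
output_tokens = 5_000

pro_cost = input_tokens * V4_PRO_INPUT + output_tokens * V4_PRO_OUTPUT
flash_input_cost = input_tokens * V4_FLASH_INPUT

print(f"V4-Pro request cost:        ${pro_cost:.4f}")
print(f"V4-Flash input cost alone:  ${flash_input_cost:.4f}")
```

For this hypothetical request, V4-Pro comes to roughly $0.37, with the 200,000 input tokens dominating the bill; at these rates, even long-context calls stay in the fraction-of-a-cent-per-thousand-tokens range that underpins DeepSeek’s cost-disruption pitch.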

This mirrors the approach taken by DeepSeek’s earlier R1 model, which the company claimed was trained in two months for merely $6 million, far below typical industry costs. By comparison, Meta Platforms reportedly spent around $60 million on Llama, while OpenAI has poured billions into its models.

Must read: DeepSeek Returns After Year-Long Interval, Researchers Warn About AI’s Impact on Employment

Benchmarking Against the Best

Regarding performance, DeepSeek contends that V4-Pro holds its own against premier closed-source models such as GPT-5.4 and Gemini 3.1 while surpassing numerous open-source options across coding, mathematics, and STEM benchmarks.

It also features a context window of 1 million tokens, an increase from the previous flagship’s 128,000 tokens, allowing it to handle much larger datasets in a single operation.

In long-context situations, V4-Pro reportedly consumes only 27% of the computing power required by its predecessor, while V4-Flash reduces that number further to 10%.

Huawei Partnership Enhances Geopolitical Advantages

Shortly after the launch, Huawei revealed full backing for DeepSeek’s V4 models across its Ascend chips, solidifying the synergy between China’s AI software and its domestic hardware ecosystem.

This initiative arises amid reported governmental pressures in China to bolster the usage of local chips, such as sourcing quotas and requirements to integrate foreign hardware with domestic alternatives.

For Nvidia, the issue lies not with one particular model but with an evolving trend.

The company’s supremacy has been based not only on GPUs but on a seamlessly integrated software ecosystem. Transitioning to Huawei’s Ascend chips requires a complete overhaul of code, redevelopment of tools, and validation of performance at scale—challenges that have, thus far, maintained Nvidia’s lead.

However, if companies like DeepSeek can establish similar performance levels at significantly lower costs using alternative hardware, that advantage might start to diminish.

DeepSeek’s R1 launch from last year illustrated how rapidly sentiments can change. The efficiency and output of the model sparked a surge in open-source releases throughout China and called into question the expenses involved in developing advanced AI.

V4 builds on that momentum but with a more tactical approach. By combining low-cost models with domestic chips, DeepSeek is not merely competing on performance; it is aligning with a larger effort to localise the AI stack.
