DeepSeek has launched preview versions of its V4-Pro and V4-Flash models, marking the first major update since its R1 reasoning model disrupted global tech markets over a year ago.
The Hangzhou-based startup claims its new Pro variant leads all open-source rivals in math and coding performance, trailing only Google’s closed Gemini 3.1-Pro in world knowledge benchmarks. According to DeepSeek’s social media announcement, the V4-Pro’s capabilities fall just short of OpenAI’s GPT‑5.4 and Gemini 3.1-Pro, suggesting it lags frontier models by three to six months.
The Flash version mirrors the Pro’s reasoning strengths while prioritizing speed and cost efficiency, a combination the firm describes as “highly cost-effective” for developers. Like its predecessors, both models are released under an open-source license, permitting free use and modification of the code.
This release arrives amid intensifying competition within China’s AI sector, where Alibaba and ByteDance have also unveiled new models this year. CNBC reported that shares of several other Chinese AI firms declined in Hong Kong trading on the day of DeepSeek’s announcement, reflecting shifting investor focus.
Analysts note the V4 launch lacks the market shock of R1’s debut, not because the technology is weaker, but because traders have already adjusted to the reality of competitive, low-cost Chinese AI. Ivan Su of Morningstar observed that the preview instead highlights DeepSeek’s positioning against domestic peers, a dynamic absent during the R1 rollout.
Neil Shah of Counterpoint Research called the V4 preview “a serious flex,” emphasizing its reduced inference costs — the expense of running a trained model to generate outputs. Wei Sun, Counterpoint’s principal AI analyst, added that the model’s benchmark profile indicates strong agent-based performance at significantly lower expense.
DeepSeek’s origins trace to 2023, with its V3 model gaining attention in late 2024 for achieving strong results using less powerful hardware. The January 2025 release of R1, built in two months for under $6 million using older Nvidia chips, challenged assumptions about U.S. AI leadership and prompted scrutiny of Big Tech’s spending levels.
Some analysts questioned whether DeepSeek truly operated under such tight resource constraints, suggesting it may have had access to greater funding or more advanced chips than disclosed. The model’s rise also triggered regulatory pushback, with multiple U.S. states, Australia, Taiwan, South Korea, Denmark, and Italy imposing restrictions on DeepSeek-R1 over data privacy and national security concerns.
Despite these headwinds, the Stanford AI Index 2026 concluded that Chinese companies have “effectively closed” the performance gap with U.S. rivals in AI development, though Silicon Valley maintains a slight edge in creating the most advanced frontier models.
Why hasn’t the V4 launch matched the market impact of DeepSeek’s R1 model?
Traders have already priced in the expectation that Chinese AI models are competitive and cheaper to run, reducing the surprise factor of new releases despite ongoing technical improvements.

How does DeepSeek’s V4 compare to leading U.S. AI models in performance and cost?
The V4-Pro leads all open-source models in math and coding, trails only Google’s Gemini 3.1-Pro in world knowledge, and performs marginally behind OpenAI’s GPT‑5.4 and Gemini 3.1-Pro, while offering lower inference costs than previous versions.
