JPMorgan: DeepSeek V3.2 Triggers Second Wave of Impact in China's AI Market, Benefiting Most Stakeholders

Stock News · 12-05 10:40

JPMorgan Chase stated that the release of DeepSeek V3.2 marks the second wave of "DeepSeek Impact" in China's AI market. This development means that near-cutting-edge open-source inference capabilities are now available at moderate domestic prices, benefiting most stakeholders in China's AI ecosystem, including cloud operators, AI chip manufacturers, AI server manufacturers, AI agent platforms, and SaaS developers.

Analyst Alex Yao noted in the report that DeepSeek has reduced model API pricing by 30%-70%, while long-context inference could cut the compute workload by a factor of 6-10. Beneficiaries include Alibaba (09988), Tencent (00700), Baidu (09888), AMEC (688012.SH), NAURA (002371.SZ), Huaqin Technology (603296.SH), and Inspur Information (000977.SZ).
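As a rough illustration, the two headline figures can be combined into a back-of-the-envelope cost estimate. The base price and token volume below are hypothetical placeholders, not DeepSeek's actual rates; only the 30%-70% price cut and the 6-10x long-context workload reduction come from the report.

```python
# Illustrative cost math only: base price and token volume are hypothetical.

def effective_cost(base_price_per_mtok, tokens_m, price_cut, workload_factor):
    """Cost after a fractional price cut and a workload-reduction factor."""
    new_price = base_price_per_mtok * (1 - price_cut)
    new_tokens = tokens_m / workload_factor
    return new_price * new_tokens

# Hypothetical baseline: $1.00 per million tokens, 100M tokens of long-context work.
before = 1.00 * 100
# Report's ranges: 30%-70% price cut, 6-10x workload saving for long context.
least_favorable = effective_cost(1.00, 100, 0.30, 6)    # ~$11.67
most_favorable = effective_cost(1.00, 100, 0.70, 10)    # $3.00

print(before, round(least_favorable, 2), most_favorable)  # 100.0 11.67 3.0
```

Under these placeholder numbers, the combined effect of the price cut and the workload saving would shrink long-context inference cost to roughly 3%-12% of the baseline, which is why the report frames the release as broadly beneficial to downstream AI users.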

On December 1, DeepSeek announced the official release of the DeepSeek-V3.2 model. The model is designed to balance reasoning capability against output length, making it well suited to everyday use such as Q&A scenarios and general agent tasks. On public reasoning benchmarks, DeepSeek-V3.2 performs comparably to GPT-5 and slightly trails Gemini-3.0-Pro. Compared with Kimi-K2-Thinking, V3.2 produces significantly shorter outputs, lowering both computational cost and user wait time.

Unlike previous versions that lacked tool invocation in "thinking mode," DeepSeek-V3.2 is the company's first model to integrate thinking with tool usage, supporting both thinking and non-thinking modes for tool calls. The company introduced a large-scale agent training data synthesis method, constructing numerous "hard-to-solve, easy-to-verify" reinforcement learning tasks (1,800+ environments, 85,000+ complex instructions), greatly improving the model's generalization capability.

While the earlier V3.1 model was optimized for NVIDIA CUDA, the new V3.2/V3.2-Exp models provide Day-0 support for Huawei Ascend, Cambricon, and Hygon, along with ready-made kernels for SGLang, vLLM, and other inference frameworks—signaling a clear shift toward domestic hardware independence.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial products; nor should any associated discussions, comments, or posts by the author or other users be considered as such. It is for general information purposes only and does not take into account your investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy or completeness of the information; investors should do their own research and may seek professional advice before investing.
