DeepSeek V4, the anticipated trillion-parameter Mixture-of-Experts (MoE) multimodal large language model from Chinese AI lab DeepSeek, remains unreleased as of mid-April 2026; repeated slips from the initial February-March expectations have kept traders cautious. Founder Liang Wenfeng reportedly confirmed internally on April 10 that a late-April rollout is the target, following extensive rewrites to optimize for Huawei Ascend chips, sidestepping U.S. export controls on Nvidia hardware and reportedly enabling 35x faster inference at 1/70th of GPT-4 costs, with a 1M-token context window and native text/image/video/audio support. Competitive pressure from OpenAI's GPT-5.4 and Anthropic's models has prompted benchmark-focused refinements, with projected scores of 83.7% on SWE-bench and 92.8% on MMLU. Traders should monitor platform previews such as the Expert and Vision modes for signs of imminent API access or open weights on Hugging Face, either of which could resolve these markets within weeks.
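One way to watch for an open-weights release is to poll the Hugging Face Hub API for a matching model repo. This is a minimal sketch, not an official monitoring tool; the repo id `deepseek-ai/DeepSeek-V4` is an assumption (no such repo has been published), and the actual name DeepSeek chooses could differ.

```python
# Hypothetical sketch: check whether a candidate "DeepSeek V4" repo is
# publicly visible on the Hugging Face Hub. The repo id below is an
# assumption, not a confirmed name.
import urllib.request
import urllib.error

API_BASE = "https://huggingface.co/api/models/"

def hf_model_url(repo_id: str) -> str:
    """Build the Hub API URL for a model repo."""
    return API_BASE + repo_id

def model_is_public(repo_id: str) -> bool:
    """Return True if the repo exists and is publicly readable."""
    try:
        with urllib.request.urlopen(hf_model_url(repo_id), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 401/404: gated, private, or nonexistent repo
        return False

if __name__ == "__main__":
    print(model_is_public("deepseek-ai/DeepSeek-V4"))
```

Note that a repo appearing on the Hub would still need to meet this market's public-accessibility and flagship-naming rules before counting toward resolution.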
Experimental AI-generated summary using Polymarket data. This is not trading advice and plays no role in resolving this market. · Updated
DeepSeek V4 released by...?
$1,304,899 Vol.
April 30
81%
May 15
87%
Intermediate versions (e.g., DeepSeek-V3.5) will not count; however, versions such as DeepSeek V4 or V5 would count.
The "next DeepSeek V model" refers to the next major release in the DeepSeek V series, explicitly named as such or clearly positioned as a successor to DeepSeek-V3.
Only releases representing a core version progression in the DeepSeek V series, "clearly positioned as a successor to DeepSeek-V3," will qualify. Models not positioned as the new V flagship will not qualify; this includes derivative models (e.g., "V4-Lite," "V4-Mini"), task-specialized models, R-series reasoning models, and experimental or preview releases (e.g., "V4-Exp," "V4-Preview").
For this market to resolve to "Yes," the next DeepSeek V model must be launched and publicly accessible, including via open beta or open rolling waitlist signups. A closed beta or any form of private access will not suffice. The release must be clearly defined and publicly announced by DeepSeek as being accessible to the general public.
If a qualifying model is made publicly accessible and explicitly labeled with the relevant version name within the company’s official website, this will qualify as “publicly announced”. Labeling errors, placeholder text, or version names displayed on the website that do not correspond to a model that is actually accessible to the general public under the rules will not qualify.
The primary resolution source for this market will be official information from DeepSeek, with additional verification from a consensus of credible reporting.
Market Opened: Mar 31, 2026, 1:11 PM ET
Resolver
0x65070BE91...
Beware of external links.