Will Anthropic make a deal with the Pentagon?


Yes

9% probability
Polymarket

$58,346 Vol.



In February 2026, the Pentagon announced it would designate Anthropic as a national security supply chain risk after Anthropic refused to remove AI safety restrictions from its acceptable use policy. Donald Trump subsequently directed all federal agencies to cease using Anthropic's technologies, with a six-month phase-out period for agencies such as the Department of Defense which are actively using Anthropic's products.

This market will resolve to “Yes” if Anthropic and the United States Department of Defense (DOD/Department of War) reach any commercial agreement to allow for the use of Claude or other Anthropic artificial intelligence models by DOD employees by April 30, 2026, 11:59 PM ET. Otherwise, this market will resolve to “No”.

A commercial agreement between Anthropic and a broader set of the US government that grants usage of Anthropic AI models to DOD employees will count. However, agreements or designations that allow Anthropic to offer its services to the DOD but do not constitute an effective agreement for Anthropic to do so will not count (e.g. inclusion of Anthropic on a Master Service Agreement or an Indefinite Delivery/Indefinite Quantity contract).

An official announcement of a qualifying agreement, made within this market’s timeframe, will count, regardless of whether or when the agreement actually goes into effect.

Official announcements that the previously agreed contract between Anthropic and the DOD will be fully or partially reinstated, or otherwise will continue without impediment, will count, so long as this includes extended use of Anthropic AI models by DOD employees beyond any designated phase-out period.

Continued use of Anthropic technologies by DOD employees without a qualifying agreement (e.g. during a six-month phase-out period) will not count. A court ruling that the designation of Anthropic as a supply chain risk is unlawful will not qualify for a “Yes” resolution unless it is accompanied by a reinstatement of Anthropic's DOD contract or a new qualifying Anthropic-DOD agreement.

The primary resolution sources for this market will be official information from Anthropic and the United States federal government; however, a consensus of credible reporting will also be used.
Volume
$58,346
End Date
Apr 30, 2026
Market Opened
Mar 6, 2026, 1:33 PM ET
Trader consensus on Polymarket reflects a 92.5% implied probability against Anthropic forging a new deal with the Pentagon, driven primarily by the February 2026 collapse of the prior $200 million Department of Defense contract over irreconcilable AI safety guardrails. Anthropic refused to grant unrestricted access to its Claude large language models for applications such as autonomous weapons or mass surveillance, prompting the Pentagon to terminate the agreement, label the firm a national security supply chain risk, and pivot to OpenAI. The appeals court's April 8 rejection of Anthropic's injunction bid has entrenched the standoff amid ongoing litigation. While talks with the Trump administration over Anthropic's next AI model continue, a realistic shift would require policy concessions that appear unlikely given both parties' firm stances on responsible AI deployment.


Beware of external links.

Frequently Asked Questions

"Will Anthropic make a deal with the Pentagon?" is a prediction market on Polymarket with 2 possible outcomes, where traders buy and sell shares based on what they believe will happen. The "Yes" outcome currently trades at 9%. Prices reflect real-time, crowd-sourced probabilities: for example, a share priced at 9¢ means the market collectively assigns that outcome a 9% probability. These odds shift continuously as traders react to new developments and information. Shares of the correct outcome can be redeemed for $1 each when the market resolves.
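The share-price arithmetic above can be sketched in a few lines of Python. This is a minimal illustration of the pricing and redemption rules described here, not Polymarket code; the function names are made up:

```python
def implied_probability(price_cents: int) -> float:
    """A share priced at N cents implies an N% crowd-sourced probability."""
    return price_cents / 100

def redemption_value(shares: int, outcome_correct: bool) -> int:
    """At resolution, each share of the correct outcome redeems for $1."""
    return shares if outcome_correct else 0

# This market's "Yes" shares trade at 9 cents -> 9% implied probability.
print(implied_probability(9))        # 0.09
# 100 "Yes" shares cost $9; they pay $100 if "Yes" resolves, $0 otherwise.
print(redemption_value(100, True))   # 100
print(redemption_value(100, False))  # 0
```

The asymmetry is the whole trade: a 9¢ "Yes" share risks $0.09 to win $1.00, which is only profitable if the true probability exceeds 9%.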

As of today, "Will Anthropic make a deal with the Pentagon?" has generated $58.3K in total trading volume since the market opened on Mar 6, 2026. This level of trading activity reflects strong engagement from the Polymarket community and means the current odds are informed by a deep pool of market participants. On this page you can follow live price movements and trade directly on any outcome.

To trade on "Will Anthropic make a deal with the Pentagon?", browse the 2 available outcomes listed on this page. Each outcome shows a current price representing the market's implied probability. To take a position, pick the outcome you believe is most likely, select "Yes" to trade in its favor or "No" to trade against it, enter your amount, and click "Trade". If your chosen outcome is correct when the market resolves, your "Yes" shares pay $1 each; if it is wrong, they pay $0. You can also sell your shares at any time before resolution to lock in profit or cut losses.

This is a fully open market. For "Will Anthropic make a deal with the Pentagon?", the "Yes" outcome stands at just 9%. Because no outcome has secured a strong majority, traders see the result as highly uncertain, which can present unique trading opportunities. These odds update in real time, so bookmark this page to watch how the probabilities evolve.

The resolution rules for "Will Anthropic make a deal with the Pentagon?" define exactly what must happen for each outcome to be declared the winner, including the official data sources used to determine the result. You can review the full resolution criteria in the "Rules" section above the comments on this page. We recommend reading the rules carefully before trading, as they specify the precise conditions, edge cases, and sources governing how this market resolves.