Standoff Between Anthropic and the Pentagon: AI Confronts Its Ethical and Military Limits
Washington/San Francisco, February 27, 2026 – Hours before a crucial deadline, artificial intelligence giant Anthropic is maintaining its firm stance in the face of intense pressure from the U.S. Department of Defense (Pentagon). CEO Dario Amodei reaffirmed Thursday that his company "cannot in good conscience" accept the demands of Defense Secretary Pete Hegseth, who is calling for unrestricted access to the Claude model for "any lawful" military use.

The conflict, which has been simmering for several months, reached its peak this week. Anthropic, known for its commitment to "safe and responsible" AI, had deployed Claude – one of the most advanced models on the market – on classified U.S. government networks, making it the first frontier model authorized at that level of sensitivity. The contract in question, valued at up to $200 million, covers intelligence, operational planning, and strategic analysis applications. But the Pentagon, under Pete Hegseth's leadership, is now demanding the removal of the ethical safeguards built into Claude. Two "red lines" remain non-negotiable for the company:
In a statement published Thursday on Anthropic's website, Dario Amodei explained that the Pentagon's most recent proposals "have made virtually no progress" on these crucial points. "These threats do not change our position: we cannot in good conscience comply with their demands," he stated, emphasizing that Claude is not yet reliable enough to make life-or-death decisions without strict constraints. In response, the Pentagon set a firm deadline: Friday, February 27, at 5:01 p.m. Eastern Time. After this deadline, the threats include:
These threats have sparked shock and criticism in tech and political circles. Observers point to the inconsistency: on the one hand, treating Anthropic as a national security threat; on the other, claiming its AI is so critical that it must be compelled to cooperate. AI law and policy experts describe the approach as "contradictory" and "unprecedented" when it targets an American company. Anthropic, which has always championed security as a core value (through its internal "Constitution" for Claude), says it is prepared to lose this strategic partnership rather than compromise its principles. The company continues to affirm its support for national defense, but within clear ethical boundaries. This standoff raises major issues for the future of AI:
As the deadline rapidly approaches, the industry is following the case minute by minute. A last-minute resolution remains possible, but for now, Anthropic is holding firm – at the risk of becoming the first American tech company to be blacklisted by its own government. What do you think? Should ethical safeguards take precedence, even at the cost of a conflict with the Pentagon? Share your opinion in the comments! (Sources: Anthropic press releases, statements by Dario Amodei, The Guardian, Reuters, Axios, The New York Times, Politico, NPR, Washington Post – February 2026)