Anthropic and Alignment
Thompson takes a realist position on Anthropic’s standoff with the Department of War over autonomous-weapons and mass-surveillance use cases. Anthropic refused the DoW’s demand that Claude be available for “any lawful use,” and the DoW threatened to designate the company a supply-chain risk and invoke the Defense Production Act.
Thompson’s core argument: Anthropic’s position is “fundamentally misaligned with reality.” If AI is as powerful as Amodei claims (he compares it to nuclear weapons), then the U.S. government cannot tolerate an unelected executive retaining veto power over military use. The choice is binary: either Anthropic accepts a position subservient to the elected government, or the government destroys Anthropic or removes Amodei. International law is ultimately a function of power, and the U.S. will not allow an independent power structure to develop that asserts autonomy from U.S. control.
Thompson also criticizes Amodei’s broader pattern: opposing open-source AI and advocating chip export controls on China without acknowledging the systemic risks (cutting China off from chips lowers the cost to China of destroying TSMC, potentially eliminating AI for everyone). His case for open source: (1) closed-only AI concentrates unimaginable power in a few hands, (2) AI proliferation is inevitable given the incentives, and (3) more AI is actually safer than limited AI, because AI is the best defense against AI.
Thompson acknowledges legitimate surveillance concerns but argues the solution is new laws and accountable oversight, not corporate veto power.
RDCO Mapping
Critical reference for understanding Anthropic’s positioning and risk profile. Cross-reference with the vault’s Jaya Gupta moat thesis on Anthropic. The open-source vs. closed debate directly affects RDCO’s tooling and infrastructure decisions. Thompson’s realist framing is useful for thinking about AI-governance content angles for Sanity Check.
Related
- 2026-01-05-stratechery-ai-human-condition
- 2026-02-03-stratechery-microsoft-software-survival