06-reference

every-who-isnt-using-gpt-5-5

2026-04-29 · reference · source: Every · by Laura Entis
gpt-5-5 · opus-4-7 · model-switching-costs · cto-to-ic-pipeline · anthropic · openai · agent-deployer · daily-driver · every

“Who Isn’t Using GPT-5.5” — @Laura Entis (Every, Context Window)

Why this is in the vault

One week into the GPT-5.5 release, Every’s Context Window publishes the adoption-friction read: not “is the model good” (the Apr 23 Vibe Check already settled that) but “why aren’t more people switching?” The interesting layer is the CTO-to-IC pipeline observation — six former CTOs of billion-dollar companies (Instagram, Workday, Box) have left C-suite roles to become individual contributors at Anthropic. That detail lands directly on top of two threads RDCO has filed in the last 48 hours: Reiner Pope’s inference-architecture filing and Jonathan Siddharth’s enterprise-superintelligence-loop thesis. Both argue that operational depth — not abstract leadership — is where the next AI moat gets built. The Anthropic CTO-to-IC migration is observational evidence for that claim.

Body did not render via Gmail (Every’s known plaintextBody-empty pattern). Reconstructed from the canonical URL on every.to — flag for manual review on first pass. Frontmatter source_fidelity: reconstructed-from-web.

⚠️ Sponsorship

Monologue (transcription/voice-input tool) ran a marked sponsor placement in the body. Standard third-party paid; no editorial conflict with the GPT-5.5 framing. Disclosed clearly in-line.

The core argument

Adoption is unevenly distributed; switching costs are the bottleneck, not capability. GPT-5.5 is widely conceded to be excellent on speed and instruction-following, but established Claude workflows generate enough sunk-cost gravity that “do I really have to?” is the dominant team response. The reluctance is not a referendum on model quality — it’s about the migration tax on already-tuned prompt libraries, integrations, and agent harnesses.

The CTO-to-IC pipeline at Anthropic. Six former CTOs from billion-dollar companies (Instagram, Workday, Box) have moved out of C-suite roles into IC engineering positions at Anthropic. Entis frames this as evidence that AI has so dramatically reordered engineering work that experienced leaders need hands-on exposure to the tools their teams now use daily — leadership-by-management is no longer a viable position from which to set technical direction.

One-week-in field reports from Every staff.

The “slot machine” addiction loop. Writer Willie Williams’s framing: each prompt iteration is a pull on a slot machine that might produce the perfect output. The compulsive-toggling behavior this creates is itself a productivity tax. The skill to develop is knowing when to accept “fuzzy edges” and ship — a distinct capability from prompt engineering itself.

The goblin-ban revelation. OpenAI discovered an internal personality-tuning quirk in GPT-5.5: the reward model encouraged creature metaphors (specifically goblins/gremlins), which proliferated through training. Developer instructions now restrict creature-chat unless directly relevant. A small but interesting tell about how much of model “personality” is artifact of training-reward shape rather than deliberate design.

Issue contents

Context Window is hybrid (essay lead + Mini-Vibe-Check curation). For this issue the lead piece is the GPT-5.5 adoption-friction essay; the Mini-Vibe-Check sidebar appears to have been folded into the body as the staff field-report quotes (Klaassen, Tedesco, Claudie head-to-head, Williams). No separate third-party curation links section appeared in the rendered article — atypical for a Context Window issue, but consistent with a “single-topic week” framing where the whole issue collapses around one news beat (the GPT-5.5 release).

Mapping against Ray Data Co

Strong mapping. Three live RDCO threads converge.

1. The CTO-to-IC pipeline confirms the agent-deployer role thesis from a different angle. 2026-04-14-levie-agent-deployer-role-jd argues the new bottleneck role is the operator who can stand up agents in real workflows — a hands-on, IC-shaped job, not a managerial one. Entis’s data point — six billion-dollar-company CTOs moving down the org chart at Anthropic — is the strongest external evidence yet for that thesis. When experienced engineering leaders self-demote to IC at frontier labs, the signal is that the strategic value of being close to the model outweighs the strategic value of being above the org. This is the same shape RDCO is betting on with the phData AI Workforce engagement model: the consultant who can ship is more valuable than the consultant who can advise. File this CTO-to-IC observation as a citeable proof-point in any phData deck slide on “why agent deployment is an IC discipline.”

2. Switching-cost-as-moat pairs cleanly with Reiner Pope’s inference-architecture filing. 2026-04-29-dwarkesh-reiner-pope-gpt5-claude-gemini-training argued that inference architecture is now where the durable moat lives — specific cluster topologies, batching strategies, and prompt-cache layouts that can’t be easily ported between providers. Entis’s adoption-friction read is the demand-side mirror of Pope’s supply-side claim: customers don’t switch because their prompt-cache, agent-harness, and integration layer are optimized to one provider’s serving stack. Both pieces converge on the same conclusion — the lock-in is structural and operational, not capability-based. For RDCO consulting work, this means advising on switching is a high-value engagement: the client who walks in saying “we should be on GPT-5.5” actually needs help auditing what they’d lose if they migrate.

3. The Claudie-beats-GPT-5.5 anecdote validates the Turing superintelligence-loop thesis. 2026-04-30-jonathan-siddharth-turing-superintelligence-loop argues the moat is the data and deployment loop, not the foundation model. Every’s consulting team finding that their Claude-tuned agent “Claudie” beat raw GPT-5.5 on a domain-specific task (sales-proposal generation) is the textbook illustration: a specialized harness wrapping a slightly-less-capable base model outperforms the newer, faster general-purpose model on a real workload. Same week, two filings making the same argument from different angles — that’s a strong-cluster signal worth a content piece, not just a vault note.

4. Loose connection to the Meta Ads CLI launch. 2026-04-30-meta-ads-cli-agent-native-launch is the same-day reminder that vendors are now shipping agent-native primitives (CLIs, APIs, structured surfaces) explicitly designed for agent consumption. The Entis adoption-friction story is the customer side of that supply-side shift: vendors are racing to ship agent-native surfaces, but customer adoption is throttled by switching costs in the existing harness. Useful framing for RDCO if/when we write on agent-native platform shifts: the supply is moving faster than the demand can switch, which means the integration consultant has a multi-year window.

Sanity Check angle (not a derivative recap, per the no-derivative-pieces feedback). The Entis piece is the source. The original re-frame is: the CTO-to-IC migration is the canary for how AI rewrites the management ladder. When the senior person on the team needs to be the one closest to the model, the org chart inverts — and most enterprises don’t have a job ladder for “principal IC who reports to the CEO.” That’s the gap consultants fill until the ladder gets rebuilt — a Sanity Check angle that uses Entis’s data point as evidence and lands in RDCO’s actual area (operating-model discipline + AI deployment) rather than rehashing what Every published.

Methodological flag. The “six former CTOs” count is asserted without a public list. The named companies (Instagram, Workday, Box) are credible but unverified in the article. Worth confirming via LinkedIn before quoting in any client-facing material — not a blocker, just a “trust but verify” before deck use.