“AI Hardware, Meta Display, Redefining VR and AR” — Ben Thompson
Why this is in the vault
Thompson did the rare thing of publicly retracting half of his own September 2025 take on the Meta Ray-Ban Display after actually using it. The retraction matters more than the device review: he is now arguing that AR is a categorically different product space from VR (not a continuum, not converging), that the input method (Neural Band) is the load-bearing innovation, and that the killer feature is anticipatory AI delivering “just-in-time UI” — not apps. That last claim sits directly on top of RDCO’s always-on agent thesis. If Thompson is right that the next compute surface is glasses + neural input + ambient AI, the harness/skill substrate Ray is already building is one of the natural backends for it.
Issue contents
Hybrid Update covering three connected topics:
- AI Hardware (Apple). Reuses the Gurman/Bloomberg leak about Apple’s roadmap (smart glasses, tabletop robot, smart display, security camera, AirPods with cameras, pendant). Smart glasses slipping to 2027, tabletop robot to 2028. Thompson’s frame: more devices is genuine growth for Apple (every iPhone owner buys another wearable for their face, plus stuff around the home), but it also accelerates the phone’s demotion from hub to peripheral — which means the actual enabler of growth is AI quality, not device count. Apple has a structural device-volume story and an unresolved AI execution problem.
- Meta Display (the actual device review). Thompson finally bought the Meta Ray-Ban Display and re-tried Orion. Scoring his September predictions: wrong, very wrong, right, very right, right. The two big “wrongs” go together and produce the Update’s load-bearing reframe (next section).
- Redefining VR and AR. The reframe: VR vs AR is not about whether your eyes are occluded — Vision Pro pass-through is high-fidelity and still feels like VR. It’s about whether the device demands your focus or sits in your periphery. Input method determines the category. Eye-tracking + hand-pinch gestures (Orion, Vision Pro) require immersion → VR. The Neural Band (Display) lets you control the device with your hand in your pocket while your attention stays on the real world → AR. By that test, Orion is VR-in-glasses-form, and Thompson now calls it a dead end (“if I’m going to immerse, I’d rather have the brighter screens of a Quest”). The Display is the first true AR product he’s used. Notifications + live captions + live translation + teleprompter are the right primitives; apps are the wrong primitive; AI mediation is the only correct chrome. Apple’s VR-style playbook (intentional, app-based) will not transfer to AR — these will diverge, not converge, the way PCs and tablets stayed separate. He thinks Meta is structurally better positioned than Apple in AR specifically. Personal note: he won’t actually wear them regularly because of his glasses prescription.
Mapping against Ray Data Co
Strong direct hit on the always-on agent thesis. Thompson’s “everything should be mediated and ideally anticipated by AI, showing me what I need when I need it and getting out of the way” is almost a verbatim restatement of the RDCO design intent for the Mac Mini agent — with the form factor swapped from desktop+channels (iMessage/Discord) to face+notifications. The shape is the same: ambient, anticipatory, no app launcher, AI as the only chrome. If glasses become the dominant AR surface, the RDCO architecture (skill library + memory + working-context substrate + channel-style messaging) ports naturally onto them — the channels just become “lens notifications” instead of iMessage. The skill stays the same; the surface changes. Cross-link to 2026-04-09-ramp-glass-ai-coworker and the always-on agent posture in 2026-03-29-infrastructure-decisions.
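The claim above — same skill, different surface — can be sketched as a minimal adapter pattern. Everything here is a hypothetical illustration: the names `Skill`, `Surface`, `Notification`, `IMessageSurface`, and `LensSurface` are invented for this note and are not actual RDCO identifiers.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Notification:
    """Hypothetical skill output payload; surface-independent."""
    text: str
    urgency: str = "ambient"  # ambient | interrupt

class Surface(Protocol):
    """Any delivery channel: iMessage, Discord, or a glasses lens."""
    def deliver(self, note: Notification) -> None: ...

class IMessageSurface:
    def deliver(self, note: Notification) -> None:
        print(f"[iMessage] {note.text}")

class LensSurface:
    def deliver(self, note: Notification) -> None:
        # A glasses lens renders the same payload as peripheral chrome,
        # not as an app: urgency decides placement, not the skill.
        print(f"[lens:{note.urgency}] {note.text}")

def run_skill(surface: Surface, context: str) -> Notification:
    """The skill stays surface-agnostic: it produces a Notification
    and lets the surface decide how to render it."""
    note = Notification(text=f"Heads up: {context}")
    surface.deliver(note)
    return note

# Same skill, two surfaces; only the rendering changes.
run_skill(IMessageSurface(), "meeting in 5 minutes")
run_skill(LensSurface(), "meeting in 5 minutes")
```

The design point matches Thompson’s framing: if the only coupling between skill and surface is a notification-shaped payload, swapping iMessage for a lens is an adapter change, not a rewrite.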
Reinforces the harness-thesis cluster from a hardware angle. Thompson explicitly argues the Display should not be an app platform and that AI is the right interface. That’s the harness thesis stated in consumer-surface vocabulary — the orchestration layer (AI deciding what to show, when, mediated through a single input device) eats the “what app do I open” decision. It sits next to 2026-04-22-stratechery-john-ternus-spacexai-cursor (Apple betting on hardware + edge silicon as the AI-era differentiator) — together these two Updates are Thompson’s coherent April 2026 read: the hardware companies that win the AI era are the ones that get the interaction model right, not the ones with the most devices on the truck. That sharpens the open tension flagged in the Ternus note: Apple’s edge-silicon bet wins compute economics; Meta’s Neural Band + AI-mediation bet wins the interaction model. They might not be the same company.
Neural Band as a generalizable interaction primitive — worth tracking. Thompson’s Tesla FSD anecdote (wanting the band to control a phone screen across the cabin without touching it) and the Garmin/CES 2026 demo of using the band to control car infotainment both suggest the Neural Band is not a glasses accessory — it is an input device looking for surfaces. If it generalizes, it becomes the keyboard-equivalent of the augmented-world layer Thompson is describing. Worth a vault watch-item: track Neural Band SDK releases and partnerships separately from Meta’s glasses roadmap.
Apple AR risk. Thompson explicitly flags that Apple’s old playbook (intentional, app-based, focus-demanding) is exactly wrong for AR, and that PCs-vs-tablets is the historical analog. If correct, Apple’s smart-glasses 2027 launch is structurally walking into a Windows-8 pattern. Cross-link to 2026-03-31-stratechery-apple-50-years-integration for the integration arc Thompson is now arguing has a hard limit at the AR boundary.
Operational read for Ray. Don’t bet content or product on a near-term glasses surface — Thompson himself notes prescription-lens issues alone will exclude a meaningful slice of users for years. But do bet on the interaction principles: anticipatory > intentional, AI-mediated > app-launcher, channel-style notifications > UI chrome. The RDCO agent already operates this way. The work is to keep the skill substrate portable across surfaces so when one of these glasses platforms (Meta Display v3, Apple’s 2027 smart glasses, whatever follows) crosses the daily-wear threshold, RDCO’s agent already speaks the right interaction grammar.
Related
- 2026-04-22-stratechery-john-ternus-spacexai-cursor — Thompson’s prior Update; Apple as hardware-defined company in the AI era
- 2026-03-31-stratechery-apple-50-years-integration — Apple’s integration arc; the historical playbook Thompson now says won’t transfer to AR
- 2026-01-14-stratechery-meta-compute — the Meta Reality Labs / compute tradeoff; Display is the bet that’s starting to pay
- 2026-01-13-stratechery-apple-gemini-ucp — Apple’s AI execution gap; same gap that makes its AR roadmap risky per this Update
- 2026-04-15-stratechery-amazon-globalstar-apple-angle — Apple as platform-supplier-dependent; pattern-match for the AR layer
- 2026-03-09-stratechery-macbook-neo-thin-macbook-memory — “Thin is In” thesis; same edge-vs-cloud-vs-AI tension Display sits inside
- 2024-06-13-moonshots-ep104-alvin-graylin-ai-agi-metaverse — earlier framing of AI+XR convergence; pre-Neural-Band era
- 2026-04-09-ramp-glass-ai-coworker — anticipatory AI assistant pattern; same interaction principles in a different surface
- 2026-03-29-infrastructure-decisions — RDCO always-on agent architecture; the harness this Update implicitly argues for