# Every — (Re(Re))Introducing Sparkle: Agent-Native File Organizer (Apr 14 2026)
## Why this is in the vault
Sparkle’s rebuild is a concrete, shippable instance of the “Claude Code SDK unlocks agent-native consumer apps” pattern. Two takeaways worth keeping: (1) the explicit model-tiering architecture (Opus 4.6 for structural judgment, Haiku 4.5 for routine classification) validates a cost/quality pattern RDCO should adopt; (2) the “agent-native became practical four months ago when Claude Code SDK became available” quote is a concrete time-anchor for the harness-thesis timeline.
## ⚠️ Sponsorship
This is an Every-internal product launch. Every is promoting their own product Sparkle, bundled as a subscriber benefit. The newsletter is explicitly promoting the paid subscription tier. Not a third-party paid sponsorship but still a non-neutral angle — treat the implementation claims as generally trustworthy (it’s a shipping product) but the “AI produces better outputs when paired with human judgment” framing as marketing copy.
## The core argument
The old Sparkle (2024) organized files with a rigid AI-imposed structure. The new Sparkle does three things differently:
- Clean first, then organize. ~80% of files on the average Mac are screenshots, installer DMGs, duplicates, system cache. The tool purges this debris before proposing a structure.
- Collaborative structure-building via chat. The agent proposes a folder structure; the user iterates by chatting (“merge these two,” “rename Projects to Work,” “add a Client Projects subfolder”).
- Model-tiering under the hood:
  - Opus 4.6 analyzes a sample of recent files to propose the top-level structure (the expensive judgment call).
  - Haiku 4.5 handles ongoing file classification into the established folders (the cheap routine call: “Q1 invoice.pdf” → “Finance”).
  - Explicit rationale: use the smart model where it counts; don't pay for it on routine operations.
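The tiering pattern can be sketched as a pure routing rule. This is a hypothetical illustration, not Sparkle's actual code: the model IDs, task names, and `classify_file` helper are all assumptions for the sake of the example.

```python
# Hypothetical sketch of Sparkle-style model tiering: route each task to the
# cheapest model that can handle it. Model IDs and task names are illustrative.
EXPENSIVE_MODEL = "claude-opus-4-6"   # structural judgment: propose the folder tree
CHEAP_MODEL = "claude-haiku-4-5"      # routine judgment: file -> existing folder

# Tasks that are mostly deterministic once the structure is stable.
ROUTINE_TASKS = {"classify_file", "route_task"}

def pick_model(task: str) -> str:
    """Use the smart model only where it counts; default routine work to Haiku."""
    return CHEAP_MODEL if task in ROUTINE_TASKS else EXPENSIVE_MODEL

def classify_file(filename: str, folders: list[str]) -> dict:
    """Build the (hypothetical) cheap classification request for one file."""
    return {
        "model": pick_model("classify_file"),
        "prompt": f"Which of {folders} does '{filename}' belong in? Reply with one folder name.",
    }
```

The design point is that the routing decision itself is plain code, not a model call: the expensive model is only ever invoked for the one-time structural proposal.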
The enabling claim: “Sparkle’s agent-native architecture became practical about four months ago, when the Claude Code SDK became available. Before that, you could approximate… but building it safely was much harder.”
## Mapping against Ray Data Co
The model-tiering pattern is directly applicable to RDCO’s own stack.
- Right now, most of RDCO’s skills call Claude Code with whatever the default model is. We’re paying Sonnet/Opus rates for routine classification that Haiku could handle.
- Examples of candidates for Haiku:
  - /process-inbox classification routing (which folder does this file belong in? mostly deterministic once the vault structure is stable)
  - /check-board task routing (priority-owner sort is deterministic; Haiku could read the board and emit the JSON)
  - Tracked-author CRM candidate classification
- Candidates that should stay on Opus/Sonnet:
  - /process-newsletter article assessment (requires the “Mapping against RDCO” judgment)
  - /cross-check synthesis across sources
  - /audit-model test-plan generation
  - /improve self-improvement reasoning
Practical next move: the next time we touch a skill file, add a model: claude-haiku-4-5 frontmatter override for the routing/classification skills. That's roughly a 5-10x per-call cost saving on the deterministic work.
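A minimal sketch of that override, assuming the skill files use YAML frontmatter and that model: is a supported key there (verify against the actual skill format before relying on it):

```yaml
---
name: process-inbox
model: claude-haiku-4-5   # routing/classification only; judgment skills keep the default model
---
```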
The time-anchor. The “agent-native became practical four months ago” claim places the inflection at roughly Dec 2025. That’s consistent with the working-context observation that RDCO itself went from ~20 files to ~1,400 over the weekend once the skills architecture stabilized. Same inflection.
## Related
- 2026-04-11-garry-tan-thin-harness-fat-skills — Sparkle is a real agent-native consumer product; the Tan architecture is exactly what Sparkle uses
- commentary-tan-fat-skills-thin-harness-2026-04-14 — model tiering is a form of the Fat Code / Fat Skills discipline
- 2026-04-13-moura-entangled-software-agent-harnesses-dead — Moura’s dissent argues harness-less agents; Sparkle counter-evidence (agent-native with a harness ships)
- ../04-tooling/rdco-state-ownership-architecture — same architectural lineage