Why this is in the vault
Two load-bearing data points for active RDCO theses in a single issue: (1) Warp going open-source under AGPL v3 with an explicitly agent-first contribution model — direct, current evidence for the agent-deployer thesis (the dev-tool category is openly redesigning itself around “human directs, agents code”); (2) Sakana AI training a 7B model whose job is to manage other LLMs — direct evidence for the harness-thesis (small specialist model orchestrating larger ones is the architectural pattern we keep arguing matters more than raw frontier scale). The Zed 1.0 and Gemma 4 31B-on-Apple-Silicon notes are secondary but reinforce the local-first / native-perf trend that’s adjacent to the personal-AI-tool layer the founder has been mulling.
Issue contents
Top Repo — Warp goes open-source. Full Rust codebase on GitHub, AGPL v3 (UI framework MIT), 40k stars on launch. Contribution model is agent-first: “Oz” cloud system runs multiple AI coding agents in parallel for the actual code/plan/test work; humans bring ideas and verify. BYO agent supported — Claude Code, Codex, or Gemini CLI all run inside Warp. New /feedback skill for in-terminal issue submission. OpenAI is the founding sponsor (powers the agentic workflows with GPT models).
Top News — Zed 1.0. Five years in, Rust + GPU rendering, no Electron. Ships with git, SSH remoting, debugger, real-time collab, and parallel-agent support (Claude / Codex / OpenCode). Mac/Windows/Linux. Weekly updates.
Top Model — Qwen3.6-35B-A3B. Alibaba MoE: 35B total, 3B active per response. 262K native context (extendable to 1M via YaRN). Scored 51.5 on Terminal-Bench 2.0. Runs at 100-120 tok/s on a single RTX 5080 via Unsloth GGUF. Apache 2.0.
Signals (numbered):
1. Microsoft open-sources TRELLIS.2 — 4B-parameter image-to-3D with high-res textured output (game-ready asset from one photo).
2. [Sponsored — Viktor] “AI coworker in Slack” that writes docs, ships PRs, runs reports, sends emails.
3. TauricResearch releases open-source multi-agent LLM framework for financial trading (54.5k stars).
4. Sakana AI trains a 7B model to manage other LLMs and beat them all.
5. Xiaomi releases MiMo-V2.5 — native omnimodal (text + vision + audio).
6. Google’s Gemma 4 31B now runs locally on Apple Silicon.
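The Top Model's 35B-total / 3B-active split is the standard MoE mechanic: a gate scores every expert, but each token only runs through the top-k, so active parameters stay a small fraction of the total. A toy sketch of that routing step (schematic only, not Qwen's actual gating; all shapes and names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, experts, gate_w, k=2):
    """Toy mixture-of-experts forward pass: the gate scores all experts,
    but only the top-k actually run, which is how total params (35B) can
    far exceed active params per token (3B), schematically."""
    scores = x @ gate_w                       # one score per expert
    top = np.argsort(scores)[-k:]             # indices of the k best experts
    w = np.exp(scores[top])
    w = w / w.sum()                           # softmax over the selected experts
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

d, n_exp = 8, 4
experts = [rng.standard_normal((d, d)) for _ in range(n_exp)]
gate_w = rng.standard_normal((d, n_exp))
y = moe_layer(rng.standard_normal(d), experts, gate_w)
print(y.shape)  # (8,)
```

Only 2 of the 4 expert matmuls execute per call; scale the same shape up and you get the 35B/3B ratio.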
Mapping against Ray Data Co
Strong — Warp open-source maps directly to the agent-deployer thesis. This is the clearest public statement yet from a venture-backed dev-tool company that the contribution model itself is being redesigned for agents. The framing — “your job is to bring the ideas and verify the output, the agents do the coding” — is exactly the role-shift the agent-deployer thesis predicts. Three operational implications worth tracking:
- Validates the “humans direct, agents execute” framing as something a serious commercial product is now requiring of its OSS community, not just suggesting. If the founder writes about agent-deployer-as-a-job-category, Warp’s /feedback skill + Oz parallel-agent system is the concrete tooling artifact to cite.
- AGPL v3 is a deliberate choice: copyleft prevents downstream hyperscaler closure. Worth contrasting with how MAC’s pack ecosystem might handle similar IP concerns if it ever has a community contribution model.
- OpenAI as founding sponsor of an explicitly multi-vendor terminal (BYO Claude/Codex/Gemini) is a small but real signal: model providers see strategic value in not locking the agent runtime to themselves. Coexistence over capture.
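The BYO-agent pattern underlying that last point (Claude Code / Codex / Gemini CLI as interchangeable runtimes inside one terminal) reduces to a thin adapter interface on the host side. A minimal sketch, assuming a uniform run-task surface; the method names are illustrative, not any vendor's real API:

```python
from typing import Protocol

class AgentRuntime(Protocol):
    """The uniform surface a host terminal needs from any coding agent.
    A real adapter would shell out to the vendor CLI; this is a stub."""
    def run_task(self, prompt: str) -> str: ...

class StubAgent:
    # Stand-in for a vendor CLI adapter (e.g. a hypothetical Claude Code
    # or Gemini CLI wrapper); the host never sees past the interface.
    def __init__(self, name: str):
        self.name = name
    def run_task(self, prompt: str) -> str:
        return f"{self.name}: {prompt}"

def deploy(agent: AgentRuntime, prompt: str) -> str:
    # The human "agent deployer" directs and verifies; the runtime
    # behind the Protocol is swappable without touching the host.
    return agent.run_task(prompt)

print(deploy(StubAgent("claude-code"), "add a feedback command"))
```

The design choice worth noting: the host owns the interface, not any one model vendor, which is exactly why a single sponsor can still back a multi-vendor runtime.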
Strong — Sakana 7B-managing-other-LLMs maps to the harness-thesis. A purpose-trained small model whose job is orchestration over inference is the architecture we’ve been arguing matters more than frontier scale. Add to the harness-thesis evidence stack alongside the Thariq Anthropic guidance and the “thin harness, fat skills” Tan piece. If a 7B can route/coordinate frontier models well enough to beat them on benchmarks, the value capture in the AI stack continues to migrate toward orchestration intelligence rather than raw weights.
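The orchestration pattern the harness-thesis points at can be sketched concretely: a small model scores the task against the available large models and delegates, so the learned value lives in the routing decision, not the worker weights. A minimal sketch under that assumption; all names are hypothetical and the scoring is a stand-in for what a Sakana-style 7B would learn:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Worker:
    """A large 'frontier' model behind a uniform call interface.
    Names and strength tags are illustrative only."""
    name: str
    strengths: set[str]
    call: Callable[[str], str]

def route(task: str, tags: set[str], workers: list[Worker]) -> str:
    """Tiny orchestrator: pick the worker whose declared strengths best
    overlap the task's tags, then delegate. In the Sakana-style setup,
    this scoring step is what the small manager model is trained to do."""
    best = max(workers, key=lambda w: len(w.strengths & tags))
    return best.call(task)

workers = [
    Worker("coder",  {"code", "debug"},      lambda t: f"[coder] {t}"),
    Worker("writer", {"prose", "summarize"}, lambda t: f"[writer] {t}"),
]

print(route("fix the failing test", {"code", "debug"}, workers))
```

If the router wins benchmarks while being 1/10th the size of its workers, the margin is pure orchestration intelligence, which is the value-migration claim above in miniature.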
Medium — Zed 1.0 + Gemma 4 31B on Apple Silicon + Qwen at 120 tok/s on a 5080. Local-first / native-perf trend. Relevant to any RDCO surface that wants to ship “private by default” tooling for the founder persona — the hardware substrate to run capable models locally is now mainstream-consumer (RTX 5080, M-series Mac). Doesn’t change a current decision, but worth keeping in the periphery for the “personal AI runtime” question if it ever gets queued.
Skip: TRELLIS.2 (image-to-3D, no current RDCO surface), TauricResearch trading framework (out of domain), MiMo-V2.5 (omnimodal interesting but no current product hook), Qwen weights themselves (the local-perf data point is already captured under Medium; the weights aren’t a direct RDCO action).
Curation section — notes
- Sponsorship — flagged. Two sponsor placements per AlphaSignal house style: (a) Lambda (“Improve your Model FLOPS Utilization by >25%”) between Top Repo and Top News; (b) Speechmatics (“agent.listen() — but who’s talking?”) after the Zed item. Plus signal #2 in the numbered list is a third paid placement for Viktor (“AI coworker in Slack”) — labeled as “Presented by Viktor” inline. AlphaSignal usually has 1-2 sponsors; today’s three is on the high end but each is clearly marked.
- No self-cross-promotion in this issue (AlphaSignal is purely a curation product; they don’t have their own course/event being plugged today beyond the standard “WORK WITH US” CTA at the footer).
- Lior’s editorial framing (“local-first is winning”, “raw performance is back in fashion”) is on-thesis with what we’d write — useful as evidence that this is a broader narrative crystallizing, not just our reading.
Related
- harness-thesis — Sakana 7B-orchestrator is fresh evidence; add to the citation stack.
- agent-deployer-thesis — Warp’s agent-first contribution model is the cleanest public artifact of this category to date.
- 2026-04-15-thariq-claude-code-session-management-1m-context — context-rot / harness discipline, same architectural family as Sakana’s orchestration approach.
- ai-coding-tools — Warp + Zed + the BYO-agent pattern (Claude Code / Codex / Gemini CLI as interchangeable runtimes) is the cross-cutting trend.
- local-first-ai — Gemma 4 31B on Apple Silicon, Qwen at 120 tok/s on consumer GPU.