3Blue1Brown — Vectors | Chapter 1, Essence of linear algebra
Why this is in the vault
This is the canonical opener of Grant Sanderson’s Essence of Linear Algebra series (11.5M views as of April 2026, posted August 2016) and the single best lay-accessible answer to the question “what is a vector, really, and why does it matter that mathematicians, physicists, and computer scientists each see a different object when they hear the word?” The video earns its keep for four reasons. (1) It is the prerequisite explainer to point any audience member at before they engage with anything in our AI trilogy from cycle 30: neural networks, LLMs, and diffusion models all live in vector spaces, and a reader who hasn’t internalized the arrow ↔ list-of-numbers translation will read those AI explainers as magical thinking. (2) The three-perspectives framing (physics arrow / CS list / mathematician’s abstract axioms) is the cleanest available decomposition of why “what is a vector?” is a genuinely confusing question with multiple right answers, and it explicitly diagnoses the cross-disciplinary translation problem that data engineers face daily when moving between SQL columns, ML feature tensors, and geometric embeddings. (3) Sanderson’s thesis statement, “the usefulness of linear algebra has less to do with either one of these views than it does with the ability to translate back and forth between them,” is the load-bearing pedagogical claim of the whole field and a directly transferable principle for any cross-modal data work. (4) The video’s pedagogical structure (concrete object → multiple framings → operational definition → numerical follow-through → preview of next chapter) is reusable as a template for any technical-explainer Sanity Check piece. The series is also a load-bearing dependency for CA-014 (high-dimensional surface concentration): that concept is unreadable without first internalizing the arrow ↔ list translation introduced here.
Core argument
- A vector has three legitimate definitions, and the disagreements are the point. Physics: an arrow with magnitude and direction, freely placeable in space. CS: an ordered list of numbers (e.g., (sqft, price) for a house). Mathematician: anything where vector addition and scalar multiplication are sensibly defined, regardless of representation.
- For the linear-algebra series, anchor on “arrow rooted at the origin in a coordinate system.” This is a deliberate restriction relative to the physics view (which lets vectors float anywhere), and the restriction matters because it is what makes the correspondence between vectors and coordinate-tuples a bijection.
- Coordinates encode walking instructions. The pair [x, y] (written vertically, with square brackets, to distinguish it from a point) means: walk x along the x-axis, then y parallel to the y-axis. Every pair gives exactly one vector; every vector gives exactly one pair. The same recipe generalizes to triplets in 3D.
- Vector addition is tip-to-tail composition; numerically it’s component-wise. The geometric intuition (move along v, then along w → arrive at v+w) and the numerical recipe (add coordinates pairwise) describe the same operation. Why this definition? Because each vector encodes a movement, and adding vectors should compose those movements. Same shape as 2 + 5 = 7 on a number line.
- Scalar multiplication is stretching/squishing/flipping. Multiplying a vector by 2 doubles its length; by 1/3 squishes it to a third; by -1.8 flips and stretches by 1.8. Numerically: multiply each component by the scalar. The word “scalar” comes from this scaling action and is treated as effectively interchangeable with “number” throughout the field.
- Every linear-algebra topic to follow will revolve around these two operations. Span, basis, transformations, matrices, determinants, eigenvalues — each is built from vector addition and scalar multiplication. The series is structured to make this claim feel inevitable rather than arbitrary.
- The translatability between geometric and numerical views is the discipline’s actual value. Data analysts get a way to see lists of numbers as geometric objects (clarifies pattern-finding); physicists and graphics programmers get a way to drive geometric manipulation through arithmetic that computers can run. The whole power of linear algebra lives in the back-and-forth, not in either view alone.
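The two operations above can be sketched with plain Python lists. This is a minimal illustration of the component-wise recipes, not tied to any particular library:

```python
# Chapter 1's two foundational operations on plain coordinate lists.
# A vector is a list of numbers; the geometric reading (arrows rooted
# at the origin) and the numeric recipe agree operation by operation.

def add(v, w):
    """Tip-to-tail composition: walk along v, then along w."""
    return [vi + wi for vi, wi in zip(v, w)]

def scale(c, v):
    """Stretch, squish, or flip: multiply every component by c."""
    return [c * vi for vi in v]

v = [1, 2]
w = [3, -1]
print(add(v, w))       # [4, 1]
print(scale(2, v))     # [2, 4]
print(scale(-1.8, v))  # [-1.8, -3.6]
```

Addition is commutative exactly because walking v then w lands at the same point as walking w then v, and the component-wise recipe reproduces that automatically.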
Mapping against Ray Data Co
- Foundational pre-req for the AI trilogy. The cycle-30 trilogy (2026-04-20-3blue1brown-but-what-is-a-neural-network, 2026-04-20-3blue1brown-large-language-models-explained-briefly, 2026-04-20-3blue1brown-but-how-do-ai-images-and-videos-actually-work) all assume the reader has internalized “vector = arrow rooted at origin = list of coordinates.” A neural network’s input layer of 784 neurons is a vector in R^784; an LLM’s token embedding is a vector in R^512–4096; a diffusion image is a vector in R^(H×W×3). Without Chapter 1’s translation move, those higher-level explainers degenerate into hand-waving for any reader without prior linear-algebra exposure. This is the single most cited prerequisite when introducing AI work to clients who are smart but unfamiliar with the math.
- Sanity Check audience leverage. The data-engineering audience routinely operates on vectors — SQL row tuples, ML feature columns, embedding vectors in pgvector / Pinecone — without holding the geometric view at all. A “your row in a table is a vector, and operations on it inherit a 9-minute-video amount of geometric intuition” piece would land for the segment that’s been doing vector work for years without realizing it. Direct cross-link from any future Sanity Check piece on vector databases or RAG.
- CA-014 dependency (high-dim surface concentration). That concept’s load-bearing intuitions — embedding spaces, parameter manifolds, near-orthogonality of random vectors in high dimensions — are unreadable without the arrow ↔ list translation taught here. Chapter 1 is the prerequisite layer beneath the geometric facts CA-014 captures.
- Pedagogical template for explainer skills. Sanderson’s structure (concrete object → multiple framings → operational definition → numerical follow-through → preview of next) is directly portable to ~/.claude/skills/research-brief/, /draft-review, and any future skill that produces technical content. The “translate back and forth between two views” framing is the highest-leverage move for any topic where the audience is split across two disciplinary backgrounds (which is almost every Sanity Check topic).
- Translation discipline as a research-craft principle. The thesis that linear algebra’s value is in translating between views, not in either view alone, generalizes to RDCO’s broader research-output craft: every concept page benefits from at least two framings (mathematical + operational, theoretical + case-study, or technical + business). The vault concept-promotion bar implicitly enforces this: concepts only graduate when they have been seen across multiple framings (the CA-006 / CA-008 split is a worked example).
- Visual-language inspiration. Manim (the library Sanderson built to produce these animations) is the load-bearing process-power moat for 3B1B (see CA-022 in CANDIDATES). The visual style here — coordinate grids with extending tick marks, color-coded basis vectors, tip-to-tail addition shown geometrically and numerically side-by-side — is the canonical reference for any future RDCO motion work explaining linear algebra or geometry.
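The CA-014 dependency mentioned above can be made concrete with nothing beyond Chapter 1’s list-of-numbers view. A hedged sketch, where the dimension choices (512, 4096) echo the embedding sizes cited earlier but are illustrative assumptions, not values from the video:

```python
import math
import random

# Sketch of the near-orthogonality intuition behind CA-014, built only
# from Chapter 1 machinery: vectors as lists of numbers plus a dot
# product. The dimensions tested are illustrative, not canonical.

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

def cosine(v, w):
    """Cosine of the angle between v and w (1 = parallel, 0 = orthogonal)."""
    return dot(v, w) / (math.sqrt(dot(v, v)) * math.sqrt(dot(w, w)))

random.seed(0)
for n in (2, 512, 4096):
    v = [random.gauss(0, 1) for _ in range(n)]
    w = [random.gauss(0, 1) for _ in range(n)]
    # As n grows, |cosine| concentrates near 0: two random high-dim
    # vectors are almost always nearly orthogonal.
    print(n, round(cosine(v, w), 3))
```

This is why the arrow ↔ list translation has to come first: the statement “random embeddings are nearly orthogonal” only means anything once a list of 4096 numbers is also an arrow with an angle.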
Pedagogical structure (reusable template)
- Open with a definitional disagreement. Three perspectives on “what is a vector” — surfaces the cross-disciplinary translation problem before solving it.
- Pick one as the anchor for the series. Arrow rooted at origin — the deliberate restriction that makes everything else clean.
- Translate to the other view via coordinates. Walking instructions — geometric and numerical bijection established explicitly.
- Generalize the dimension. 2D → 3D, with the same structural recipe — primes the reader for higher-dim generalizations later.
- Define the two foundational operations. Vector addition (geometric tip-to-tail; numerical componentwise) and scalar multiplication (geometric stretch/flip; numerical componentwise). Both shown in both views.
- Justify each operation’s definition by intuition. Why this addition rule? Because it composes movements. Why this scalar rule? Because stretching arrow length matches multiplying components. Definitions that feel arbitrary become inevitable when motivated this way.
- Close by previewing the next chapter and naming the central thesis. Span / basis / linear dependence ahead; the field’s value lives in translating between views.
Notable quotes
- “There are three distinct but related ideas about vectors which I’ll call the physics student perspective, the computer science student perspective, and the mathematician’s perspective.”
- “In linear algebra, it’s almost always the case that your vector will be rooted at the origin.”
- “Every pair of numbers gives you one and only one vector, and every vector is associated with one and only one pair of numbers.”
- “Each vector represents a certain movement — a step with a certain distance and direction in space.”
- “Throughout linear algebra, one of the main things that numbers do is scale vectors, so it’s common to use the word scalar pretty much interchangeably with the word number.”
- “The usefulness of linear algebra has less to do with either one of these views than it does with the ability to translate back and forth between them.”
Related
- ~/rdco-vault/06-reference/transcripts/2026-04-20-3blue1brown-vectors-chapter-1-transcript.md — full transcript
- ~/rdco-vault/06-reference/2026-04-20-3blue1brown-linear-combinations-span-basis-chapter-2.md — direct sequel; introduces span, basis, linear dependence
- ~/rdco-vault/06-reference/2026-04-20-3blue1brown-linear-transformations-matrices-chapter-3.md — Chapter 3; matrices as transformations
- ~/rdco-vault/06-reference/2026-04-20-3blue1brown-but-what-is-a-neural-network.md — uses Chapter 1’s vector intuition for the input layer (R^784) and forward-pass sigmoid(Wa + b) notation
- ~/rdco-vault/06-reference/2026-04-20-3blue1brown-large-language-models-explained-briefly.md — uses Chapter 1’s vector intuition for token embeddings
- ~/rdco-vault/06-reference/2026-04-20-3blue1brown-but-how-do-ai-images-and-videos-actually-work.md — uses Chapter 1’s vector intuition for image-as-flattened-vector framing
- ~/rdco-vault/06-reference/2026-04-20-3blue1brown-volume-higher-dim-spheres-most-beautiful-formula.md — generalizes Chapter 1’s intuition into higher dimensions where it stops matching common sense
- ~/rdco-vault/06-reference/concepts/CANDIDATES.md — CA-014 (high-dim surface concentration) depends on the arrow ↔ list translation Chapter 1 establishes
Source provenance
- Channel: 3Blue1Brown (Grant Sanderson)
- Series: Essence of Linear Algebra, Chapter 1
- URL: https://www.youtube.com/watch?v=fNk_zzaMoSs
- Upload: 2016-08-06
- Duration: 9:51
- View count at ingest: 11.5M
- Sponsorship: None disclosed in this video; series funded by Patreon supporters and (across the channel) historically by Brilliant