
How I animate 3Blue1Brown | A Manim demo with Ben Sparks

2026-04-19 · transcript · source: 3Blue1Brown YouTube

Raw transcript — How I animate 3Blue1Brown | A Manim demo with Ben Sparks

Source: https://www.youtube.com/watch?v=rbu7Zu5X1zI · Duration: 53m 40s · Captured: 2026-04-20

Full clean transcript stored at /tmp/yt-process/rbu7Zu5X1zI.txt during ingestion (11,154 words). Per copyright policy, raw transcript preserved for internal reference. Re-download via:

yt-dlp --write-auto-sub --sub-lang en --skip-download --sub-format vtt -o "%(id)s" "https://www.youtube.com/watch?v=rbu7Zu5X1zI"
python3 ~/.claude/scripts/vtt-to-text.py rbu7Zu5X1zI.en.vtt

The most common question I get about 3Blue1Brown is: what do I use to animate the videos? The short answer is that I wrote a custom Python library named Manim, so it's all programmatic and also very bespoke. What I wanted to do with this video is offer a behind-the-scenes look, to show you what Manim is for those who don't know, and to show a little bit about how I use it and what the workflow is. I sat down with Ben Sparks when I was in the UK a couple of months ago; many of you may recognize him from his many great Numberphile videos. He wanted to know how Manim worked, and I knew a number of other people had the same question. After a simple hello-world example, we animate the famous Lorenz attractor, which is very important in the foundations of chaos theory.

[00:01:00] The way I started the project, basically when I was finishing my undergrad, was that I wanted a coding project that would somehow let me better illustrate mathematical functions as transformations. I made a super scrappy bit of Python code for that and used it to make the first video on this channel, and as I made more videos the tool improved. The most recent video I made, about holograms, has visuals I was pretty proud of; they would have been dramatically harder to produce even two or three years ago, but it was actually kind of a joy to make because of all the workflow improvements over the years. I've always posted the code I write for videos openly on GitHub, and I made the underlying tool, Manim, open source. But I don't, as a personality type, really have the constitution to manage an open-source project, so I wasn't the most attentive to issues and pull requests. A community of people who wanted it to be a more robust tool forked the repo and created the Manim community version.

[00:02:04] The Manim community version is generally recommended for beginners. What I'll be demoing with Ben is my own version, but you should be aware that this divide exists. Without further ado, let's dive into the hello-world example. It's all in Python. I've created a scene: all of the scenes that I edit together take the form of a class in Python, and inside a certain method called construct is where all the code that renders stuff lives. Objects get added to it, like a circle, and we can add a square in there too. If I run all of that, we have the square. There's a Python terminal that's talking to the scene itself.
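The structure just described can be sketched in plain Python. Since ManimGL may not be installed wherever this is read, minimal stand-in classes are defined here purely to show the shape; in real code, Scene, Circle, and Square come from the manimlib package.

```python
# Stand-in stubs so this sketch runs anywhere; real scenes import
# Scene, Circle, and Square from manimlib (ManimGL) instead.
class Mobject:
    pass

class Circle(Mobject):
    pass

class Square(Mobject):
    pass

class Scene:
    def __init__(self):
        self.mobjects = []

    def add(self, *mobjects):
        self.mobjects.extend(mobjects)

# The shape of a hello-world scene, as described above: a class with a
# construct method where all the rendering code lives.
class HelloWorld(Scene):
    def construct(self):
        circle = Circle()
        square = Square()
        self.add(circle, square)

scene = HelloWorld()
scene.construct()
```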

[00:04:03] A common workflow is to write code, copy it, and paste it into the terminal. I have a shortcut written in Sublime, command R, that copies the line and runs the text: you hit the shortcut, the code runs, and it outputs the visual, an immediate check. There's slightly more going on under the hood. If I pull up a scene from the most recent video on holograms, these can get quite long; the final output was a four-and-a-half-minute MP4 file describing diffraction gratings.

[00:05:03] It’s nice to have these long scenes because they share a bunch of context. The thing I want to highlight is that if I’m working on the scene and I’m somewhere in the middle of it, while I’m iterating on just one little section, I might want to be able to run the code of this section and see what it does.

[00:06:01] In order for that to work, when I’m pasting it in, it can’t just literally be pasting the code. It has to revert to the state that the scene had at the start. So it’s really running this little thing called checkpoint paste, which basically says if the thing that was copied starts with a certain comment, I’m going to see if I’ve seen that comment before and cached a state of the scene associated with that. Basically it’s making it behave a little bit more like a notebook, like a Jupyter notebook.

[00:07:00] Most objects default to being in the center. Instead of just adding an object, a different thing you can do, anytime you want some kind of animation, is call a method named play. In this case, maybe I want to write the text, to make it look like it's being written on. I can pass parameters too: say I want that to happen a little longer, with a run time of three seconds. Anyone who's watched any Manim videos recognizes that effect. A different philosophy with Manim was: anything can transform into anything.

[00:08:03] I'm going to make the H of that text turn into the circle. So you defined the circle already, but you didn't add it anywhere, and now you're going to transform the H into it. A lot of things derive from Transform.
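Those two ideas, writing text on with a run time and transforming one piece of an object into another, might look roughly like this inside construct. This is a ManimGL-style sketch; the exact names (Text, Write, Transform, run_time) match what the demo describes, but the precise call shapes are my assumption, not verified code.

```python
text = Text("Hello world")
self.play(Write(text), run_time=3)        # looks like it's being written on
self.play(Transform(text[0], Circle()))   # the "H" morphs into a circle
```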

[00:09:01] There are all sorts of nice smoothing functions. There's an attribute called the rate function, and it defaults to a thing called smooth. If it were linear, you'd notice it's a little jerky. The Lorenz attractor is this very bizarre shape that came up in the early history of chaos theory. It comes from a set of differential equations in three dimensions.
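The smooth-versus-linear contrast can be illustrated with plain functions. The quintic smoothstep polynomial below is a common easing curve; I'm assuming Manim's `smooth` behaves similarly (zero velocity at both ends), which is what removes the jerky look.

```python
def linear(t):
    # Progress maps straight through: constant speed, abrupt start/stop.
    return t

def smooth(t):
    # Quintic smoothstep: starts and ends with zero velocity,
    # so the animation eases in and out.
    return t * t * t * (10 - 15 * t + 6 * t * t)
```

Both take an animation's progress t in [0, 1] and return the displayed progress; swapping the rate function changes pacing without changing what is drawn.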

[00:11:02] So for the Lorenz attractor, our setup is just some axes with some coordinates. The way I started here, I went to ChatGPT. What you want for the math underlying this is to feed some software a differential equation and an initial condition and ask: how does this initial condition evolve? Obviously, people have built software packages that do this. So I just asked: write me a Python function using some numerical ODE solver to find a solution of the Lorenz equations.

[00:13:00] As you've no doubt found, if you're trying to engage with some new library, that's just a nice way to see what it offers. It solves a lot of the protocol issues that can otherwise take three hours to get past. It mentions the SciPy integrate library, and it's got this solve-initial-value-problem function.
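The approach described here can be sketched as follows. The video doesn't show the exact code, so this is a reconstruction using scipy.integrate.solve_ivp with the classic Lorenz parameter values; the function names are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz_system(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz equations in three dimensions."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z), x * y - beta * z]

def ode_solution_points(function, state0, time, dt=0.01):
    """Evolve an initial condition and sample the trajectory every dt."""
    solution = solve_ivp(
        function, (0, time), state0,
        t_eval=np.arange(0, time, dt),
    )
    return solution.y.T  # shape (num_samples, 3)

points = ode_solution_points(lorenz_system, [10.0, 10.0, 10.0], time=10)
```

Each row of `points` is an (x, y, z) state, which is exactly the kind of list you'd then map onto the axes to draw the curve.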

[00:15:00] If our initial condition is something like (0, 0, 0), the solutions are all zero; the initial condition matters. So let's make it something like 10. There's a function called coordinates-to-points (c2p), which goes from whatever the coordinate system of the axes is to Manim's coordinate system. Python has a little syntactical snazziness where if you put an asterisk in front of an iterable, it'll unpack it.
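The asterisk-unpacking trick, shown in plain Python. The real c2p is a method on Manim's axes object, so a stand-in function is used here just to show the calling pattern.

```python
def c2p(x, y, z):
    """Stand-in for axes.c2p: maps axis coordinates to a scene point."""
    return (x, y, z)

coords = [10, 10, 10]          # one (x, y, z) state from the solution
point = c2p(*coords)           # * unpacks the list: same as c2p(10, 10, 10)
```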

[00:19:01] We could see how it evolves. So that’s going to take all those points, but just kind of run through them slowly. We want the runtime to match the actual time of the dynamic system. So it should draw it over the course of 10 seconds. When it draws things by default, it does that smoothing function. We want the rate function in this case to be linear because the math that it’s representing is relevant. It needs to be kept rather than masked.

[00:21:02] The reason the system is interesting is that it's chaotic. If you have initial conditions that are really close to each other but not quite the same, they evolve the same way for a while, but then they stop. We're going to create multiple curves, which I'll create as a VGroup, just to say these are vectorized objects.

[00:23:00] In this for loop, I'm going to add each curve. Then this creates basically a list of animations using ShowCreation, with the asterisk to pass them as multiple separate animations. So we've got this group of curves, with two for now.

[00:25:00] I'll add an updater function for the dots: a thing I want called on these dots at every iteration from here on forward. A Python question comes up here: the zip function is sort of stacking two lists in parallel.
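The answer to that Python question, as a minimal example (the list contents are placeholders):

```python
dots = ["dot_a", "dot_b"]
curves = ["curve_a", "curve_b"]

# zip walks the two lists in parallel, pairing them element by element,
# so each dot can be matched with the curve it should follow.
pairs = list(zip(dots, curves))
```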

[00:28:04] I’m going to put in a small line that anyone who knows Python is going to vomit at: globals().update(locals()). It is an annoying necessity. I’ll explain later why I would never do it in serious code.

[00:29:00] Future-me here. The IPython embed used by Manim has a scoping bug where defined variables aren’t visible inside functions defined in the embed. Workaround: globals().update(locals()). Better way: pass variables as default arguments to the function. The bug doesn’t reproduce in vanilla IPython.
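The "better way" described above, illustrated in plain Python; the updater name and list contents are made up for the example.

```python
curves = ["curve_a", "curve_b"]

# Binding `curves` as a default argument snapshots the reference at
# definition time, so the function body never needs to look it up in
# the enclosing scope (the lookup that breaks under the embed's bug).
def update_dots(dots, curves=curves):
    return [(dot, curve) for dot, curve in zip(dots, curves)]

result = update_dots(["d1", "d2"])
```

This avoids globals().update(locals()) entirely, which is why it's the preferable fix in serious code.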

[00:33:01] I made the dots bigger with GlowDot. If you want to interact with the rendering, panning the camera throughout, there's a frame.animate.reorient method that lets you smoothly move the camera over the course of an animation.

[00:36:01] Color choices: yellow and blue I can see apart, blue and teal not so much. Either you want them to be starkly different so you see it, or you want it to be aesthetically nice where you’re using shade rather than hue. Strange attractor: all the points are attracted to a certain shape, but that shape isn’t so simple as a cycle and it’s got a fractal nature.

[00:39:00] If we want a tail effect following the dots that fades out nicely, there's a thing called TracingTail. Set the curves' opacity to zero so they drive the dots invisibly while only the tails render. TracingTail has a parameter for how long the tail follows.

[00:42:03] Once I have a scene I like, I might want to render it out; I have a keyboard shortcut for that too. The way you might usually call Manim is from the command line, saying which scene you want to render. I have a couple of different parameters: if I say pre-run, it goes through it all without animating, to estimate how long it'll take and catch any errors. The Finder option pops the result up in Finder, and the W option means write it to file. I'll usually render at 4K, so sometimes the rendering takes a little longer.

[00:43:02] One of the reasons I wanted to make this behind-the-scenes is that the way I used to use Manim, and the way I suspect a lot of people who use Manim Community still do, is to always run it from the command line. The iteration cycle is annoying if, every single time you make an update, you have to re-render to see it. It was later in the game, around the same time I was changing the implementation to run on OpenGL, that I also added the interactive shell version.

[00:45:05] LaTeX support: there's a LaTeX object that takes some LaTeX and renders it. If it's a 3D scene and you want the text fixed in the camera frame, you call fix_in_frame.

[00:46:02] Tex-to-color (t2c): every time you find an x, color it red; everything that looks like a y, green. The library tries its best by default if you don't tell it which symbols to separate, and there's another parameter called isolate that helps when you need explicit control.
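In ManimGL style, those two options might look like this. This is a sketch: t2c and isolate are the names mentioned above, but the exact call shape and the color constants are my assumption.

```python
# Color every x red and every y green:
equation = Tex(R"x^2 + y^2 = z^2", t2c={"x": RED, "y": GREEN})

# When automatic splitting guesses wrong, isolate forces specific
# substrings to become their own addressable pieces:
equation = Tex(R"x^2 + y^2 = z^2", isolate=["x^2", "y^2"])
```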

[00:48:00] My intention with example scenes: if you want to get started, look at a couple of these. The Manim community version has much better documentation. The animations available are visible in a folder of the library called animation. All of the code I’ve ever written for any video is on GitHub at github.com/3b1b/videos.

[00:51:00] Auto-complete: there’s the language server protocol in Sublime. I don’t like using Copilot with Manim because I know what I want to do and it doesn’t quite know what I want to do. It’s really nice when engaging with a new library, but for Manim I prefer dumber auto-complete tools.

[00:53:00] The example scenes give you some sense of what’s available. By the time I post this I’ll add documentation to the README of the videos repo to outline the actual workflow. I’ll also post the full uncut version of that conversation to Patreon.