Ziglings. Learn Zig by fixing small bugs in small programs. (Inspired by rustlings, though those exercises seem to be broader than just fixing errors.) A good way to learn a programming language, I think.
Blender 2.92 dropped recently. Geometry nodes look promising, and it’s crazy to see how all the grease pencil work has turned Blender into a viable 2D animation studio as well.
PEP 636. Pattern matching! In Python! Very much looking forward to this — I’ve loved using it in Rust.
Joel Hooks on blogs and digital gardens — this makes me want to finish my revamp of Slash so I can more easily add an actual digital garden to this site. (At some point I’ll write about that revamp, since I don’t think I’ve gone into any detail on it.)
Amy Hoy on how blogs broke the web — it’s not quite as bad as the headline sounds, but there’s still some good food for thought here. (You could say this is another way of looking at stock vs. flow.)
I made the initial animation in Blender, using the wave and displace modifiers and some postprocessing in the node editor. Then I imported the frames into After Effects and did a little more processing (added grain, some color adjustments). I exported the frames as a PNG sequence and then converted them to a GIF using ImageMagick on the command line:
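It was something along these lines (the exact flags may have been a little different, and I’m assuming frames named frame-0001.png and so on):

convert -delay 4 -loop 0 frame-*.png animation.gif

The -delay value is in hundredths of a second per frame, and -loop 0 makes the GIF repeat forever.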
As I’ve been tinkering around with graphics coding, I wanted to figure out how to map a sine wave onto a circle. Here’s how it went down. (Disclaimer: this is all unoptimized code that is almost certainly not the best way to do this. Also, I’m a beginner, so yes, this is very basic stuff.)
First off, I started a new Processing sketch and drew a circle:
The relevant code from the draw() function:
float x = width / 2;
float y = height / 2;
float radius = height / 3;
float angle = 0;
float angleStep = 0.005;
float twopi = PI * 2;
float dx, dy;
while (angle <= twopi) {
  dx = x + radius * cos(angle);
  dy = y + radius * sin(angle);
  ellipse(dx, dy, 2, 2);
  angle += angleStep;
}
The basic idea: I loop from 0 to 2π radians (a full circle) in steps of angleStep radians at a time. At each angle I calculate the vector from the center of the circle (x, y) to the point on the circle at a distance of radius. And then I draw a little circle at that point. With a small enough step size, you get a continuous line. (More on that later.)
Mapping a sine wave onto said circle really just means that when you calculate the vector, you apply a sine wave to the point’s distance from the center of the circle. So I added that in, including frequency and amplitude variables I could tweak:
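It’s the same loop as before, just with the radius modulated by the sine. A minimal sketch of the change (the values here are placeholders to tweak, and frequency wants to be a whole number so the wave lines up with itself at 2π):

float frequency = 12;  // whole number of wave cycles around the circle
float amplitude = 20;  // wave height in pixels

while (angle <= twopi) {
  float r = radius + amplitude * sin(angle * frequency);
  dx = x + r * cos(angle);
  dy = y + r * sin(angle);
  ellipse(dx, dy, 2, 2);
  angle += angleStep;
}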
(I should add that I did this first part on my phone in Procoding. But then Procoding crashed and deleted my sketch, so I rewrote it in the actual Processing app on my laptop.)
I pulled this code out into a function so I could loop through it and create a bunch of concentric sinuous circles. I also changed the drawing method to use lines instead of ellipses, so there wouldn’t be gaps. Processing didn’t seem to want to antialias the lines, though, and I’m not sure why. Oh well.
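Roughly, the function looks like this (a sketch of the idea rather than the exact code; the names and constants are stand-ins):

void sinuousCircle(float cx, float cy, float radius, float frequency, float amplitude) {
  float angle = 0;
  float angleStep = 0.005;
  float px = 0, py = 0;  // previous point on the curve
  while (angle <= TWO_PI) {
    float r = radius + amplitude * sin(angle * frequency);
    float dx = cx + r * cos(angle);
    float dy = cy + r * sin(angle);
    if (angle > 0) {
      line(px, py, dx, dy);  // connect to the previous point so there are no gaps
    }
    px = dx;
    py = dy;
    angle += angleStep;
  }
}

The concentric circles are then just a loop over increasing radii:

for (int i = 0; i < 12; i++) {
  sinuousCircle(width / 2, height / 2, 40 + i * 20, 12, 10);
}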
I also saved an alternate version of the sketch that changed the colors to render out a depth map for each frame (the darker it is, the farther from the camera):
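In sketch form, that’s just a matter of driving the stroke color from each circle’s depth before drawing it, darkest for the innermost (farthest) circles. Again, hypothetical names and numbers:

int numCircles = 12;
for (int i = 0; i < numCircles; i++) {
  float shade = map(i, 0, numCircles - 1, 40, 255);  // inner circles dark (far), outer circles light (near)
  stroke(shade);
  sinuousCircle(width / 2, height / 2, 40 + i * 20, 12, 10);
}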
Then I animated the amplitude and rotation of each circle and rendered out all the frames, both for the blue-and-white version and for the depth map. I pulled it all into Blender and composited it together. The node setup:
I take the depth map and convert it from RGB to black and white (meaning values from 0 to 1, where black is 0 and white is 1).
Then I invert the depth map so white is 0 and black is 1, because I’m going to use it as a distance map, where white is close to the camera (a distance of 0 from the camera) and black is far from the camera (a distance of 1).
I plug both the rendered frames and my inverted depth map into the Defocus node, which gives me depth of field. (It’s postprocessed, so it’s not ideal, but I don’t think there’s a way around that.) The fStop value controls how shallow the DOF is (the lower the number, the blurrier it gets). In the camera settings I’ve keyframed the Distance value (the focal point) with a range of 0 to 1. (Ordinarily that distance is in Blender units, but in this case it’s read against our depth map, which has a range of 0 to 1.)
I do a fast Gaussian blur to try to make up for the lack of antialiasing. It doesn’t work as well as I’d like.
Then I add a lens distortion with some chromatic aberration (Dispersion) and elliptical distortion (Distort plus Fit so that I don’t get black around the edges).
Finally, I add a Mix node and change it to Soft Light, then plug in some brown-colored noise I’ve painted in Photoshop.
After I rendered the composited frames out to disk, I imported them into Blender’s video editor, added a crossfade to a black color strip at the end, then rendered to H.264 and uploaded to Vimeo. The final result:
The animation itself is somewhat lacking — the timing is uninspiring, the f-stop jumps around too much, etc., and I don’t think it properly conveys a sense of 3D space (of being in a tunnel) — but as a test of the Processing + Blender workflow, I’m quite pleased.
An animation test I painted in Photoshop and threw together in Blender, mainly just to play around with parallax layers (and to get back into doing animation again):
I’ve been playing around some more with the L-system code and have modified it to animate the angle property and output each frame to a file. I also added some color and started using blending modes for the brushes. Once I clean up the code, I’ll post it to GitHub.
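The frame output is the easy part; in Processing it boils down to something like this (sketched without the L-system itself, and with an arbitrary step size and frame count):

float angle = 0;

void draw() {
  background(255);
  // ... redraw the L-system using the current angle ...
  angle += 0.01;                 // nudge the angle a little each frame
  saveFrame("frames/####.png");  // Processing swaps #### for the zero-padded frame number
  if (frameCount >= 300) {
    noLoop();  // stop after 300 frames
  }
}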
Anyway, here are some of the animation tests (I used Blender to put the frames together into an animation):
And the first one I did, which is a little too long and a little too fast:
Playing around more with Blender, I used a lattice to deform a cube. (I’m teaching myself animation.)
The background is painted in Photoshop, then composited in via Blender’s node editor. As for the animation itself, I subdivided a cube, then added a lattice modifier and used shape keys on the lattice. Seems to work pretty well.