
Blog: #cg


Daily Blender 01

Inspired by Zuggamasta’s 365-day project, I’m going to be making a new Blender render each day in June (excepting Sundays). Each render will be 800×800, and many of them will be somewhat on the pathetic side. But it’ll be a good challenge for me.

Day 1:



Sine circle test animation

As I’ve been tinkering around with graphics coding, I wanted to figure out how to map a sine wave onto a circle. Here’s how it went down. (Disclaimer: this is all unoptimized code that is almost certainly not the best way to do this. Also, I’m a beginner, so yes, this is very basic stuff.)

First off, I started a new Processing sketch and drew a circle:

The relevant code from the draw() function:

float x = width / 2;      // center of the circle
float y = height / 2;
float radius = height / 3;
float angle = 0;
float angleStep = 0.005;  // radians per step
float twopi = PI * 2;
float dx, dy;

while (angle <= twopi) {
    // Point on the circle at the current angle
    dx = x + radius * cos(angle);
    dy = y + radius * sin(angle);

    angle += angleStep;

    ellipse(dx, dy, 2, 2);
}

The basic idea: I loop from 0 to 2π radians (a full circle) in steps of angleStep radians. At each angle I calculate the point that sits on the circle at a distance of radius from the center (x, y), and then I draw a little circle at that point. With a small enough step size, you get what looks like a continuous line. (More on that later.)

Mapping a sine wave onto said circle really just means that when you calculate each point, you offset its distance from the center of the circle by a sine wave. So I added that in, including frequency and amplitude variables I could tweak:

float freq = 20;
float amp = 20;

dx = x + (radius + sin(angle * freq) * amp) * cos(angle);
dy = y + (radius + sin(angle * freq) * amp) * sin(angle);

And I got this:

(I should add that I did this first part on my phone in Procoding. But then Procoding crashed and deleted my sketch, so I rewrote it in the actual Processing app on my laptop.)

I pulled this code out into a function so I could loop through it and create a bunch of concentric sinuous circles. I also changed the drawing method to use lines instead of ellipses, so there wouldn’t be gaps. Processing didn’t seem to want to antialias the lines, though, and I’m not sure why. Oh well.
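
Roughly, the refactored function looked like this (a reconstruction rather than the original code, with parameter names of my own choosing):

void sinuousCircle(float x, float y, float radius, float freq, float amp) {
    float angleStep = 0.005;
    float px = 0, py = 0;
    boolean first = true;

    for (float angle = 0; angle <= TWO_PI; angle += angleStep) {
        // The same sine-modulated radius as before
        float r = radius + sin(angle * freq) * amp;
        float dx = x + r * cos(angle);
        float dy = y + r * sin(angle);

        // Connect each point to the previous one so there are no gaps
        if (!first) {
            line(px, py, dx, dy);
        }
        px = dx;
        py = dy;
        first = false;
    }
}

The concentric circles are then just a loop, calling something like sinuousCircle(width / 2, height / 2, 40 + i * 30, 20, 10) for each i.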

I also saved an alternate version of the sketch that changed the colors to render out a depth map for each frame (the darker it is, the farther from the camera):

Then I animated the amplitude and rotation of each circle and rendered out all the frames, both for the blue-and-white version and for the depth map. I pulled it all into Blender and composited it together. The node setup:

  1. I take the depth map and convert it from RGB to black and white (meaning values from 0 to 1, where black is 0 and white is 1).
  2. Then I invert the depth map so white is 0 and black is 1, because I’m going to use it as a distance map, where white is close to the camera (a distance of 0 from the camera) and black is far from the camera (a distance of 1).
  3. I plug both the rendered frames and my inverted depth map into the defocus node, which gives me depth of field. (It’s postprocessed, so it’s not ideal, but I don’t think there’s a way around that.) The fStop value controls how shallow the DOF is (the lower the number, the blurrier it gets). In the camera settings I’ve keyframed the Distance value (the focal point) over a range of 0 to 1. (Ordinarily Distance is measured in Blender units, but in this case it reads against our depth map, which runs from 0 to 1.)
  4. I do a fast Gaussian blur to try to make up for the lack of antialiasing. It doesn’t work as well as I’d like.
  5. Then I add a lens distortion with some chromatic aberration (Dispersion) and elliptical distortion (Distort plus Fit so that I don’t get black around the edges).
  6. Finally, I add a Mix node and change it to Soft Light, then plug in some brown-colored noise I’ve painted in Photoshop.

After I rendered the composited frames out to disk, I imported them into Blender’s video editor, added a crossfade to a black color strip at the end, then rendered to H.264 and uploaded to Vimeo. The final result:

The animation itself is somewhat lacking — the timing is uninspiring, the f-stop jumps around too much, etc., and I don’t think it properly conveys a sense of 3D space (of being in a tunnel) — but as a test of the Processing + Blender workflow, I’m quite pleased.



More Mandelbulber pieces

I’m finding that Mandelbulber is really addictive. First off, two Mandelbulb explorations:

I wised up and started doing a 1px field blur in Photoshop on the rendered images, which helps a lot in getting rid of sharp artifacts. On the first image I also painted in some dots and ran lens blur.

Next, a Mandelbox (Tglad’s variant):

I cheated a bit and used the liquify and oil paint filters in Photoshop to get a more surreal, painted look (hopefully giving it a little more humanity, making it less sterile).

Finally, a Menger sponge:

Going for a folk art look here. I will try very hard not to overuse the oil paint filter. I really will. I promise.



Open Shading Language in Blender

I’ve been playing around with Blender’s relatively new Open Shading Language support, and mmm, it’s powerful. I’m still bending my mind around shader construction, though — writing shaders is really different from writing graphics code the normal way. For example, here’s how you’d draw a circle in JavaScript on Canvas:

context.beginPath();
context.arc(x, y, radius, 0, 2 * Math.PI);
context.stroke();

But shaders don’t work that way. Instead, the shader program gets called for every pixel on the material, so you have to do something like this instead (using a distance field in this case, based on this GLSL code):

shader circle(
    // Parameters show up as inputs on Blender's script node;
    // the defaults here are arbitrary
    float radius = 0.5,
    float border = 0.02,
    float intensity = 0.01,
    output color Out = color(0))
{
    point pt = P;     // The point that gets passed in
    float x = pt[0];  // OSL uses array indices for x, y
    float y = pt[1];

    // Get the distance from (0, 0) to the point we're processing and then
    // subtract from the radius to get the distance from the circle to the point
    float dist = radius - sqrt(x * x + y * y);

    // If dist < 0, point is outside the circle
    // If dist > border, point is inside the circle
    // If dist <= border && dist > 0, point is on the circle
    float t = 0;
    if (dist > border) {
        t = 1;
    } else if (dist > 0) {
        t = dist / border;
    }

    // Add the intensity control, guarding against a divide by zero
    // outside the circle, and make sure the value >= 0
    float final = 0;
    if (t > 0) {
        final = abs(intensity / t);
    }

    // Create the output color (and drop the red channel by half)
    Out = color(final / 2, final, final);
}
Longer, definitely. But also much more powerful. Here’s what it looks like rendered:

I’m not sure if doing a parametric circle like this would even be possible in Blender’s material nodes system, but with OSL it’s fairly simple (if a bit mathy) and very customizable. Add some more circles, a noise function on the distance field, and a little postprocessing and you get this:

Another example, where the OSL shader is on the box in the middle:

The shader looks at the x coordinate, and if it’s to the left of the divider (with a sine wave applied), it gets the blue material, and if it’s to the right, it gets the metallic material. It also adds the yellow band. Everything is exposed to the node system through parameters (inputs to the shader), so it’s easy to change values to get different looks:

(The lower right one is a bit more of a tweak, changing the band to emit light and using a clamped abs/mod combo instead of a sine wave.)
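
At its core, the selection logic is just a sine-displaced threshold on x. It lives in the shader, but the same idea sketches out in 2D Processing terms (with made-up frequency, amplitude, and band-width numbers) like this:

loadPixels();
for (int y = 0; y < height; y++) {
    // The divider is a vertical line displaced by a sine wave
    float divider = width / 2 + sin(y * 0.05) * 20;
    for (int x = 0; x < width; x++) {
        color c;
        if (abs(x - divider) < 4) {
            c = color(255, 220, 0);  // the yellow band along the divider
        } else if (x < divider) {
            c = color(40, 90, 200);  // stands in for the blue material
        } else {
            c = color(180);          // stands in for the metallic material
        }
        pixels[y * width + x] = c;
    }
}
updatePixels();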

This is all just scratching the very top layer of the surface, of course. Now I just need to brush up on my math…



Mandelbox 001–003

Some more Mandelbulbery, this time exploring the Mandelbox fractal instead of the Mandelbulb:

My process for these is to choose a fractal type in Mandelbulber, play around with the fractal parameters till I find an interesting shape, move the camera around until I get a good view, and render it. I then tweak the shader values (colors, specular, ambient occlusion, etc.) and the lights (position, color, etc.) and re-render till I get what I like. Finally I turn on depth of field and add a little fog.

Once I’m happy with the image in Mandelbulber, I export to PNG and open it in Photoshop to add some texturing. I also scale the image down a bit to get rid of some of the rendering artifacts. (It’s not wholly successful, but it does make a big difference.)

And here I went for an underwater sort of atmosphere (via aqua-colored fog and some vignetting in Photoshop):



Mandelbulber sketches

I discovered Mandelbulber yesterday and have been playing around with it a little:

Quite fun. (The Mandelbulb is a 3D version of the Mandelbrot set. There’s more to it than that, but you get the basic idea.)
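
For the curious, the usual formulation (White and Nylander’s power-8 version) takes a point v = (x, y, z), converts it to spherical coordinates, and iterates v → v^8 + c, just as the 2D Mandelbrot set iterates z → z^2 + c. Raising to the power n is defined as

v^n = r^n (sin(nθ) cos(nφ), sin(nθ) sin(nφ), cos(nθ))

where r = |v| and, in one common convention, θ = arccos(z/r) and φ = atan2(y, x).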



Letterdrip

A semi-literary art experiment, wherein each letter gets assigned a color, the user types in a sentence, and the color bar gets drawn on the screen and melted up and down:

The colors are chosen by starting with a few base hues and then doing variations off them. Sometimes it produces gorgeous color schemes, and sometimes (okay, maybe most of the time) they’re jarring. What I should have done was start with a random hue, then build an appropriate color scheme (whether monochromatic or complementary or whatever) off of that.
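
In Processing’s HSB mode (Letterdrip itself is JavaScript, but the idea translates), that base-hue approach might look like this, with arbitrary ranges:

// Build a small palette off a single random base hue
colorMode(HSB, 360, 100, 100);
float baseHue = random(360);
color[] palette = new color[6];

for (int i = 0; i < palette.length; i++) {
    // Alternate between the base hue and its complement,
    // varying saturation and brightness for each swatch
    float h = (i % 2 == 0) ? baseHue : (baseHue + 180) % 360;
    palette[i] = color(h, random(40, 90), random(60, 100));
}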

I’m using a really simple dithering-esque algorithm for doing the melting (pulling from a random nearby neighbor pixel). Part of me wants nice antialiasing instead, but I suppose this has a slight 8-bit charm to it. Right now the left and right edges inadvertently end up pulling in some outer darkness, which I kind of like. (Oddly, though, it only happens in the top half of the screen. Not sure why.)
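
The melt itself boils down to something like this (the shape of the algorithm, not the actual Letterdrip code):

void melt() {
    loadPixels();

    // Work from a copy so we read this frame's pixels,
    // not ones we've already overwritten
    color[] src = new color[pixels.length];
    arrayCopy(pixels, src);

    for (int y = 1; y < height - 1; y++) {
        for (int x = 0; x < width; x++) {
            // Pull from a random neighbor in the row above or below
            int nx = constrain(x + (int) random(3) - 1, 0, width - 1);
            int ny = y + ((random(1) < 0.5) ? -1 : 1);
            pixels[y * width + x] = src[ny * width + nx];
        }
    }

    updatePixels();
}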

As usual, the code is on GitHub. There’s a live demo as well.

More examples, all using the default sentence (from Pride & Prejudice, unintentionally in honor of the 200th anniversary of its publication yesterday):



Utah County roads

Made in TileMill (which I read about on the Routelines about page — thanks to Tod Robbins for telling me about it).

I took the Census Bureau’s shapefiles for Utah County, imported them into TileMill, styled the lines a little, and exported to SVG. Then I imported the SVG into Illustrator, applied a thin pencil brush (it’s not very noticeable, honestly), exported to PNG, pulled it into Photoshop, and added some color and texture. Voilà: the roads of Utah County.



Line art experiments

Last night I got the idea of drawing lines or circles on my phone in Brushes and then applying them as textures to objects in Blender, aiming for a nonphotorealistic style without using Freestyle or toon rendering. And I mostly just wanted to see what it would look like.

My first attempt:

Ignoring the tiling issue, the look of the floor intrigued me, so I made a mountain using Blender’s landscape generator and applied the same line texture:

At this point I realized I could avoid the tiling issue by writing a script to make a line texture for me, at the larger resolution I needed. (Update: I’ve posted the Python code for the script.)
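
The posted script is Python, but the gist of it (horizontal lines with jittered weight and spacing, rendered at whatever resolution you need) sketches out in Processing roughly like this, with all the numbers made up:

size(2048, 2048);
background(255);
stroke(0);

float y = 0;
while (y < height) {
    strokeWeight(random(1, 4));    // vary the line weight a little
    float wobble = random(-2, 2);  // and jitter the vertical position
    line(0, y + wobble, width, y + wobble);
    y += random(6, 14);            // uneven spacing between lines
}

save("lines.png");

Here are some of the output textures: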

Which gave me a mountain that looked like this:

Not perfect, but not too bad, either. I can see myself using the technique in some illustrations down the road. Here’s a turntable animation of the mountain:

Finally, for the heck of it, I set the displace attribute on the texture and re-rendered:

Kind of like pen and ink, almost.



Parallax animation test

An animation test I painted in Photoshop and threw together in Blender, mainly just to play around with parallax layers (and to get back into doing animation again):

