I realized (this is the very small breakthrough I mentioned yesterday) that I could use Blender to add 3D texture to my pieces. Verisimilitude has been the goal all along, and using an actual 3D renderer brings so much to the table that it boggles my mind that I didn’t think of this much earlier.
A closeup of the texture:
How I made this piece: I mocked it up in Illustrator, then exported it to SVG where I manually added the turbulence and displacement filters (in Vim) to distress the edges of the white square, which you can see in that closeup. I used Inkscape to export the SVG to a 6500×6500 PNG.
Then, in Blender, I created a plane and went to town on the shading, using a combination of procedural and image textures to mix the colors together and displace the geometry of the plane. There’s a key light and a dim fill light. And in the compositor I added a little chromatic aberration around the edges with the lens distortion filter.
Rendered it at 5200×5200, which took about two hours on my 16″ MacBook Pro. I tend to work a little smaller and then upscale to 6500×6500 (for square pieces), since Photoshop’s upscaling is fairly decent these days. After upscaling, I added my signature thingie, which I’ll add in Blender in the future so it fits in better.
Here’s the node setup on the plane (and in the future I’ll use groups to make things more manageable):
Overall, I’m happy with this technique. It’s more time-consuming than painting textures in Photoshop, but I can do other things while it’s rendering, and the result looks much better to me. Working in 3D is more fun, too. Most importantly, using Blender gives me loads of new options that would have been harder to do well with my old technique — shiny paint, glowing materials, etc.
A while ago, before the pandemic, I took some footage of a short walkthrough of my Sacred Shapes exhibit, for those who couldn’t go see it in person. Finally got around to editing it (adding titles and music, nothing major):
I don’t use Premiere very often, so it took a little while to rediscover how to expand the audio track for keyframing levels, and I wish it supported different leading values in multiline textboxes, but overall the editing didn’t take very long and went smoothly enough. Makes me want to make more videos. (Bad idea right now, when school’s about to start.)
It’s (in my opinion) a much better execution of Before the World Was, which used a quick DrawBot script that didn’t pay much attention to placement.
This time, working off the Generative Artistry circle packing tutorial, I wrote a Python script that places all the circles so there’s no overlap, then outputs an SVG with the turbulence/displacement filters I wrote about not too long ago.
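The core of the placement logic is simple rejection sampling: try a circle at a random spot, keep it only if it clears everything already placed. A minimal sketch of that approach (simplified from my actual script — names, padding, and radii here are illustrative):

```python
import math
import random

def pack_circles(width, height, radii, padding=2, max_attempts=2000):
    """Place circles of the given radii without overlap, largest first.

    Each circle is tried at random positions until it fits (or we give
    up after max_attempts) -- the basic rejection-sampling idea from
    the Generative Artistry circle packing tutorial.
    """
    placed = []  # list of (x, y, r)
    for r in sorted(radii, reverse=True):
        for _ in range(max_attempts):
            x = random.uniform(r, width - r)
            y = random.uniform(r, height - r)
            if all(math.hypot(x - cx, y - cy) >= r + cr + padding
                   for cx, cy, cr in placed):
                placed.append((x, y, r))
                break
    return placed

def to_svg(circles, width, height, fill="#f4ead5"):
    """Emit a bare-bones SVG; the turbulence/displacement filters
    would get spliced in around this."""
    body = "\n".join(
        f'  <circle cx="{x:.1f}" cy="{y:.1f}" r="{r}" fill="{fill}"/>'
        for x, y, r in circles
    )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">\n{body}\n</svg>')
```

Largest-first placement helps: big circles are the hardest to fit, so they get first pick of the empty canvas.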
For comparison (original on the left):
I also went with a slightly less saturated background in this new version, and I put a little bit of texture on the circles themselves to make it feel slightly more painterly.
Brief backstory: when I’m doing my minimalist religious art, I usually sketch an idea out first by hand or in Paper on my phone, then mock it up in Illustrator to iterate on the concept. Once it’s satisfactory, I move to execution, either painting the piece in Procreate or using some of the brushes in Illustrator to get a more organic look. And finally I texture the image in Photoshop.
A couple months ago I got interested in exploring alternatives to Illustrator and Photoshop for both the execution and texturing processes. And me being me, I wanted to try doing it in code, just to see what it was like. (Some things are easier in code, though I don’t know how often that would actually be the case with these.)
Note: this is still very much a WIP, and who knows if I’ll end up using any of it or not. But here’s the current state of things.
After reading somewhere that SVG has turbulence and displacement filters, I realized I could potentially use those for the execution part of the process, to distress the edges enough to make things more interesting. (And hopefully to be less repetitious than the Illustrator brushes I use.)
I put together an initial test using a few different settings, and it turned out a bit better than I expected. A sample of the code:
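In essence, each shape gets an SVG filter along these lines (a stripped-down illustration, not my actual frequencies and scales):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="600" height="600">
  <filter id="distress">
    <!-- Generate noise, then use it to push the shape's pixels around. -->
    <feTurbulence type="fractalNoise" baseFrequency="0.02"
                  numOctaves="3" result="noise"/>
    <feDisplacementMap in="SourceGraphic" in2="noise" scale="30"/>
  </filter>
  <rect x="150" y="150" width="300" height="300"
        fill="white" filter="url(#distress)"/>
</svg>
```

Crank `baseFrequency` up for tighter, scratchier edges; crank `scale` up for bigger wobbles.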
The background rectangle, the red figure, and the white figure all have different turbulence and displacement values. The red figure uses two sets of turbulence and displacement filters, which worked out fairly well, I think.
I used Inkscape to render it out to a high-res PNG, since Illustrator wasn’t able to handle the filters. Eventually, if I keep going down this path, I’d hopefully be able to find a command-line tool that can do the rendering. (Maybe Inkscape has a headless option.)
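It does, as it turns out — Inkscape 1.x can export from the command line (the 0.9x flags were different), so the rendering step could be scripted. A sketch of a wrapper, with hypothetical file names:

```python
import subprocess

def inkscape_export_cmd(svg_path, png_path, size=6500):
    """Build the Inkscape 1.x command line for a headless PNG export.

    (These are the Inkscape 1.x flags; 0.9x used --export-png
    instead of --export-type/--export-filename.)
    """
    return [
        "inkscape", svg_path,
        "--export-type=png",
        f"--export-filename={png_path}",
        f"--export-width={size}",
        f"--export-height={size}",
    ]

# To actually run it (requires inkscape on the PATH):
# subprocess.run(inkscape_export_cmd("piece.svg", "piece.png"), check=True)
```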
Overall, this path seems promising. I don’t know that I’d use it all the time, but for certain things it may be handy. I still need to look into sane ways to round corners, and it seems that the other filters (dilation/erosion, convolution, etc.) may be helpful, too.
I’ve begun writing a Python script called Grain for texturing the final art image. The goal here is to see if I can streamline the process at all, and to see if this idea even works. Grain takes as input a text-based input file that looks like this:
Each block is a layer. Grain starts with the bottom layer (the executed base image) and goes up from there, adding each layer on top with the specified blending mode and opacity.
The :pattern roughdots command would generate procedural dots (not implemented yet), and the textures# bit in the :image commands is a shortcut to my folder with texture photos.
So far, the results are disappointing. While the layering does currently work, it isn’t yet producing anything remotely publishable. I think there might be some discrepancies between blending modes in pyvips and in Photoshop. Hard to tell.
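For reference, these are the textbook per-channel formulas I’m comparing against (0–255 channels; Photoshop’s actual behavior can differ, e.g. in how it handles color profiles, which may be part of the discrepancy):

```python
def blend_pixel(base, layer, mode="normal", opacity=1.0):
    """Blend one pair of RGB pixels the way a Grain layer pass would:
    compute the blend-mode result, then mix it back toward the base
    by the layer's opacity. Channels are 0-255 ints."""
    def channel(b, l):
        if mode == "multiply":
            out = b * l / 255
        elif mode == "screen":
            out = 255 - (255 - b) * (255 - l) / 255
        elif mode == "overlay":
            # Multiply in the shadows, screen in the highlights.
            out = (2 * b * l / 255 if b < 128
                   else 255 - 2 * (255 - b) * (255 - l) / 255)
        else:  # normal
            out = l
        return round(b + (out - b) * opacity)
    return tuple(channel(b, l) for b, l in zip(base, layer))

# e.g. a 40%-opacity multiply layer over a warm midtone:
# blend_pixel((180, 160, 140), (200, 200, 200), "multiply", 0.4)
```

Even with matching formulas, doing the math in linear light versus gamma-encoded sRGB gives visibly different results, so that’s one of the first things I want to rule out.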
And, less importantly, it’s a little slow — partially from using high-res images, partially from Python. If the idea ends up working, I’ll most likely port this to Rust or Go, and will probably also scale things down for the exploration phase of texturing (with a final high-res export at the end).
I’ll keep tinkering with it from time to time and we’ll see how it goes.
Lately I’ve felt a bit stuck with the minimalist religious art. Ideas aren’t coming as easily as they used to. Over the last month or so I’ve iterated on a lot of mockups that haven’t gone anywhere, and while that’s normal, usually there are more ideas that make it through.
I’ll keep plugging away at it, though — I feel reasonably confident that there are still many gospel concepts and stories that would work in this format. (And many more that wouldn’t. Some things seem impossible to abstract away into simple shapes.)
Another First Vision lecture! On Sunday night, Eric Jepson gave a lecture entitled Triangulating God as part of the Oakland Stake First Vision Lecture Series. He included First Vision VIII (at around 15:16), First Vision XIII (at around 34:08), and First Vision VI (at around 35:57). I haven’t watched the whole talk yet (same story, I know), but I’m looking forward to it.
Last night Richard L. Bushman gave a Center for Latter-day Saint Arts Zoom keynote:
In commemoration of the 200th anniversary of the Restoration of the Gospel, foremost scholar and historian of the Prophet Joseph Smith, Richard Bushman shares the art of the First Vision. He addresses this seminal event in the Latter-day Saint faith tradition as it has been visually represented from artists all over the globe. Bushman also answers questions about why the arts are significant to revelatory development.
With my permission (though with the artwork’s Creative Commons license it wasn’t actually necessary), he included my Let Him Ask of God piece at the end of his talk. That part starts around 29:54. (I haven’t watched the whole thing yet — it just got uploaded an hour or so ago — but I’m very much looking forward to it.)