
Blog: #photoshop


I’ve occasionally used ImageMagick’s erode and dilate filters to make art look a little less digital. Turns out those filters are also available in Photoshop and Affinity Photo, just under different names: since erosion is a local minimum filter and dilation a local maximum, they show up as Minimum and Maximum in Photoshop, and Minimum Blur and Maximum Blur in Affinity Photo. I had no idea. (There might still be subtle differences between the algorithms; I didn’t do a deep dive. But from my limited testing they seem to do the same thing.)
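
For reference, Pillow’s rank filters are the same morphological operations, which makes for an easy sanity check outside any of these apps. A minimal sketch (file names are placeholders):

from PIL import Image, ImageFilter

img = Image.open("art.png").convert("RGB")  # placeholder file name

# Erosion: each pixel becomes the darkest pixel in its 3x3 neighborhood
# (ImageMagick erode / Photoshop Minimum / Affinity Minimum Blur).
eroded = img.filter(ImageFilter.MinFilter(3))

# Dilation: each pixel becomes the brightest pixel in its 3x3 neighborhood
# (ImageMagick dilate / Photoshop Maximum / Affinity Maximum Blur).
dilated = img.filter(ImageFilter.MaxFilter(3))

eroded.save("art-eroded.png")
dilated.save("art-dilated.png")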



I’ve decided to ditch Adobe’s Creative Cloud apps — Photoshop, InDesign, and Illustrator, mainly. I never thought I’d say that, but they’re too expensive. Instead, I’ll be using Affinity Photo, Affinity Publisher, and Affinity Designer. It’s a fairly small one-time cost instead of a dreary, never-ending, money-sucking subscription.

(If/when I need to do motion graphics or video editing in place of After Effects and Premiere, by the way, I’m planning to use the free version of DaVinci Resolve.)

So far I’ve only actually used Affinity Photo, to texture the piece I released yesterday. Worked like a charm. The live split-screen preview when applying a filter is brilliant, and the file sizes are much smaller, too. (In Photoshop I’d regularly end up with a 1–2 GB PSB file. With Affinity Photo, it’s closer to 300 MB.)

As far as typesetting goes, I still expect to use TeX (Tectonic) on projects where it makes sense — it’s what I used on the wide margin study editions since typesetting each language individually would have taken much more time — but it’s nice to have Affinity Publisher for other projects. I’m planning to use it for the book of narrative poems I’m (slowly) working on. (I’ll be setting it with Hinte, a new typeface I’m designing in FontForge. More on that soon.)

With Figma doing most of what I used to use Illustrator for, I don’t expect to use Affinity Designer all that much initially. But the raster brush textures are intriguing. We’ll see.



New artwork: Peace, Be Still. I dialed up the SVG turbulence filters to get the effect on the left. Also used the erode operator throughout (with the feMorphology filter primitive). I couldn’t get Inkscape to show the lines with the filters applied, though, so I ended up screenshotting the piece via QuickLook and then upscaling in Photoshop (hacky, but hopefully not too obvious).



New artwork: The Night Shall Not Be Darkened. On this one I used an erosion filter along with turbulence and displacement; the effect is most clear on the two vertical lines.



A few other new pieces: Nothing Shall Be Impossible unto You (about faith), Roll Forth (about the stone cut out of the mountain without hands), and Out of the Dust (about the Book of Mormon).

For a while I’ve wanted to explore using black and white for my art. Tell Me the Stories of Jesus was the initial step in that direction, but these latest four pieces are closer to what I envisioned (a little more like ink on paper). I’m looking forward to doing more work in this style.



New artwork: Tree of Life II.

I used the same circle packing technique to generate the circles, constraining them this time to be inside a larger circle. Initially I was going to have the tree visible as that larger circle — dark on a light background — but it ended up looking better to me with just the small white circles. (After that I used SVG filters and Inkscape and Photoshop as usual.)
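
The containment check is the only new part. A minimal sketch of the idea, assuming the same rejection-sampling approach as in Before the World Was II (below); all the numbers are illustrative:

import math, random

def pack_circles(attempts=2000, bound_r=500, min_r=4, max_r=40):
    """Randomly place non-overlapping circles inside a larger circle."""
    placed = []
    for _ in range(attempts):
        r = random.uniform(min_r, max_r)
        # Pick a center that keeps the whole circle inside the boundary.
        angle = random.uniform(0, 2 * math.pi)
        dist = random.uniform(0, bound_r - r)
        x, y = dist * math.cos(angle), dist * math.sin(angle)
        # Reject any candidate that overlaps an already-placed circle.
        if all(math.hypot(x - cx, y - cy) >= r + cr for cx, cy, cr in placed):
            placed.append((x, y, r))
    return placed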



New artwork: Within the Walls of Your Own Homes.

I realized (this is the very small breakthrough I mentioned yesterday) that I could use Blender to add 3D texture to my pieces. Verisimilitude has been the goal all along, and using an actual 3D renderer brings so much to the table that it boggles my mind that I didn’t think of this much earlier.

A closeup of the texture:

within-the-walls-closeup.jpg

How I made this piece: I mocked it up in Illustrator, then exported it to SVG where I manually added the turbulence and displacement filters (in Vim) to distress the edges of the white square, which you can see in that closeup. I used Inkscape to export the SVG to a 6500×6500 PNG.

Then, in Blender, I created a plane and went to town on the shading, using a combination of procedural and image textures to mix the colors together and displace the geometry of the plane. There’s a key light and a dim fill light. And in the compositor I added a little chromatic aberration around the edges with the lens distortion filter.
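
For flavor, here’s a rough sketch of that kind of shader graph in Blender’s Python API, assuming Cycles. The actual node setup (screenshot below) is more involved, and the image path here is made up:

import bpy

# A material that mixes procedural noise with an image texture for color,
# and uses the noise to displace the plane's geometry (Cycles).
mat = bpy.data.materials.new("ArtPlane")
mat.use_nodes = True
mat.cycles.displacement_method = "DISPLACEMENT"  # move geometry, not just normals
nodes, links = mat.node_tree.nodes, mat.node_tree.links

noise = nodes.new("ShaderNodeTexNoise")
image = nodes.new("ShaderNodeTexImage")
image.image = bpy.data.images.load("//within-the-walls.png")  # made-up path
mix = nodes.new("ShaderNodeMixRGB")
disp = nodes.new("ShaderNodeDisplacement")

links.new(noise.outputs["Color"], mix.inputs["Color1"])
links.new(image.outputs["Color"], mix.inputs["Color2"])
links.new(mix.outputs["Color"], nodes["Principled BSDF"].inputs["Base Color"])
links.new(noise.outputs["Fac"], disp.inputs["Height"])
links.new(disp.outputs["Displacement"], nodes["Material Output"].inputs["Displacement"])
# (The plane also needs enough subdivisions for the displacement to show.)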

Rendered it at 5200×5200, which took about two hours on my 16″ MacBook Pro. I tend to work a little smaller and then upscale to 6500×6500 (for square pieces), since Photoshop’s upscaling is fairly decent these days. After upscaling, I added my signature thingie, which I’ll add in Blender in the future so it fits in better.

Here’s the node setup on the plane (and in the future I’ll use groups to make things more manageable):

within-the-walls-nodes.png

Overall, I’m happy with this technique. It’s more time-consuming than painting textures in Photoshop, but I can do other things while it’s rendering, and the result looks much better to me. Working in 3D is more fun, too. Most importantly, using Blender gives me loads of new options that would have been harder to do well with my old technique — shiny paint, glowing materials, etc.



New artwork: Plan of Salvation. Inspired by a comment my friend Naomi made about another piece.

This is also one of the first times (maybe the very first time) that I’ve superimposed things like this, especially out of chronological order. Hopefully it doesn’t make the piece overly confusing.

Last but not least, I’m really liking the SVG filters for making the edges seem more hand-drawn.



New artwork: Before the World Was II.

It’s (in my opinion) a much better execution of Before the World Was, which used a quick DrawBot script that didn’t pay much attention to placement.

This time, working off the Generative Artistry circle packing tutorial, I wrote a Python script that places all the circles so there’s no overlap, then outputs an SVG with the turbulence/displacement filters I wrote about not too long ago.
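
The SVG output step is just string templating. A sketch of roughly what that looks like (not the actual script; the filter echoes the settings from my earlier SVG post, and the IDs are made up):

def write_svg(circles, path="packed.svg", size=1000):
    # One shared turbulence/displacement filter, referenced by every circle.
    header = (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">\n'
        '<filter id="distress">\n'
        '    <feTurbulence type="turbulence" baseFrequency="0.05" numOctaves="2" result="turb" />\n'
        '    <feDisplacementMap in2="turb" in="SourceGraphic" scale="3" xChannelSelector="R" yChannelSelector="G" />\n'
        '</filter>\n'
    )
    # The packed circles are centered on the origin, so shift them into view.
    body = "".join(
        f'<circle cx="{x + size / 2}" cy="{y + size / 2}" r="{r}" fill="#fff" filter="url(#distress)" />\n'
        for x, y, r in circles
    )
    with open(path, "w") as f:
        f.write(header + body + "</svg>\n")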

For comparison (original on the left):

before-and-after-the-world-was.jpg

I also went with a slightly less saturated background in this new version, and I put a little bit of texture on the circles themselves to make it feel slightly more painterly.



Some WIP experimentation with art.

Brief backstory: when I’m doing my minimalist religious art, I usually sketch an idea out first by hand or in Paper on my phone, then mock it up in Illustrator to iterate on the concept. Once it’s satisfactory, I move to execution, either painting the piece in Procreate or using some of the brushes in Illustrator to get a more organic look. And finally I texture the image in Photoshop.

A couple months ago I got interested in exploring alternatives to Illustrator and Photoshop for both the execution and texturing steps. And me being me, I wanted to try doing it in code, just to see what it was like. (Some things are easier in code, though I don’t know how often that would actually be the case here.)

Note: this is still very much a WIP, and who knows if I’ll end up using any of it or not. But here’s the current state of things.

SVG

After reading somewhere that SVG has turbulence and displacement filters, I realized I could potentially use those for the execution part of the process, to distress the edges enough to make things more interesting. (And hopefully to be less repetitious than the Illustrator brushes I use.)

I put together an initial test using a few different settings, and it turned out a bit better than I expected. A sample of the code:

<filter id="person1Filter">
    <!-- First pass: high-frequency turbulence (baseFrequency 0.5) for fine, ragged edge detail -->
    <feTurbulence type="turbulence" baseFrequency="0.5" numOctaves="2" result="turb1" />
    <feDisplacementMap in2="turb1" in="SourceGraphic" scale="3" xChannelSelector="R" yChannelSelector="G" result="result1" />
    <!-- Second pass: lower-frequency turbulence (0.05) for a broader wobble, applied to the first result -->
    <feTurbulence type="turbulence" baseFrequency="0.05" numOctaves="2" result="turb2" />
    <feDisplacementMap in2="turb2" in="result1" scale="3" xChannelSelector="R" yChannelSelector="G" />
</filter>

<style type="text/css">
    .person1 {
        fill: #a34130;
        filter: url(#person1Filter);
    }
</style>

<g id="person-1">
    <circle class="person1" cx="200" cy="250" r="30" />
    <polygon class="person1" points="225,270 205,500 350,500" />
</g>

And this is what it looks like:

svg-test.png

The background rectangle, the red figure, and the white figure all have different turbulence and displacement values. The red figure uses two sets of turbulence and displacement filters, which worked out fairly well, I think.

I used Inkscape to render it out to a high-res PNG, since Illustrator wasn’t able to handle the filters. Eventually, if I keep going down this path, I’d hopefully be able to find a command-line tool that can do the rendering. (Maybe Inkscape has a headless option.)
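
It does, as far as I can tell: Inkscape 1.x can export from the command line. Something like this, wrapped in Python (file names made up):

import subprocess

# Headless export with Inkscape 1.x (older versions used --export-png).
subprocess.run([
    "inkscape", "test1.svg",
    "--export-type=png",
    "--export-filename=test1.png",
    "--export-width=6500",
], check=True)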

Overall, this path seems promising. I don’t know that I’d use it all the time, but for certain things it may be handy. I still need to look into sane ways to round corners, and it seems that the other filters (dilation/erosion, convolution, etc.) may be helpful, too.

Grain

I’ve begun writing a Python script called Grain for texturing the final art image. The goal is to see whether I can streamline the process at all, and whether the idea even works. Grain takes a text-based input file that looks like this:

:image test1-texture.jpg
:blend screen
:opacity 0.05
:x -100

:image textures#random
:blend soft-light
:opacity 0.1

:pattern roughdots
:blend soft-light
:opacity 0.2

:image textures#2019-05-21 17.28.14.jpg
:blend soft-light
:opacity 0.01

:image test1-base.png

Each block is a layer. Grain starts with the bottom layer (the executed base image) and goes up from there, adding each layer on top with the specified blending mode and opacity.

The :pattern roughdots command would generate procedural dots (not implemented yet), and the textures# bit in the :image commands is a shortcut to my folder of texture photos.
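
The core of the layering is a small loop. This isn’t Grain itself, just a simplified sketch of the idea in pyvips (scaling the alpha band is one way to get per-layer opacity; the helper name is made up):

import pyvips

def composite(layers):
    """layers: bottom-up list of dicts, e.g.
    [{"image": "test1-base.png"},
     {"image": "texture.jpg", "blend": "soft-light", "opacity": 0.1}]"""
    result = pyvips.Image.new_from_file(layers[0]["image"])
    for layer in layers[1:]:
        overlay = pyvips.Image.new_from_file(layer["image"])
        if not overlay.hasalpha():
            overlay = overlay.addalpha()
        # Per-layer opacity: scale the alpha band, leave RGB alone.
        opacity = layer.get("opacity", 1.0)
        overlay = (overlay * [1, 1, 1, opacity]).cast("uchar")
        # libvips blend mode names ("screen", "soft-light", ...) match the
        # ones in the input format.
        result = result.composite2(overlay, layer.get("blend", "over"))
    return result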

So far, the results are disappointing. While the layering does currently work, it isn’t yet producing anything remotely publishable. I think there might be some discrepancies between blending modes in pyvips and in Photoshop. Hard to tell.

And, less importantly, it’s a little slow — partially from using high-res images, partially from Python. If the idea ends up working, I’ll most likely port this to Rust or Go, and probably also have it scale things down for the exploration phase of texturing (with a final high-res export at the end).

I’ll keep tinkering with it from time to time and we’ll see how it goes.

