Tim Daub on how he builds JavaScript apps in 2021, which I suppose is also in a similar vein. Sensing a theme here. (And much of this resonated with me. The more minimal the build process, for example, the happier I am.)
Not very long ago I felt like Storybook was a bit cumbersome, and in a fit of consolidation glee I decided to trash it and instead use Arc for my writing. I got as far as adding wiki-style links and backlinks (still helpful additions), then realized Arc wasn’t actually a good fit after all — my brain apparently doesn’t like having the story drafts in among all the other notes. It seems to like things in separate bins.
So Storybook lives after all. There are, however, gobs of cruft that have built up over the years — several features I tried out and then ended up not sticking with — so I’m going to rewrite it from scratch using FastAPI and plain text storage (as mentioned before). Leaner and more focused. Looking forward to it. (On this I’ve done some preliminary planning and have written a script to export the old data in the new plain text format, but that’s it.)
I’ve been using Arc to plan the editing of the novel. It works, but I’ve also found myself wondering if an infinite canvas tool like Figma or Milanote might be even better, with the power to break out of the cold confines of a linear column of text. You can probably tell where this is going, can’t you? And you’re right: because (foolishly) it doesn’t seem like that hard of a problem, and because I want full control (ha, what an illusion) over both the experience and my data, yes, I’m making my own infinite canvas web app. It’s called Space. It’s in the early, amorphous stages of planning and will likely stay that way for a while because I’m in the middle of the semester. But I’m excited about it. (And have been for a while; this project’s been on the docket for over a year.) I initially planned to use canvas, but before I settle on that I’m going to try WebGL; if it works, it would allow for much more interesting possibilities, along the lines of the spatial interface ideas I alluded to a while ago.
By way of the Glue chat comic (interesting ideas, by the way), I came across John Palmer’s excellent posts on spatial interfaces and spatial software. Mmm. As a productivity nerd and a hobbyist toolmaker (there are probably ten or so personal tools I’ve been using over the past several years that I’ve never blogged about, but that’ll change soon), I’m very, very interested in these kinds of ideas. Particularly in using 3D interfaces for productivity. I’ve thought a bit about doing that in VR, but I hadn’t really considered building tools in 3D even outside of VR.
No concrete ideas yet, just little threads leading off to various areas that I’m interested in exploring (todo lists, website/blog engines, writing/outlining tools, etc.).
At this point the plan is to use Three.js to prototype some ideas and see whether the approach is viable. My initial sense is that for the kinds of tools I’m interested in, I’ll probably need a good way to render and edit text from within Three.js, and it looks like rendering the text to a 2D canvas and then using that canvas as a texture will work. Anyway, more to come.
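Here’s a rough sketch of the canvas-as-texture approach, assuming a Three.js scene and renderer are already set up (the sizes and text here are just placeholders):

// Draw some text onto an offscreen 2D canvas
const canvas = document.createElement('canvas');
canvas.width = 512;
canvas.height = 256;
const ctx = canvas.getContext('2d');
ctx.fillStyle = '#ffffff';
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx.fillStyle = '#000000';
ctx.font = '32px sans-serif';
ctx.fillText('Hello from a canvas texture', 20, 64);

// Use the canvas as a texture on a plane in the scene
const texture = new THREE.CanvasTexture(canvas);
const plane = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 2),
  new THREE.MeshBasicMaterial({ map: texture })
);
scene.add(plane);

// After editing the text, redraw on the 2D canvas and set texture.needsUpdate = true
// so Three.js re-uploads it on the next frame.

The nice part is that all the regular 2D canvas text APIs come along for free; the tradeoff is that the text is rasterized into the texture, so it can get blurry when you zoom way in.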
Earlier this week I wrote a small Asteroids clone in JavaScript:
Instructions and the link to the game are on the project page.
FYI, here’s how I made the animated GIF: I used QuickTime Player to record a portion of the screen, then used ImageMagick to convert the .mov file to a series of PNGs:
convert asteroids.mov png/asteroids_%03d.png
I deleted the extra frames at the beginning and end that I didn’t care about (unpausing the game, figuring out how to turn the recording off), then used ImageMagick again to make the GIF.
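The command is something along these lines (-delay is in hundredths of a second, so adjust it to match the recording’s frame rate, and -loop 0 makes the GIF loop forever):

convert -delay 5 -loop 0 png/asteroids_*.png asteroids.gif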
I’ve been playing around some more with the L-system code and modified it to animate the angle property and output each frame to a file. I also added some color and started using blending modes for the brushes. Once I clean up the code, I’ll post it to GitHub.
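For the curious, the frame loop itself is nothing fancy; here’s a sketch of the idea, where drawLSystem and saveFrame are stand-ins for the real rendering and file-output code, and ctx/canvas are the existing drawing context and canvas:

// Sweep the angle across the animation and emit one image per frame
const totalFrames = 120;
for (let frame = 0; frame < totalFrames; frame++) {
  // interpolate the branching angle from 15 to 45 degrees
  const angle = 15 + (frame / (totalFrames - 1)) * 30;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  drawLSystem(ctx, angle);                          // stand-in for the actual L-system render
  saveFrame(canvas.toDataURL('image/png'), frame);  // stand-in for however the PNG gets written out
}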
Anyway, here are some of the animation tests (I used Blender to put the frames together into an animation):
And the first one I did, which is a little too long and a little too fast:
I’ve been getting into procedural drawing and generative art some more, and last week I decided to try out L-systems. I ported some Processing code to JavaScript and canvas, then modified it and added controls so I could tweak the values and try things out. I also wrote a handful of additional brushes to get more interesting renders out of it (since plain lines can be kind of boring).
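For anyone who hasn’t played with L-systems before, the core is surprisingly small: string rewriting plus a turtle that walks the result on a canvas. Here’s a minimal sketch of that idea (the rules below are the classic fractal plant, not necessarily the exact ones in my code):

// Expand the axiom by applying the production rules for a number of iterations
function expand(axiom, rules, iterations) {
  let current = axiom;
  for (let i = 0; i < iterations; i++) {
    let next = '';
    for (const ch of current) {
      next += rules[ch] || ch;  // symbols without a rule pass through unchanged
    }
    current = next;
  }
  return current;
}

// Turtle interpretation: F draws forward, + and - turn, [ and ] push/pop the turtle state
function drawTurtle(ctx, commands, step, angle) {
  for (const ch of commands) {
    if (ch === 'F') {
      ctx.beginPath();
      ctx.moveTo(0, 0);
      ctx.lineTo(0, -step);
      ctx.stroke();
      ctx.translate(0, -step);
    } else if (ch === '+') ctx.rotate(angle);
    else if (ch === '-') ctx.rotate(-angle);
    else if (ch === '[') ctx.save();     // remember position and heading
    else if (ch === ']') ctx.restore();  // return to the last branch point
  }
}

// The classic fractal plant: axiom X, rules X -> F+[[X]-X]-F[-FX]+X and F -> FF, angle 25 degrees
const ctx = document.querySelector('canvas').getContext('2d');
const commands = expand('X', { X: 'F+[[X]-X]-F[-FX]+X', F: 'FF' }, 5);
ctx.translate(ctx.canvas.width / 2, ctx.canvas.height);  // start at the bottom center, heading up
drawTurtle(ctx, commands, 4, 25 * Math.PI / 180);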
The algorithm isn’t entirely accurate — at least based off of the axioms and rulesets I plugged in from the Algorithmic Beauty of Plants book — but I like what I’m getting. I’ll do a second app sometime later with the correct algorithm.
Anyway, the code (which is kind of messy at the moment) is on GitHub. Sometime later I plan to add color selectors and more brushes. You can see the rest of these images in my sketches set on Flickr.
The code was pretty much just thrown together; if I were to use this in an actual app, it’d be much cleaner. And for some reason it doesn’t quite work in Safari 4, so you’ll need to use Firefox 3.5.
As for next steps, I’m going to try to rewrite the demo using Processing.js. I’m also planning to extend it to allow panning (so you can have a huge pedigree chart onscreen; some of the Flash-based pedigrees out there do the same thing), and I’m itching to do some kind of genealogy demo à la Snow Stack (Safari only).
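For the panning, the rough idea I have in mind is just tracking a drag offset and redrawing the chart with a translate; a bare-bones sketch (drawPedigree stands in for the existing chart-drawing code):

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
let offsetX = 0, offsetY = 0;
let dragging = false, lastX = 0, lastY = 0;

canvas.addEventListener('mousedown', function (e) {
  dragging = true;
  lastX = e.clientX;
  lastY = e.clientY;
});

window.addEventListener('mouseup', function () {
  dragging = false;
});

window.addEventListener('mousemove', function (e) {
  if (!dragging) return;
  offsetX += e.clientX - lastX;
  offsetY += e.clientY - lastY;
  lastX = e.clientX;
  lastY = e.clientY;
  redraw();
});

function redraw() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.save();
  ctx.translate(offsetX, offsetY);
  drawPedigree(ctx);  // stand-in for the existing chart-drawing code
  ctx.restore();
}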