A year and a half ago I started working on a REPL-based music composition environment called Trill. I shelved the project before long, but since I can see myself picking it up again someday, I figured it’s due a write-up.
The core idea is a text-based REPL for composing music (by which I mean things like hymns, film scores, and folk songs, not so much pop, rock, or electronic), with a focus on making the composition experience more aural and less visual.
An example session will hopefully help anchor the ideas:
> score mysong
> staff piano # add a piano staff
> keysig c
> timesig 4/4
> keytime c 4/4 # alternate
> play v. v. v. iii.... # plays the note sequence (. = quarter note, .. = half note, .... = whole note)
> play v. v. v. iii-.... # - = flat (and v, iii are based on the key signature)
> play v/ v// v/// # eighth, sixteenth, thirty-second notes
> add . # adds what was last played to the active staff
> play V IV^ IV_ # play a V chord and then a IV chord one octave up and again one octave down
> pm vi. # plays the last measure plus whatever notes are specified
> add vi.
> staff violin # adds a violin staff
> play @arpeggiate piano # plays two measures of violin arpeggiation based on the piano staff (where @arpeggiate is a generative method)
And some miscellaneous, unordered notes:
Rather than seeing the notes listed out (either in standard music notation or in text format), you basically only hear them (via play). This is the aural-over-visual part.
Duration is represented by the number of periods (cf. the play examples), as an experiment with making the length feel more visceral — a longer string of periods makes for a longer sound.
I’m also experimenting with using the relative scale notes (the Roman numeral notation) rather than absolute note names (C, D, E, etc.), to make transposing easier.
Not sure yet how dotted notes fit in here.
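To make the notation concrete, here’s a rough sketch of how a single note token might be parsed. Everything here is hypothetical (`parse_note`, `DEGREES`, and the `+` for sharp are my inventions; the original only shows `-` for flat), and chords, `pm`, and dotted notes are left out:

```python
import re
from fractions import Fraction

# Hypothetical grammar for the note syntax above: a lowercase Roman
# numeral, an optional accidental (- flat, + sharp), optional octave
# shifts (^ up, _ down), then a duration: n periods = n quarter-note
# beats (. = quarter, .. = half, .... = whole), or slashes that halve
# the beat (/ = eighth, // = sixteenth, /// = thirty-second).
DEGREES = {"i": 1, "ii": 2, "iii": 3, "iv": 4,
           "v": 5, "vi": 6, "vii": 7}

def parse_note(token):
    m = re.fullmatch(r"([iv]+)([-+]?)([\^_]*)(\.+|/+)", token)
    if not m:
        raise ValueError(f"bad note token: {token!r}")
    numeral, accidental, octaves, length = m.groups()
    octave_shift = octaves.count("^") - octaves.count("_")
    if length[0] == ".":
        beats = Fraction(len(length))          # n periods = n beats
    else:
        beats = Fraction(1, 2 ** len(length))  # each / halves the beat
    return DEGREES[numeral], accidental or None, octave_shift, beats
```

So `parse_note("iii-....")` would give degree 3, a flat, no octave shift, and four beats.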
I threw in the idea of having some kind of generative functionality (@arpeggiate), but that’s pretty raw and not thought through at all yet.
The session transcript could also double as the source for a song: reloading it later would skip the actual playing and just rebuild the staff. Kind of nice to have the full history recorded, I think.
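Replaying a transcript as source could be as simple as skipping the audible commands. A minimal sketch, where the command set and handler interface are hypothetical:

```python
# Rebuild a score from a saved session transcript: audible-only
# commands ('play', 'pm') are skipped; everything else is re-run.
AUDIBLE_ONLY = {"play", "pm"}

def replay(transcript, handlers):
    for line in transcript.splitlines():
        # Drop the "> " prompt and any trailing "# ..." comment.
        line = line.lstrip("> ").split("#")[0].strip()
        if not line:
            continue
        cmd, _, args = line.partition(" ")
        if cmd not in AUDIBLE_ONLY:
            handlers[cmd](args.strip())
```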
To be clear, I have no idea if any of these ideas are actually good. They’re just half-baked thoughts at this point. I did implement a very small proof-of-concept using FluidSynth and Prompt Toolkit, with the play functionality working, but that’s where I left off. (Writing about it now, though, has me excited again. Maybe this will be my homework-avoidance project for the semester.)
The main things I need to sort out when next I work on Trill are how to navigate a score and how to manipulate notes using textual commands and this aural-first system. Basically, some way to say “go to this part and play this much” and “bump this note up this much” or “make this note a chord.” Seems doable; I just haven’t gotten that far yet.
After a ten-year break, I’ve started writing music again. The new piece is called “One Quiet Night” and is a song about Christ that I wrote for a family Christmas party (I played piano, my wife played viola, and two of my wife’s siblings sang). At some point I’m hoping to record it, but until then, the sheet music will have to do. (It’s available in PDF, and the MusicXML is also available.)
Here’s an 8-bit chiptune rendition of the LDS hymn “The Spirit of God,” transcribed straight across from the hymnbook:
Back story: A few years ago I heard about MML, a way to write Nintendo chiptunes. Shaun Inman had put together an MML bundle for TextMate, which came with ppmck, a command-line tool for converting MML to an NSF (Nintendo Sound File).
I was curious what hymns would sound like as chiptunes, so I transcribed the hymn to MML, converted it to NSF, used Audio Overload to export it to WAV, then used Audacity to convert the WAV to MP3.
Because someone will probably bring it up: no, I don’t think it’s sacrilegious to do this. The 8-bit sound is morally neutral. I wouldn’t play this in a sacrament meeting — it wouldn’t be appropriate — but outside of church I see no problem with it.
The web is now the best document delivery platform, and that will become even more true as time goes on.
Documents need to be flexible so you can view them on any size device — desktop, tablet, mobile, anything.
As I’ve been doing more responsive web design, I’ve been thinking that this principle of reflowable content could apply to sheet music. For example, here’s a normal page of sheet music (from the Mutopia project):
If you were to view this music on a smartphone, you’d either have to zoom all the way out (making it super small), or zoom in on just one section of the page and pan around, which can be a lot of two-dimensional panning — not too great if you’re trying to actually play the music.
Neither is ideal. So I’m thinking maybe sheet music should automatically reflow to fit your viewport — the way both text and responsive websites do:
(This is just a quick copy-and-paste mockup — in reality you’d keep the clefs on each line and probably make things a little smaller and so on — but you get the idea.)
So as the viewport shrinks, you would drop the space between notes until you hit an unacceptable squishiness, then drop the number of measures per line by one and set the space between notes to be wider to fill the space again. Rinse and repeat. By the time you’re down to smartphone size, you’d probably be at two measures per line as in the above mockup.
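As a sketch of that rule (the spacing threshold and per-measure note count are made-up numbers):

```python
# Pick measures per line for a given viewport width: shrink note
# spacing until it drops below a minimum, then cut one measure per
# line and let the spacing widen to fill the line again.
MIN_SPACING_PX = 30    # squishiness threshold (made up)
NOTES_PER_MEASURE = 4  # crude average, just for the sketch

def measures_per_line(viewport_px, max_measures=4):
    for measures in range(max_measures, 1, -1):
        spacing = viewport_px / (measures * NOTES_PER_MEASURE)
        if spacing >= MIN_SPACING_PX:
            return measures
    return 1
```

With these numbers, a 1200px desktop viewport keeps four measures per line while a 320px phone drops to two, as in the mockup above.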
One other thought: jumps (repeats, codas) should be hyperlinks, so the player doesn’t have to think about where to go. (And when it jumps, it could flash a highlight or marker on the measure you should start at.) Or, even better, flatten repeats and codas altogether and write the repeated section out, since space isn’t as much of an issue on the web. The disadvantage is that it’s not as easy to tell that the repeated section is in fact identical.
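Flattening could be a straightforward transform on the measure list. A sketch, assuming a hypothetical data model where a repeat is an inclusive (start, end) pair of measure indices:

```python
# Write each repeated span out twice, removing the jump entirely.
def flatten_repeats(measures, repeats):
    out = list(measures)
    # Process repeats right to left so earlier insertions don't
    # shift the indices of repeats we haven't handled yet.
    for start, end in sorted(repeats, reverse=True):
        out[end + 1:end + 1] = out[start:end + 1]
    return out
```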
If you’re only seeing one or two measures per line on, say, a smartphone, and if you’re playing something with more than one staff (piano music, for example), then you’re going to be doing a lot of scrolling. So you probably wouldn’t want to go down quite this much.
There’s also a slight loss of familiarity. When you play a page of music over and over again, you get familiarity grooves etched into your mind; with responsive sheet music, you wouldn’t really get that as much. (If you used the same device every time in the same orientation, then it’d be a little more stable, but you’d be scrolling instead of flipping pages, which anchors things less.)
Ticker-style sheet music
Another idea that came to mind while mulling this over: reduce the music to one line and scroll it automatically, like a ticker. (Ticker-style playback probably already exists, and responsive sheet music may as well, but I couldn’t find either. If someone has implemented these, let me know; I want to use them.)
Two advantages:
Focus. Since the music is one-dimensional, the player doesn’t have to think about other dimensions (down and up, where the end of the page is, etc.).
Linearity. With this style it really would be better to flatten out repeats and codas, so it’s just one long line of sheet music. Then you don’t have to spend any mental cycles thinking about jumping around.
The disadvantages are pretty much the same as with responsive sheet music:
You lose some familiarity — with a full page of music, you get used to its layout and it reminds you of things, whereas a scrolling ticker would always feel somewhat new.
Flattening repeats makes you lose the knowledge that the repeated section you’re starting to play is the exact same as the previous section.
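One detail a ticker would have to pin down is scroll speed. If a beat occupies a fixed width on screen, the tempo determines the speed directly (the pixel width here is a made-up number):

```python
# Scroll rate for ticker-style sheet music:
# pixels per beat times beats per second.
def scroll_px_per_sec(bpm, px_per_beat=48):
    return px_per_beat * (bpm / 60)
```

At 120 bpm and 48px per beat, the line would scroll at 96 pixels per second; a rubato passage would need something smarter than a constant rate.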