A year and a half ago I started working on a REPL-based music composition environment called Trill. After a short while I shelved the project, but since I can see myself working on it again someday, I figured it’s due a write-up.
The core idea here is a text-based REPL for composing music (and by music I mean things like hymns, film scores, and folk songs, not so much pop or rock or electronic), with a focus on making the composition experience more aural and less visual.
An example session will hopefully help anchor the ideas:
```
> score mysong
> staff piano             # add a piano staff
> keysig c
> timesig 4/4
> keytime c 4/4           # alternate
> play v. v. v. iii....   # plays the note sequence (. = quarter note, .. = half note, .... = whole note)
> play v. v. v. iii-....  # - = flat (and v, iii are based on the key signature)
> play v/ v// v///        # eighth, sixteenth, thirty-second notes
> add .                   # adds what was last played to the active staff
> play V IV^ IV_          # play a V chord, then a IV chord one octave up, then again one octave down
> pm vi.                  # plays the last measure plus whatever notes are specified
> add vi.
> staff violin            # adds a violin staff
> play @arpeggiate piano  # plays two measures of violin arpeggiation based on the piano staff
>                         # (where @arpeggiate is a generative method)
> save
```
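To make the note syntax concrete, here is a minimal sketch of how a single note token like `iii-....` might be parsed. The grammar is my own inference from the session above (the `+` sharp accidental is an assumption; only `-` appears in the transcript), not an actual Trill implementation:

```python
import re

# Inferred token grammar:
#   degree:     lowercase Roman numeral i..vii (relative to the key)
#   accidental: optional '-' (flat) or '+' (sharp; assumed)
#   duration:   '.' runs lengthen (. = quarter, .. = half, .... = whole),
#               '/' runs shorten (/ = eighth, // = sixteenth, /// = thirty-second)
TOKEN = re.compile(
    r"^(?P<degree>vii|vi|iv|v|iii|ii|i)"
    r"(?P<accidental>[-+]?)"
    r"(?P<duration>\.+|/+)$"
)

DEGREES = {"i": 1, "ii": 2, "iii": 3, "iv": 4, "v": 5, "vi": 6, "vii": 7}

def parse_note(token):
    """Parse a token like 'iii-....' into (degree, accidental, beats)."""
    m = TOKEN.match(token)
    if not m:
        raise ValueError(f"bad note token: {token!r}")
    dur = m.group("duration")
    if dur[0] == ".":
        beats = len(dur)             # . = 1 beat, .. = 2, .... = 4
    else:
        beats = 1 / (2 ** len(dur))  # / = 0.5 beat, // = 0.25, /// = 0.125
    return DEGREES[m.group("degree")], m.group("accidental"), beats
```

So `parse_note("iii-....")` yields `(3, "-", 4)`: scale degree 3, flattened, four beats.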
And some miscellaneous, unordered notes:
- Rather than seeing the notes listed out (either in standard music notation or in text format), you basically only hear them (via `play`). This is the aural-over-visual part.
- Duration is represented by the number of periods (cf. the `play` examples), as an experiment with making the length feel more visceral: a longer string of periods makes for a longer sound.
- I’m also experimenting with using the relative scale notes (the Roman numeral notation) rather than absolute note names (C, D, E, etc.), to make transposing easier.
- Not sure yet how dotted notes fit in here.
- I threw in the idea of having some kind of generative functionality (`@arpeggiate`), but that’s pretty raw and not thought through at all yet.
- The session transcript could also serve as the source for a song: reloading it later would skip the actual playing and just build the staff. Kind of nice to have the full history recorded, I think.
- Influences that I’m aware of: ABC notation, Lilypond, and Alda.
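The transposition point can be shown with a small sketch: mapping a scale degree to a MIDI pitch given a tonic. Everything here (major-scale-only, the `NOTE_NAMES` table, treating `^`/`_` as an octave shift) is my assumption, not Trill's actual design, but it illustrates why relative notation helps — retargeting the key changes one tonic value, not every note in the score:

```python
# Hypothetical degree-to-pitch mapping, assuming major keys only.
NOTE_NAMES = {"c": 60, "d": 62, "e": 64, "f": 65, "g": 67, "a": 69, "b": 71}
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets for degrees i..vii

def degree_to_midi(degree, key="c", accidental="", octave_shift=0):
    """Map a 1-based scale degree in a major key to a MIDI note number."""
    pitch = NOTE_NAMES[key] + MAJOR_STEPS[degree - 1] + 12 * octave_shift
    if accidental == "-":
        pitch -= 1  # flat
    elif accidental == "+":
        pitch += 1  # sharp (assumed notation)
    return pitch
```

For example, degree 5 in C is MIDI 67 (G); the same `v` token in D becomes MIDI 69 (A) with no change to the score text.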
To be clear, I have no idea if any of these ideas are actually good. They’re just half-baked thoughts at this point. I did implement a very small proof-of-concept using FluidSynth and Prompt Toolkit, with the `play` functionality working, but that’s where I left off. (Writing about it now, though, has me excited again. Maybe this will be my homework-avoidance project for the semester.)
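The transcript-as-source idea could be surprisingly small to implement. Here's a sketch of reloading a saved session: state-changing commands (`staff`, `add`, ...) are re-run, while the purely aural ones (`play`, `pm`) are remembered but not sounded. The command names come from the example session; the state shape and dispatch are my own assumptions:

```python
AURAL_COMMANDS = {"play", "pm"}

def reload_transcript(lines):
    """Rebuild score state from a saved session without playing anything."""
    state = {"staves": [], "last_played": None}
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        cmd, *args = line.split()
        if cmd in AURAL_COMMANDS:
            state["last_played"] = args       # remember, but don't play
        elif cmd == "staff":
            state["staves"].append({"name": args[0], "notes": []})
        elif cmd == "add":
            # 'add .' commits whatever was last played to the active staff
            notes = state["last_played"] if args == ["."] else args
            state["staves"][-1]["notes"].extend(notes)
        # keysig, timesig, save, ... would be handled similarly
    return state
```

Running this over the example session would leave the piano staff holding the committed notes, with every `play` silently skipped.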
The main things I need to sort out when next I work on Trill are how to navigate a score and how to manipulate notes using textual commands and this aural-first system. Basically, some way to say “go to this part and play this much” and “bump this note up this much” or “make this note a chord.” Seems doable; I just haven’t gotten that far yet.