Saturday, July 22, 2017

Expremigen: expressive midi generation

Problem?

A lot of text-based midi specification languages exist, but those I know of suffer from one of the following drawbacks:
  • either the syntax is very low-level, as in SKINI, which basically translates all midi events into text. This is great for round-tripping between midi and text, but it's not great for writing or interpreting it "manually".
  • or the syntax makes it easy to specify notes and rhythms, but lacks a way to easily specify how parameters evolve over time (e.g. crescendos, rallentandos, rubato, staccato vs. legato). These things are often left as "future work", or they feel like they were implemented as an afterthought and lack expressive power.

Approach?

I've been working with supercollider a lot lately, and one of its great features is an extensive pattern library. Patterns are like templates for generating music events. After thinking about how to solve the above problems in one easy-to-use language, I came to the conclusion that supercollider's patterns are an ideal foundation for building a solution.

After some searching I found the isobar library, which implements supercollider's patterns in python 2. I noticed some drawbacks in isobar's approach though: it only supports python 2 at the moment, and its patterns appear to use eager evaluation, meaning that you cannot specify patterns with very high ("infinite") repeat counts without overflowing your computer's memory and exploding the required CPU time. Instead of porting isobar to python 3, I therefore decided to create a new implementation of supercollider's patterns, based on python generators (i.e. using lazy evaluation), in a new library called expremigen (expressive midi generation).
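
To give an idea of what lazy evaluation buys you, here's a tiny sketch of the concept (not expremigen's actual API, just an illustration; the pseq name merely mirrors supercollider's Pseq): a pattern implemented as a python generator can describe an "infinite" stream of events, and whoever consumes it only pulls the values it actually needs.

    import itertools

    def pseq(values, repeats=float("inf")):
        # Lazily cycle through values; even with repeats=inf nothing is
        # precomputed, because values are yielded one at a time on demand.
        count = 0
        while count < repeats:
            for v in values:
                yield v
            count += 1

    # Take only the first 8 notes from an "infinite" pattern.
    notes = pseq([60, 64, 67, 72])  # c major arpeggio as midi note numbers
    print(list(itertools.islice(notes, 8)))  # [60, 64, 67, 72, 60, 64, 67, 72]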

The next step I took was to couple these patterns with the tweening possibilities found in my earlier vector animation library, pyvectortween. At that point I had a system that could flexibly animate midi properties like volume over time.
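
Conceptually, that boils down to sampling a tweening (easing) function once per note and feeding the result into a midi property. Here's a rough sketch of the idea with a hand-rolled linear tween (pyvectortween offers many more easing functions; this is not its actual API):

    def linear_tween(start, end, t):
        # Interpolate between start and end for t in [0, 1].
        return start + (end - start) * t

    def crescendo(vel_from, vel_to, num_notes):
        # Yield one midi velocity per note, rising linearly: a crescendo.
        for i in range(num_notes):
            t = i / max(num_notes - 1, 1)
            yield round(linear_tween(vel_from, vel_to, t))

    # Animate volume from roughly pp (33) to ff (112) over 8 notes.
    print(list(crescendo(33, 112, 8)))  # [33, 44, 56, 67, 78, 89, 101, 112]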

While creating examples, it became clear that writing code might scare off some potential users (especially those converting existing music rather than generating new music algorithmically), and it was not close enough to my initial goal of creating a new midi dsl (domain specific language). So I set out to define a syntax and came up with a new midi specification language called MISPEL. MISPEL reuses the best ideas (imho) of existing systems like lilypond and abc notation and adds one of its own: property animations. For a more detailed introduction to MISPEL and some examples, please see the github readme file.

What does MISPEL look like? Where's the code?

Please see expremigen's github page for the complete source code of expremigen (Python 3 only, GPLv3 license) and some examples. This is of course a work in progress: despite a fair number of unit tests, bugs are still to be expected, and things might change in the future.

Future?

The language is already quite capable, surpassing other systems in some areas and no doubt lacking in others, but it is probably not feature complete yet. For now it concentrates on specifying notes, durations, and animatable properties like volume (crescendo, decrescendo), played duration (staccato, legato), lag (rubato), tempo (accelerando, ritardando), midi control changes and pitchbend. Other things like program changes and sysex messages are not supported. Time will tell if and how this system needs to evolve.
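
To make "animatable" a bit more concrete, here's one more conceptual sketch (again not MISPEL syntax or expremigen's API): played duration can be treated as a fraction of the nominal note duration that is animated from note to note, so values around 0.5 sound staccato and values approaching 1.0 sound legato.

    def animate_played_duration(nominal_durations, frac_from=0.5, frac_to=1.0):
        # Scale each nominal duration by a fraction that moves linearly
        # from frac_from (staccato-ish) to frac_to (legato-ish).
        n = len(nominal_durations)
        for i, dur in enumerate(nominal_durations):
            t = i / max(n - 1, 1)
            yield dur * (frac_from + (frac_to - frac_from) * t)

    # Eight quarter notes of 0.5 s gradually turning from staccato into legato.
    print([round(d, 3) for d in animate_played_duration([0.5] * 8)])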

One potentially interesting thing would be to somehow base my bluegrass music style compiler on MISPEL instead of lilypond. I expect this would quickly reveal some of the gaps in the language.

Another interesting thing could be to investigate reverse engineering midi files into MISPEL, to use as a starting point for adding expressivity.