Saturday, August 19, 2017

Audio transitions in supercollider

Problem?

In the domain of images and videos, programs like powerpoint and video editors provide a wide range of "transitions". If you think of powerpoint, e.g., you can switch from one slide to the next directly ("cut"), or you can gradually fade out the first one and fade in the second one, or you can push in, push out, wipe according to a shape, split, reveal, random bars, uncover, cover, flash, morph, ... too many to name.

In the audio domain, however, the options seem quite limited. I'm only aware of direct cuts and cross-fading as audio effects that are offered by default. Can we think of other audio transitions as well?

Approach

In this blog post I will propose a few other audio transitions as a proof of concept. The code can be found at http://sccode.org/1-57H.

Setting up supercollider

To illustrate the transitions, let's first create two sounds that we want to transition between.
\sound1 is a sine wave with randomly varying frequency (frequency changes 4x per second).
\sound2 is a sawtooth with varying duty cycle. Both sounds are different enough that we can hear the transitions taking place.

I let the 2 sound producing synths output their audio to audio busses.
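
The original code lives in the screenshots and on sccode.org, but a minimal sketch of the two sources could look like this (bus names, frequencies and amplitudes are my own assumptions):

(
s.waitForBoot({
    ~bus1 = Bus.audio(s, 1);
    ~bus2 = Bus.audio(s, 1);

    SynthDef(\sound1, { |out|
        // sine wave whose frequency jumps to a new random value 4x per second
        var freq = LFNoise0.kr(4).exprange(200, 800);
        Out.ar(out, SinOsc.ar(freq) * 0.2);
    }).add;

    SynthDef(\sound2, { |out|
        // sawtooth-like wave with a slowly varying duty cycle
        var width = LFTri.kr(0.2).range(0.1, 0.9);
        Out.ar(out, VarSaw.ar(220, 0, width) * 0.2);
    }).add;
});
)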

Next to the sound producing synths I have a bunch of transition synths that calculate the transition between \sound1 and \sound2 as a kind of audio effect. Each transition synth takes a pos argument. Setting it to -1 corresponds to listening only to \sound1, whereas setting it to +1 corresponds to listening only to \sound2. For values between -1 and 1 the transition is taking place.

For demo purposes, the pos argument in the transition synths is ignored and instead driven from a fancy counter specified by an envelope (Env). This fancy counter ensures that supercollider first plays \sound1 for 5 seconds, then gradually transitions to \sound2 during 5 seconds, then keeps playing \sound2 for 5 seconds, then transitions back to \sound1 during 5 seconds and finally plays \sound1 again for 5 seconds.
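In code, the fancy counter could be specified like this (a sketch based on the description above):

// hold -1 (only \sound1) for 5 s, glide to +1 over 5 s, hold,
// glide back over 5 s, and hold -1 again for the final 5 s
~posEnv = Env([-1, -1, 1, 1, -1, -1], [5, 5, 5, 5, 5]);
// inside a transition SynthDef, pos could then be driven by
// pos = EnvGen.kr(Env([-1, -1, 1, 1, -1, -1], [5, 5, 5, 5, 5]));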


Groups are needed to ensure that the transition synths are executed on the server after the sound producing synths. Otherwise we wouldn't get audio output. (After all, it's impossible to fiddle with a sound that hasn't been calculated yet.)
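A sketch of that ordering with groups (names assumed):

~srcGroup = Group.new(s);
~fxGroup = Group.after(~srcGroup);
// sources go into ~srcGroup, transition synths into ~fxGroup:
// Synth(\sound1, [\out, ~bus1], ~srcGroup);
// Synth(\xfade, [\in1, ~bus1, \in2, ~bus2, \out, 0], ~fxGroup);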

In the fork section we will instantiate all effect synths one by one so we can hear all of them in succession. In the first screenshot only the "direct cut" is added. This is a transition that basically does nothing: it just stops the first sound and starts the second one in its place.

Familiar effects

cut

A cut (in audio this is sometimes called a "butt splice") is where you stop one sound and start another. This basically means: "no effect". In supercollider this can be implemented by simply stopping one synth and starting the other. This is already illustrated in the screenshot above.

crossfade or xfade

A crossfade happens when you gradually lower the volume of one signal while gradually increasing the volume of a second signal. For a while both signals sound simultaneously. This is supported in virtually every digital audio workstation (DAW) in existence. Often one can choose whether to change the volume keeping a "constant gain" (linear curves), "constant power" (equal-power curves, e.g. square-root or sine-shaped) or even other types like "exponential" (using logarithmic curves).

In the following screenshot I indicated what was changed to add a new effect. For the next effects I will only show the new effect synths; you can easily update the fork section yourself.


So for cross-fading with equal power you could do something like this:
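A minimal sketch using XFade2, which performs an equal-power crossfade and already uses the -1..+1 convention for its pan input (the bus arguments assume the setup sketched earlier):

SynthDef(\xfade, { |out = 0, in1, in2, pos = -1|
    var a = In.ar(in1, 1);
    var b = In.ar(in2, 1);
    Out.ar(out, XFade2.ar(a, b, pos));
}).add;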


Seldom explored effects

Do we have to stop at direct cut and crossfade? Of course not... let's try something else. Note: these techniques may actually be well established; it's just that I've never come across them "in the wild". But then, not many people use transitions in powerpoint and video editing either, because if you overdo them, they tend to lower the overall end-user experience.

HPF curtain

In this transition, we increase the cutoff frequency of a high pass filter (HPF) on the signal that is to disappear, while we decrease the cutoff frequency of a high pass filter on the signal that is to appear. The implementation of this transition is not perfect, as it has some audible artifacts at positions close to -1 and +1. It might help to use a high pass filter with a steeper roll-off, or to combine this technique with some amplitude control.
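A sketch of the idea (the 20 Hz to 20 kHz exponential sweep is my own choice):

SynthDef(\hpfcurtain, { |out = 0, in1, in2, pos = -1|
    var a = In.ar(in1, 1);
    var b = In.ar(in2, 1);
    var cutoff1 = pos.linexp(-1, 1, 20, 20000);  // sound1 is filtered away upwards
    var cutoff2 = pos.linexp(-1, 1, 20000, 20);  // sound2 descends into audibility
    Out.ar(out, HPF.ar(a, cutoff1) + HPF.ar(b, cutoff2));
}).add;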

LPF curtain

Very similar to the previous transition, we decrease the cut-off frequency of a low pass filter (LPF) on the signal that must disappear, and crank up the cut-off frequency of a low pass filter on the signal that must appear instead.
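The same skeleton with low pass filters and the sweeps swapped (again a sketch, not the original code):

SynthDef(\lpfcurtain, { |out = 0, in1, in2, pos = -1|
    var a = In.ar(in1, 1);
    var b = In.ar(in2, 1);
    var cutoff1 = pos.linexp(-1, 1, 20000, 20);  // sound1 closes down
    var cutoff2 = pos.linexp(-1, 1, 20, 20000);  // sound2 opens up
    Out.ar(out, LPF.ar(a, cutoff1) + LPF.ar(b, cutoff2));
}).add;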

Wash out

The sound that is to disappear drowns in reverb, whereas the other sound emerges from the reverb.
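One way to sketch this with FreeVerb (the original implementation may differ): the wet/dry mix of the outgoing sound increases while an equal-power crossfade hands over the amplitude.

SynthDef(\washout, { |out = 0, in1, in2, pos = -1|
    var a = In.ar(in1, 1);
    var b = In.ar(in2, 1);
    var wet1 = pos.linlin(-1, 1, 0, 1);  // sound1: dry -> fully wet
    var wet2 = pos.linlin(-1, 1, 1, 0);  // sound2: fully wet -> dry
    Out.ar(out, XFade2.ar(FreeVerb.ar(a, wet1, 0.9), FreeVerb.ar(b, wet2, 0.9), pos));
}).add;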

Pixelate

The sounds become coarser and then more crisp again. This fragment features some complication involving Select.ar, because LFPulse does not behave mathematically correctly for duty cycle width=0 and width=1 (it generates audible spikes, which ruin its function as an audio stream selector), so near duty cycles of 0 and 1 I had to select pure versions of the audio.
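A hedged reconstruction of the idea (the 100 Hz chop rate and the 0.01/0.99 thresholds are guesses): an LFPulse alternates between the two sources, with its width moving from 0 (all \sound1) to 1 (all \sound2), and Select bypasses the pulse near the extremes.

SynthDef(\pixelate, { |out = 0, in1, in2, pos = -1|
    var a = In.ar(in1, 1);
    var b = In.ar(in2, 1);
    var width = pos.linlin(-1, 1, 0, 1);
    var gate = LFPulse.ar(100, 0, width);         // chops between the two sources
    var chopped = (gate * b) + ((1 - gate) * a);
    // near width = 0 and width = 1, bypass the misbehaving LFPulse:
    var index = Select.kr(width < 0.01, [Select.kr(width > 0.99, [1, 2]), 0]);
    Out.ar(out, Select.ar(index, [a, chopped, b]));  // 0: pure a, 1: chopped, 2: pure b
}).add;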

Push out

The new sound shifts in from the left side and pushes out the old sound to the right.
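
A stereo sketch of the idea (the panning trajectories and fades are my own interpretation of "pushing"):

SynthDef(\pushout, { |out = 0, in1, in2, pos = -1|
    var a = In.ar(in1, 1);
    var b = In.ar(in2, 1);
    var oldpan = pos.linlin(-1, 1, 0, 1);   // old sound: center -> hard right
    var newpan = pos.linlin(-1, 1, -1, 0);  // new sound: hard left -> center
    var oldamp = pos.linlin(-1, 1, 1, 0);   // fading while being pushed out
    var newamp = pos.linlin(-1, 1, 0, 1);
    Out.ar(out, Pan2.ar(a, oldpan, oldamp) + Pan2.ar(b, newpan, newamp));
}).add;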


Can you think of other transitions?


Saturday, August 12, 2017

Scoring a movie with supercollider, blender, python and osc - part II: keyframing and animation curves

Problem?



In the previous part of this tutorial, I described a way to insert markers on blender's timeline, and to use blender's python scripting abilities to convert these markers into OSC messages which then can be interpreted in supercollider to perform commands (e.g. start and stop patterns or sound effects). This approach works very well for sending discrete commands: things like start this, stop that. If you haven't read that part yet, please go read it as you will need to understand the concepts explained there to follow part II.

In this blog post, I want to show how one can use blender's powerful keyframing system with animation curves to continuously update parameters in supercollider sketches. This technique adds extra possibilities for driving supercollider from blender.

Approach?

In the blender scene, I will add custom properties, and animate those via keyframes and animation curves. I will extend the python script I developed in part I to send the parameter value updates to supercollider.

What is a keyframe?

A keyframe represents the value (the state) of one or more parameters at a moment in time. Suppose you have an X-position of an object in a 3d scene. At time frame=0 you can set X=0 and insert a keyframe on frame=0. At time frame=100, you can set X=10 and insert a new keyframe on frame=100. Now for frames between 0 and 100, the value of X can smoothly vary between 0 and 10, that is, the values of X can be interpolated between the keyframes (you can also ask the system for an abrupt change instead of interpolation). The interpolation can be linear, or can include "easing".

Adding custom properties

Blender is extremely keyframeable. Literally every parameter you see in the UI can be keyframed. Blender is also very extensible: users can add new parameters ("custom properties") in the UI. And those new parameters can be keyframed. 
To add a custom property, first switch to blender's default view


Then, on the right hand side, go to the scene tab, and in the custom properties section, click the Add button.


As we did in part I of the tutorial, we will again use a naming convention for the properties so that our python script knows what to send to supercollider from the property name. The proposed naming convention is that a property named SC_something42 will be sent to supercollider as OSC command /something42 with the property value as argument. We will write the python script so that only changes in values are sent. Values that don't change from one frame to the next will not be resent to supercollider. The intention is to avoid creating an avalanche of useless OSC messages.


After clicking the add button, we can set up our property using the naming convention. Suppose I want to automate a frequency in some synth. A possible custom property name could be SC_animfreq. This would trigger sending an OSC message /animfreq to supercollider with the value of animfreq as argument. I've set the default value to 440, and the min and max to 20 and 20000 respectively.


Now on the blender timeline, navigate the playhead to - say - frame number 10 (the easiest way is to type 10 in the current frame edit box), and set the value of custom property SC_animfreq to - say - 220. Hover your mouse cursor over the value 220 and press the letter "i" (for "i"nsert keyframe). The edit box with value 220 should change color to indicate that a keyframe has been inserted for the custom property.


Next, move the playhead to - say - frame 100, change the value of the custom property to 880, hover the mouse cursor over the 880 and "i"nsert another keyframe.


Now as you drag the playhead between frames 10 and 100, watch the custom property value box. You should see the number change gradually. This is the linear interpolation that happens between keyframes by default.

If we want to alter this behavior of linear interpolation, we need to switch to the "graph editor". A full explanation of everything that can be done in the graph editor falls outside the scope of this blog post (i.e. you'll need to find some blender tutorials about the graph editor on youtube if you want to dive deeper - let's just say that you can do some funky stuff).


After some zooming in x and y direction you can see the linear interpolation drawn as a curve.


By pressing "T" while hovering the mouse cursor over the graph area you can change the interpolation type. Since we're animating a frequency, it makes sense to set the interpolation type to exponential. Be sure to look for tutorials on F-curve modifiers to see some fancy stuff that is possible with the curves. For the purpose of this tutorial we won't go any deeper.


So far we have added an animated custom property. Now let's tell blender to send it to supercollider in the form of OSC messages. To do so I will extend the python script developed in part I of the tutorial. The additions with respect to part I are marked in the figure.
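
Since the screenshot with the additions isn't reproduced here, the following is a hedged standalone sketch of just the additions (the _last_sent cache and function name are my own; it assumes python-osc and supercollider listening on its default port 57120):

import bpy
from pythonosc import udp_client

client = udp_client.SimpleUDPClient("127.0.0.1", 57120)
_last_sent = {}

def send_custom_properties(scene):
    for name in scene.keys():
        if not name.startswith("SC_"):
            continue  # only properties following the naming convention
        value = scene[name]
        if _last_sent.get(name) != value:  # only send changes
            client.send_message("/" + name[3:], value)  # SC_animfreq -> /animfreq
            _last_sent[name] = value

bpy.app.handlers.frame_change_pre.append(send_custom_properties)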



The OSC messages can be received in supercollider in exactly the same way as in part I. The received values then e.g. can be used to set the freq argument in a synth (or the density in some cloud of sound grains, or whatever you feel like doing).
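On the supercollider side, something along these lines would do (~someSynth is a placeholder for whatever you want to control):

OSCdef(\animfreq, { |msg|
    // msg[0] is the OSC path, msg[1] the property value sent by blender
    ~someSynth.set(\freq, msg[1]);
}, '/animfreq');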

A blender file and supercollider file with a simple demonstration can be downloaded from google drive.




Scoring a movie with supercollider, blender, python and osc.

Problem?

I want to score a movie with a generative supercollider score (that is, insert supercollider patterns and sound effects, ...). How do I sync the sound to what happens in the video? What if I want to edit the video at the last minute? Can I automatically resync parts of my audio to the edited video without having to manually edit start and stop times of patterns and sound effects in my supercollider program? I want a reasonably efficient, non-intrusive workflow, preferably without gazillions of context switches between different tools, and without needing a calculator to convert between video frame numbers and milliseconds all the time. I want to be able to scrub through the video (either fast, or frame by frame if needed) to find the correct places for starting and stopping sounds, and to trigger the starting and stopping of sounds and patterns directly from the video timeline.

Pretty much all of the above (and much, much more) is possible by leveraging existing mature tools and technologies. And this tutorial will explain one way of doing it. Once you understand how it all works, the new possibilities that are created by coupling these technologies are simply overwhelming. The tutorial is developed on Linux, but I will use only cross-platform tools, so you should be able to replicate it on your own system. (Let me know if you tried and if you succeeded :) )

Approach

I will use the most powerful video editor currently available for Linux: the often underestimated Blender Video Sequence Editor (VSE). Blender is at version 2.78c at the time of writing. Youtube has a series of very good VSE tutorials in case you want to dive deeper later on. Blender is fully scriptable in Python 3 and we will exploit these abilities to trigger OSC commands from markers inserted on the blender timeline (this addresses the minimal context switching, video preview, avoiding manually converting video frame numbers to milliseconds). These OSC commands will be received in supercollider and used to start and stop the sounds and patterns.

If you move videos around on the timeline, blender offers a way for you to choose for each marker whether to either keep it locked to its current position, or whether to move it along with the video fragment. This addresses the resyncing after editing part mentioned in the problem statement.

In supercollider the OSC messages will be received and used e.g. to trigger patterns or sound effects. We will do this in the language side of supercollider. Sending OSC directly to the underlying sound synthesis server is also possible, but for most purposes way too low-level and complex to be practical.

Python scripting will enable us to add some intelligence to the system: e.g. often-needed OSC commands (like start pattern, stop pattern, start sound effect, ...) can be encoded in the marker name, whereas less generic commands can easily be handled by editing the python script embedded in Blender. If this still sounds vague, don't worry. We'll come back to it.

Prerequisites


Setting up supercollider

We will not spend too much time on setting up supercollider for now, as our first focus is on driving supercollider from blender's timeline. But we need a minimal supercollider program that can receive OSC messages and react to them - if only to test that blender works as expected.

Here's such an example: whenever an OSC message is received, it prints out the message and its arguments. For now we react to only 2 messages: "/stap" (shorthand for "/startPattern") and "/stop" (shorthand for "/stopPattern"). The name of the pattern to be started/stopped will be an argument in the OSC message.
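
The original program is shown as a screenshot; a minimal equivalent might look like this:

(
OSCdef(\stap, { |msg| ("starting pattern: " ++ msg[1]).postln }, '/stap');
OSCdef(\stop, { |msg| ("stopping pattern: " ++ msg[1]).postln }, '/stop');
)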


Setting up blender

Start blender, and check in the File -> User Preferences -> File tab that "Auto Run Python Scripts" is enabled. This is a convenience setting so we avoid having to manually start the python script we will develop when opening blender and playing back the video. Note that this is convenient, but may present a security risk if you install blender add-ons that you didn't make yourself, so you may want to reconsider. If you don't enable the setting, you will always have to remember to start the python script manually. I'll show how to do that later.


Next, switch to video view in the top menu.


and insert a video on the timeline using the Add menu item just above the timeline.


I've selected a random movie file from archive.org (https://archive.org/details/TheSoilers) and importing it shows two strips: the green strip represents the audio stream, and the blue strip represents the video stream.


Let's remove the audio stream (strictly speaking this is not needed, but it's a silent movie so it just takes screen space). Right mouse click on the green strip to select it, then type the X key to delete it (think "eXtract") and confirm the deletion.


Then position the green bar (the playhead) on frame 1 by dragging it with the mouse or by entering 1 in the current frame edit box. As you drag the green bar over the video strip, you can see the movie play in the preview window. This is called scrubbing. It makes it easy to find the frames where certain sounds or patterns should start or stop. Next, in the marker menu, select Add Marker (or faster: hover your mouse cursor over the timeline and press the M key).


A marker appeared with the name F_01 (which is shorthand for Frame_01). To rename the marker, right mouse click it in the timeline to select it if not already selected, then press ctrl+M on the keyboard to rename it.



We'll rename it in such a way that the name itself tells us which OSC command to send to supercollider. This is just so we can quickly add commands on the timeline later without having to edit Python code all the time (minimizing context switches!). Let's e.g. rename the marker to SC_stap_intropattern. Here, SC_ means that the marker will be interpreted as pertaining to supercollider, "stap" is shorthand for "STArtPattern", i.e. a command we will send to supercollider in the form of an OSC message, and "intropattern" is the name of a pattern that we will later define in supercollider. It will be sent as an argument in the OSC message.


Next, go to frame 341 (you can easily verify in the preview window that this is the last frame before some new text appears). Also, set the end frame to 500. This ensures that we will only render the first 500 frames of the movie (there's no point in rendering more for this tutorial). If you forget to increase the end frame, it will remain at the default of 250 and playback/rendering in blender will not be able to pass beyond the 250 frame limit.

If you find at this point that the video strip was not put in the timeline at frame 1, you can right click on the strip to select it, then type the "g" key for "grab", then type the "x" key to constrain the movement in x direction, then reposition it with the mouse and left mouse click to confirm the new position. Blender has many efficient ways of working with selections and movements, and you will greatly benefit from doing some blender tutorials to get to know all these useful tricks.


Now insert and rename a marker SC_stop_intropattern as before: mouse cursor over timeline, M, ctrl+M, SC_stop_intropattern, OK. (Notice how fast this goes?)


Done? Great! We've created some fancy markers, but blender of course has no idea yet that these are supposed to trigger OSC messages to be sent to supercollider. We now have to invoke the magic of Python scripting to actually make that happen.

Scripting blender with python

If you haven't done so, install python 3.x from www.python.org, and make sure you have the python-osc module installed. Usually installing a module involves opening a terminal window, navigating to the folder where python is installed, find the pip.exe executable and run on the command line:

pip install python-osc

First, switch blender to scripting view on the top menu.


We will add a new python script first. To do so, click the "new" button in the script window.


As soon as you clicked the new button, the menu changes and you can type a name for the script. I chose trigger.py as name. It's important to give a name that ends in .py because that ungrays the register checkbox. Make sure the register checkbox is not grayed out and tick it. The effect of "Register" is to run this python script automatically when the .blend file is loaded in the future (but only if you allow automatic running of python scripts in the user preferences). It's just a detail really, but it allows us to work more efficiently in the future.

The black on gray text on top of the screenshot that looks like python code is generated automatically by Blender whenever you click user interface elements. This shows what you would have had to write in python to achieve the same effect as when clicking the user interface elements. You can safely ignore it for this tutorial.


In the gray script area, we can now write our Python script. The approach I'm going to use is to register a frame_change_pre handler. This is a callback function that is called by Blender whenever the frame number changes during playback or while dragging the playhead. Note that in order to type into the script area, you need to make sure that the mouse cursor hovers over the gray area. When it hovers over other areas (e.g. the timeline), the keyboard keys get other meanings and you may get frustrated quickly :)

In the callback function we check whether the animation is running (we don't want to send OSC when simply scrubbing the timeline) and whether there's a marker at the current frame number. If one or more markers are found at the current position, we check for each whether it is one of the markers that triggers automatic OSC messages, or a marker that has a manually registered OSC message attached to it. In the example code I used manual registration to react to the markers with names "init", "cleanup" and "explosion" (lines 19-21). Note that explosion is added for demo purposes only; it'd make much more sense to add an automatic marker for triggering synths in supercollider. Marker names triggering automatic OSC commands can also be manually registered in the manual_markers map. In that case both the automatic and the manually registered version are executed.

Blender allows inserting multiple markers with the exact same name, so there's no problem retriggering the same commands multiple times. (The python code allows adding an optional number suffix, e.g. _234, to automatic marker names - the suffix is ignored for OSC generation - because not everyone feels comfortable having the exact same marker name in multiple places.) Blender also allows adding multiple markers on the exact same location, so there as well we hit no limitations on sending multiple commands. Using manual registration, you can also attach multiple OSC messages to a given marker. Manual registration boils down to editing the python script.
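
To make the mechanism concrete, here is a hedged sketch of such a handler (the manual_markers map and the SC_ parsing follow the description above, but this is a reconstruction, not the exact script from the screenshot):

import re
import bpy
from pythonosc import udp_client

client = udp_client.SimpleUDPClient("127.0.0.1", 57120)

# markers handled by explicit registration: marker name -> (osc path, args)
manual_markers = {
    "init": ("/init", []),
    "cleanup": ("/cleanup", []),
    "explosion": ("/explosion", []),
}

def on_frame(scene):
    if not bpy.context.screen.is_animation_playing:
        return  # don't send OSC while merely scrubbing
    for marker in scene.timeline_markers:
        if marker.frame != scene.frame_current:
            continue
        name = re.sub(r"_\d+$", "", marker.name)  # strip the optional number suffix
        if name in manual_markers:
            path, args = manual_markers[name]
            client.send_message(path, args)
        if name.startswith("SC_"):
            # automatic marker: SC_stap_intropattern -> /stap with arg "intropattern"
            parts = name.split("_")
            client.send_message("/" + parts[1], parts[2:])

bpy.app.handlers.frame_change_pre.clear()  # avoid double registration (pitfall 1 below)
bpy.app.handlers.frame_change_pre.append(on_frame)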

Blender pitfalls

  1. Every time you've edited the python script, be sure to press the "run script" button to see the effect. There's a call to bpy.app.handlers.frame_change_pre.clear() to ensure that the same handler is not registered multiple times. If you didn't set up automatic running of python scripts in the user preferences, you need to press this button also after you loaded your .blend file.
  2. The frame_change_pre handler is not executed in frame N if the playhead starts on frame N.
  3. Blender playback stops automatically when the end frame is reached. Make sure you set it high enough in the UI.
  4. If you use python scripting, it's best to start blender from the command line, as python errors (syntax errors) will be displayed in the terminal from which blender was started.
  5. If you try to type/paste into the script area, make sure the mouse cursor is somewhere hovering over the area. Keypresses are interpreted differently when the mouse is outside that area. Similarly, if you try to save your .blend file with ctrl+s key, be sure the mouse cursor is outside the script area (e.g. hover it over the timeline), otherwise blender will try to save the python code only into a file.
  6. If you start to move around video strips, you may want the markers to move along (or in other cases you may want the markers to remain where they are). By default, the markers don't move along with the video, but if you check the "sync markers" checkbox in the video editing layout's view menu, all selected markers will move along when you move a video strip. Selecting markers happens by right mouse click in the timeline (on the marker). You can add more markers to a selection with shift + right mouse click. You can select many markers at once by hovering over the timeline with the mouse, pressing b for "box select" and drawing a rectangle over all the markers you want to select. These are common techniques in blender. If you are serious about working with blender, make sure to learn the basics.

Supercollider pitfalls

  1. If you use Ndef/Pdef/Tdef, you will want to make sure they are quantized to 0, otherwise the sounds may start late and ruin the video-frame accurate timing we're striving for (see the sketch after this list).
  2. If you redefine the guts of Ndef/Pdef/Tdef, also think carefully about the fadeTime. It may be wanted, or it may cause your sound to mess up.
  3. Be careful to free resources you don't need anymore or the sound generation might crash before the rendering is finished.
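
For example, using the intropattern name from earlier (a sketch):

Pdef(\intropattern).quant_(0);     // pitfall 1: start immediately, not on the next beat
Pdef(\intropattern).fadeTime_(0);  // pitfall 2: no crossfade when redefining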

Intermediate test

In supercollider, start the test program that listens for OSC messages. In a real-life situation you would also want to make sure some DAW is set up for recording the audio generated by supercollider.

In blender, set the current frame to 0. Because of pitfall #2 in the list of Blender pitfalls above, and because we added a marker on frame 1, we should start playback from frame 0; otherwise the first OSC message will not be sent. I've found there's no need to also set the start frame to 0 for this: setting the current frame is enough.

When everything's set up click the play button in blender. In Blender, you should see the playhead progress over the timeline, and if you are in video editing mode you should see the video play in the preview window. In the supercollider "Post window" you should see the OSC messages appear as dictated by the markers at frame 1 and frame 341. Playback will automatically stop when the end frame number is reached. Make sure to set the end frame number high enough! (blender pitfall #3)

Performance considerations

Blender allows you to add many video strips and to make complex transitions between video strips. You can also project one or more movies on 3d objects, and even insert and render complete 3D scenes in multiple layers. For truly complex scenes, the actual rendering can take up to several hours per frame. Clearly this does not match with the requirement of sending supercollider commands to generate audio in real-time... Add to that that supercollider may also need considerable CPU power to generate its sounds. Can we somehow solve this!?

Think about it... the only thing that drives supercollider are the inserted markers. And the markers are not tied to a video or scene, they are tied to the timeline. Once all your markers are set up in the right places and with the right names you can remove or mute all video strips and blender 3d scenes (save a backup first!!!) and just let blender run with the empty timeline (that is: empty except for the markers).

The markers will instruct supercollider to generate sound at the right moments in time with negligible CPU usage from Blender, and you can record the resulting sound in a digital audio workstation. (E.g. on Linux, ardour is a perfect match: the JACK protocol allows routing the sound out of supercollider and into ardour directly, and since both Blender and Ardour support the JACK transport protocol, you can later also easily mix in music and samples from sources other than supercollider.)

Finally, add the recorded audio as an audio strip on top of the original scene/video (here's where you will be happy to have made a backup ;) ) and render the whole to a final movie + sound, at glacial speeds if necessary. For a simple video like the one we made here, on recent enough PCs none of this performance tuning is really necessary: everything runs happily in real-time.

A working example

Let's turn our supercollider test program into something that actually produces sound. I'll add support for the init, cleanup and explosion commands that are used in the rest of the article as well. I'll leave it up to you to edit the blender timeline to trigger some of the newly supported commands, and to add support for new commands as you need to perform your tasks. Be careful about the explosion sound... IT'S LOUD!!!!!
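
As the original code isn't reproduced here, a stripped-down sketch of its shape might look like this (the synth and pattern contents are placeholders):

(
s.waitForBoot({
    SynthDef(\explosion, { |out = 0, amp = 0.5|
        var env = EnvGen.kr(Env.perc(0.01, 3), doneAction: 2);
        Out.ar(out, (BrownNoise.ar(amp) * env).dup);
    }).add;

    Pdef(\intropattern, Pbind(\degree, Pseq([0, 2, 4, 7], inf), \dur, 0.25));

    OSCdef(\stap, { |msg| Pdef(msg[1].asSymbol).quant_(0).play }, '/stap');
    OSCdef(\stop, { |msg| Pdef(msg[1].asSymbol).stop }, '/stop');
    OSCdef(\explosion, { Synth(\explosion) }, '/explosion');
});
)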

Conclusion

My initial experiments indicate that the approach outlined here works really well for my (admittedly simplistic) purposes, and I'm really excited about the combined powers of blender, python, OSC and supercollider when it comes to adding generative sounds and music to a movie. Given the tremendous combined power of these mature technologies I'm sure we're just seeing the beginning of the combined possibilities.

Wednesday, August 9, 2017

Automating squiggles in supercollider part II

Problem?

In part I of this explanation we used mathematics to create beautiful squiggles. While this gives ultimate control over the scribbles (in terms of accuracy and repeatability of the squiggles), sometimes you just want to do something simpler (quick'n dirty) or something more creative (too much work to spell out in math formulas). 

Approach?

You can simply draw a squiggle with the mouse, record it and then use it to drive a synth. You can also further manipulate the recorded data to produce variations of the recorded gestures.

Code?

Recording

Part I is about recording a squiggle. A very simple proof of concept is given here. Start the code, then draw a squiggle with the left mouse button on the canvas. Both position and timing information are recorded.
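
As the code is only linked, here is a minimal stand-in for the recorder (window size and data layout are my own choices): drag with the left mouse button to collect (x, y, time) triples.

(
var win = Window("squiggle recorder", Rect(100, 100, 400, 400)).front;
var view = UserView(win, win.view.bounds);
~recording = List.new;
~startTime = 0;
view.mouseDownAction = { |v, x, y|
    ~recording = List.new;
    ~startTime = Main.elapsedTime;
    ~recording.add([x, y, 0]);
};
view.mouseMoveAction = { |v, x, y|
    ~recording.add([x, y, Main.elapsedTime - ~startTime]);
};
)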

Performing

Part II is about performing the recorded squiggle: the recorded information is interpreted and replayed. Nothing stops you from transforming the recorded data first (e.g. to create mirrored, scaled, skewed, ... or non-linearly transformed versions of the recorded data - this might lead to interesting ways to create new music).
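
A matching playback sketch (it assumes a running ~synth with \x and \y controls mapped like MouseX/MouseY, and the 400x400 canvas from the recorder):

(
fork {
    var prev = 0;
    ~recording.do { |point|
        var x, y, t;
        #x, y, t = point;
        (t - prev).wait;
        prev = t;
        ~synth.set(\x, x / 400, \y, y / 400);  // normalize pixels to 0..1
    };
};
)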

Conclusion

This approach obviously is much simpler than mathematically constructing squiggles, but it doesn't offer the same level of control and precision. Which method to use depends entirely on your needs! Finally an easy way to bring Picasso's dog to life :)


Tuesday, August 8, 2017

Automating squiggles in supercollider

Problem?

Supercollider makes it easy to vary parameter values by squiggling the mouse using the UGens MouseX and MouseY. Sometimes I play with these to find that the squiggling itself produces nice results, but then the thought of having to spell out all these x and y values manually makes me sigh. This made me think about how to automate squiggling - not by sending some mouse commands to the window manager, but by using mathematical formulas that describe squiggles.

Approach?

Parametric equations! Enough said. Let's get started building up some intuitions and then applying them to create interesting squiggles. Note: all figures in this article are made with Desmos calculator, a very easy to use online tool to experiment with (amongst other things) parametric equations. Highly recommended for designing your own squiggles.

Note: also see part II of this blog post on a much simpler approach to automating squiggles using a record and playback method. 

Prerequisites?

You will benefit from some basic insights in maths, including vectors and trigonometry (or perhaps you can pick some of it up as you read along). If you want to run the supercollider examples you will also need some basic familiarity with supercollider. The things I will discuss have many applications outside the realm of sound synthesis and algorithmic composition (like in graphical design), so you can also read the blog post to get some insights for usage in other domains.

Ok then... let's go

Here we will discuss parametric equations with a single parameter ("time") that are used to describe 2-dimensional curves. The power of parametric equations comes from the fact that they can specify a motion in x (left-right) and y (up-down) separately, which gives them powers beyond those of simple mathematical functions of the form y = f(x). In parametric equations you have two equations, one for x and one for y, both described in terms of the parameter t. You can think of x(t) as the value of x at time t, and similarly y(t) describes the value of y at time t. By plotting (x(t), y(t)) for each value of the parameter t in a 2d space, we get a curve that represents the squiggle.

In mathematics, the parameter t in the parametric equations is usually rescaled in such a way that the curve is completely described as t varies from 0 to 1. We will come back to this later, when we discuss why this is not necessarily the best choice for usage in a supercollider application.

Building intuitions with points

Suppose we write the following equations:

( x(t), y(t) ) = (0, 0)

These say that no matter what the value of t is, the outcome is always (0,0). If we plot (0, 0) in a 2d space, we get a single point.

Suppose we write:

( x(t), y(t) )  = ( 10, -20 )

Then the point moves to location (10, -20). So far, no surprises. Adding a positive constant to the x component moves the scribble (a point here, really) to the right, adding a negative constant to the x component moves the scribble to the left, and adding positive or negative constants to the y component moves the point up or down respectively.

Building intuitions about lines

Points are boring, right?... let's vary the x position as t progresses. A simple example would be:

( x(t), y(t) ) = ( t, 0.6) for ( 0 <= t <= 1)

As time t increases, so does x (since x = t). Parameter y on the other hand remains at 0.6. As a result we expect a horizontal line to appear at height 0.6. This is shown below for  0 <= t <= 1. (If you are unsure how this line results from the above, try putting a few values of t between 0 and 1 in the equations, and plot the results manually on graph paper.)


Suppose we want to shift this line to the left? Just add a negative constant to x (same as with the point!). Suppose we want to shift this line up? Just add a positive constant to y (same as with the point!). In fact you can always add constants to x and y to shift the curves to wherever you want.

Suppose we want to extend the line (make it longer)? We have a few options. We can either:
  • Change the domain of t (the valid values of t), e.g. choose  0 <= t <= 2 instead of 0 <= t <= 1 as before.
  • Or we can rescale time, that is, we can e.g. replace all occurrences of t in the equation with a new parameter q = t/2 (which implies t = 2*q), so we get ( x(q), y(q) ) = ( 2q, 0.6). If we now still use the same bounds for q as we did for t, i.e. 0 <= q <= 1, we've effectively made time run twice as fast, so the line will have doubled in length. If you wanted to keep the original line length after the replacement with the new parameter, you would also have to update the domain of q as follows: 0 <= t <= 1, so 0 <= 2q <= 1, so 0 <= q <= 0.5.
  • The third option is to stretch the x component by multiplication. You can always stretch the x or y component by multiplying it with a constant.
Mathematically, all of these options for changing the length of a curve yield the same result, but when we realize the line in supercollider, the chosen option will make a difference. If you rescale time, it will also impact the speed with which the squiggle is realized (drawn). This means that rescaling time while also updating the bounds is a way to speed up or slow down the generated squiggles without affecting the curve.

Here's another neat trick: if we replace all occurrences of t in the equations with (tend - t) we can make time run backwards, i.e. the squiggle is then generated backwards, or in this case from right
to left. The symbol tend here means the last value t can get, e.g. if we chose the domain of t as 0 <= t <= 2, then tend would be 2. Note that in supercollider it will be easy to reverse time directly without having to resort to these mathematical tricks to make it look as if time flows backwards even if in reality it flows forwards. 

Horizontal lines are just as boring as points. Can we do tilted lines? Well yes, of course. The key to tilted lines is to realize that in order to draw a tilted line you need to vary both x and y as time progresses. Let's try this with a few equations:

( x(t), y(t) ) = ( t, t) for (0 <= t <= 1). Here x and y vary equally as t progresses. We get a line under 45 degrees:


( x(t), y(t) ) = (t, 2t) for (0 <= t <= 1). Here we scaled y with a factor two. Therefore we expect the height to rise twice as fast as the width:

 ( x(t), y(t) ) = ( -t, -t/2 ). Here we scaled x negatively, and scaled y down negatively by a factor 2. Whenever you multiply the complete x component with -1, the graph is mirrored around the y-axis. If you multiply the y component with -1, the graph is mirrored around the x-axis. We also divided the y component by two, so it should vary more slowly in height than x varies in width as t progresses.

Towards more interesting squiggles

As far as squiggles go, lines are boring too. We need to think about how we scribble on a piece of paper. Here's one possible recipe: we move our arm quickly up and down as we move it slowly from left to right. Moving slowly from left to right is easy, that's the line we've been creating in the previous paragraph: x(t) = t. Do we know functions that move up and down all the time? Sure we do! They are called oscillators, and they come in all sizes and flavors: triangular waves, sawtooth waves or simply sine waves. As sine waves are the easiest ones to describe mathematically, let's concentrate on those first. (In supercollider it will be easy to replace one with the other). Let's e.g. take

( x(t), y(t) ) = ( t, sin(2*(2*pi*t)) ) for ( 0 <= t <= 4)


If you count well, you will spot 2*4 = 8 periods of the squiggle. This is no coincidence: this is how you can vary the density of the squiggle. Now this squiggle is horizontal, but we can apply the intuitions we've built up before to stretch it in all directions, to tilt it or to time-reverse it. E.g. let's tilt it by letting y rise as t rises. This is just a matter of adding the tilt to y(t). That is what's neat about parametric equations: you can superpose effects.

( x(t), y(t) ) = ( t, t + sin(2*(2*pi*t)) ) for ( 0 <= t <= 4)


In blue you see the line ( x(t), y(t) ) = (t, t). In black you see the squiggle (x(t),y(t)) = (t, sin(2*(2*pi*t))). In red you see both combined: (x(t),y(t)) = (t, t + sin(2*(2*pi*t))). Note that in the combined version, x should not move faster than in the separate versions so for this reason the x component is not changed.

What would the tilted squiggle look like if we oscillated in x direction instead of the y direction? Let's try to find out: (x(t), y(t)) = (t + sin(2*(2*pi*t)), t).


In blue you see the line ( x(t), y(t) ) = (t, t). In black you see the squiggle (x(t),y(t)) = (sin(2*(2*pi*t)), t). In red you see the combination (x(t),y(t)) = (t + sin(2*(2*pi*t)), t).

What happens if we squiggle in both x and y direction simultaneously?
(x(t),y(t)) = (t + sin(4*pi*t), t + sin(4*pi*t) ).


 Ahm... what? That is a disappointment. Surely that can't be right? Right!? But actually this is correct: if you squiggle left and right in perfect sync with squiggling up and down, you get a straight line. Things change quite drastically, however, if you don't do it in perfect sync. Let's try some examples where we squiggle in both directions with the same speed, but a different phase:

(x(t),y(t)) = (t + sin(4*pi*t), t + sin(4*pi*t + pi) )


(x(t),y(t)) = (t + sin(4*pi*t), t + sin(4*pi*t + pi/2) )


(x(t),y(t)) = (t + sin(4*pi*t), t + sin(4*pi*t + pi/4) )


What if we squiggled in x and y direction with different frequencies instead? By playing with frequency and phase you can get some really intricate results, e.g.

(x(t),y(t)) = (t + sin(2pi*t - (2*pi/10)), t + sin(3pi*t + pi/3) )

Tapering

With the parametric equations we have a lot of power at our hands to further transform the generated curves. Suppose we want the squiggle to widen as time progresses. We already knew we can widen the squiggle by stretching it (multiplying it) with a constant. But now we want the widening to increase as time progresses, so instead of multiplying with a constant, we want to multiply with something that increases as time increases. Can you think of such a thing? The answer is quite obvious once you know it: we can multiply with time t itself! (Or some scaled and shifted version of it.) Let's try out the idea on our tilted squiggle. We'll add tapering by multiplying the squiggly parts (the sines) with t/4:

(x(t),y(t)) = (t + t/4*sin(4*pi*t), t + t/4*sin(4*pi*t + pi) )



If instead of linear tapering you wanted quadratic tapering, you would multiply with a quadratic tapering function (here I've increased the domain of t to 0 <= t <= 8 to better show the quadratic shape). (x(t), y(t)) = (t + t^2/16*sin(4*pi*t), t + t^2/16*sin(4*pi*t + pi))


Bending squiggles

There's really no limit to what you can do. Suppose I wanted to take the linearly tapered squiggle and bend it around a circle with radius 10. Remember that we tilted the squiggly line by adding the horizontal squiggle to a tilted line. Perhaps we can do the same with a circle? The parametric equation for a circle with radius 10 is (x(t),y(t)) = ( 10*sin(t), 10*cos(t) ) for 0 <= t <= 2pi. As before we can reparametrize the equation to make the bounds compatible with the tapered squiggle. We want to vary t from 0 to 4 instead of from 0 to 2pi. This means that we introduce a new parameter q = 4*t/(2pi), which implies t = (2pi)*q/4. Then indeed 0 <= t <= 2pi, so 0 <= q*(2pi)/4 <= 2pi, so 0 <= q <= 4.
The circle equation therefore becomes (x(q), y(q)) = (10*sin(2pi*q/4), 10*cos(2pi*q/4)). And we can now just replace symbol q with symbol t again (technically this is a different t than before, one that now has compatible bounds) to get (x(t), y(t) ) = (10*sin(2pi*t/4), 10*cos(2pi*t/4)).

Let's see what happens if we add the tapered squiggle to this circle instead (I've increased the number of squiggles to make the effect more obvious):

(x(t),y(t)) = (10*sin(2*pi*t/4) + t/4*sin(16*pi*t), 10*cos(2*pi*t/4) + t/4*sin(16*pi*t + pi) )


Can you think of a way to make a squiggly spiral instead of a squiggly circle now (hint: something with the radius of the circle being time dependent)? Or how would you bend it over a quarter circle instead of a complete circle (hint: something with time rescaling of the circle equation)?

Polar equations

You may think by now that you are an absolute master of everything related to parametric equations, but nothing could be further from the truth. Many types of squiggles exist that we cannot easily construct with the insights presented so far. One such type is the squiggle that involves intricate circular movements (think mandalas). For those types of squiggles, it's usually easier to think in terms of so-called polar equations. Polar equations can easily be translated to parametric equations, so the insights from the previous paragraphs are not lost.

So how are polar equations described? They are described as a variation of radius in function of the angle: radius = r(phi). Such an equation can be translated to a parametric equation as follows: (x, y) = (radius*sin(phi), radius*cos(phi)) (this is the convention used in the rest of this post).

Suppose you have a constant radius 1 for all angles phi. What curve do you get? A circle.
r = 1


To convert r = 1 in polar form into parametric form, we can use the formulas given above:
x = r*sin(phi) = sin(phi)
y = r*cos(phi) = cos(phi)

Since phi is just a parameter that varies e.g. from 0 to 2pi, we can rename it to t, also varying from 0 to 2pi (or we can perform a substitution phi = 2pi*t, so that the range of t now varies between 0 and 1 instead of 0 and 2pi).

How can we make this a squiggly circle? By varying the radius as phi changes. We want the radius to oscillate as phi changes: we can again use sine, triangular or sawtooth waves, e.g.

r = sin(6*phi) + 2


In parametric equations, this would be:
x = r*sin(phi) = (sin(6*phi) + 2)*sin(phi)
y = r*cos(phi) = (sin(6*phi) + 2)*cos(phi)

Or a spiral with r(phi) = 0.1*phi/(2*pi), which in parametric equations becomes
(x(phi), y(phi) ) = (r*sin(phi), r*cos(phi)) = (0.1*phi/(2*pi)*sin(phi), 0.1*phi/(2*pi)*cos(phi))



Or maybe you want to scribble something more chaotic like r = cos(0.95*phi/8) (for 0 <= phi <= 40*pi):


Or something more aesthetic like this rose? r = phi + 2*sin(2*pi*phi) + 4*cos(2*pi*phi)


Bezier curves

(image courtesy of https://plus.maths.org/content/bridges-string-art-and-bezier-curves)

Nothing special here. With everything you know already some equations should suffice. 
  • Bezier curve segment specified with 2 control points (i.e. a line between known points (x1,y1) and (x2, y2)): (x(t), y(t)) = (x1 + t*(x2 - x1), y1 + t*(y2 - y1)) for 0 <= t <= 1
  • Bezier curve segment specified with 3 control points (x1, y1), (x2, y2) and (x3, y3): (x(t), y(t)) = ((1-t)^2*x1 + 2*(1-t)*t*x2 + t^2*x3, (1-t)^2*y1 + 2*(1-t)*t*y2 + t^2*y3)
  • You can concatenate several 3-control Bezier curve segments to form larger curves. You can also create higher order Bezier curve segments using more than 3 control points. The formulas can be found on wikipedia.

Concatenation and periodization of parametric curves

In some environments (like in supercollider Synths) it is difficult to perform if-then-else. The lack of if-then-else makes it hard to concatenate parametric equation segments. Supercollider luckily offers the Select UGen which allows one to fake if-then-else by evaluating all branches and selecting results from only one of them.
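As a tiny illustration of the trick (my own example, not from the sccode listings):

(
{
    var t = LFSaw.ar(0.5).range(0, 2);       // a "time" parameter ramping from 0 to 2
    var seg = Select.ar(t > 1, [t, 2 - t]);  // first segment for t < 1, second one after
    SinOsc.ar(seg.linexp(0, 1, 220, 880)) * 0.1;
}.play;
)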

As an alternative, one can also concatenate the curve segments mathematically. This is explained in the paper "Single equation without inequalities to represent a composite curve" by E. Chicurel-Uziel, available from Elsevier's Computer Aided Geometric Design (worth reading!).

I won't repeat the complete paper here, but the techniques in the paper are directly applicable to our use cases. The basic idea is to define a step function that can be used as a "switch" to turn a given parametric equation on or off in a given interval. This can be combined with retiming of the parametric curve (i.e. substitute all "t" with "t-a" to delay the curve by a time units) to concatenate different curves into one bigger curve. The paper also describes a way to make non-periodic function segments periodic, both for cartesian and polar coordinates.

A full explanation falls outside the scope of this blog post (maybe in a later one?), but if you understood everything so far, the paper itself is not a lot more difficult.

As a supercollider-only alternative, one may also sequence the different curve segments using patterns.

Conclusion

Note that so far we've barely scratched the surface of the possibilities. It's not so difficult to imagine how tons of stuff becomes possible by cleverly combining techniques and by introducing other functions besides polynomials and sines. Feel free to post your best squiggles in the comments section!

With some techniques under our belt, let's now tackle the implementation in supercollider. As stated before, the aim is to replace MouseX and MouseY with pre-programmed squiggles.

Implementation in supercollider

In the previous sections we've always used an abstract parameter t (or "time") and this parameter t always varied from some lower bound (usually 0) to some upper bound (often 1 or 2pi). Where will this parameter come from in supercollider?

The simplest way of creating a parameter that goes from some lower bound to an upper bound is to use a sawtooth wave. If we use only one period of the sawtooth wave, we generate the complete squiggle exactly once. But if we use multiple periods, the same squiggle is generated multiple times as well.

If instead of a sawtooth wave we use a triangular wave, then we generate the squiggle from beginning to end, followed by the same squiggle in reverse (time counts down again as the triangle ramps down).
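
A sketch of both options (the mapping to a sound is arbitrary, just to make t audible):

(
{
    var speed = 0.25;
    var t = LFSaw.kr(speed).range(0, 4);     // sawtooth: the squiggle repeats
    // var t = LFTri.kr(speed).range(0, 4);  // triangle: forwards, then backwards
    var x = t / 4;                           // 0..1, stands in for MouseX
    var y = sin(2 * (2pi * t)) * 0.5 + 0.5;  // 0..1, stands in for MouseY
    SinOsc.ar(x.linexp(0, 1, 110, 880), 0, y * 0.2);
}.play;
)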

Let's start by taking a synthdef that sounds somewhat interesting with a MouseX and a MouseY.
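
The original synthdef is in a screenshot; any mouse-driven patch will do, e.g. this stand-in:

(
x = {
    var freq = MouseX.kr(100, 2000, \exponential);
    var cutoff = MouseY.kr(200, 8000, \exponential);
    RLPF.ar(Saw.ar(freq) * 0.2, cutoff, 0.2).dup;
}.play;
)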


Run it in supercollider and notice how squiggling with the mouse creates some interesting effects.

Now choose a squiggle. I chose r = sin(5*phi) + 2 in polar coordinates (very similar to the 2nd figure in the section on polar coordinates, only 5 lobes instead of 6). Here's the supercollider implementation. It also displays a scope which, if you put it in X/Y mode, will show you the squiggle as it is generated and executed.

By changing "speed" you can traverse the 2d space faster/more slowly, and by changing "rotations" you can traverse the 2d space partially/multiple times (i.e. multiple complete rotations). The "rotations" parameter has no effect on perfectly cyclic squiggles like the flower shape, but it would have an important effect on non-cyclic squiggles, like a spiral.

The only thing that needs to be changed to switch to a different squiggle in polar coordinates is the line that defines "commonpart" and the min and max x and y values of the generated squiggle (and of course speed and rotations can be set to personal taste).

Applying the theory from this post yields us:
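
Since the screenshot isn't reproduced here, a hedged reconstruction (the variable commonpart is named after the original; the ranges and the sound itself are my assumptions):

(
x = {
    var speed = 0.1, rotations = 1;
    var t = LFSaw.kr(speed).range(0, rotations);  // one sweep traverses 'rotations' turns
    var phi = 2pi * t;
    var commonpart = sin(5 * phi) + 2;            // the polar radius r(phi), in 1..3
    var sx = commonpart * sin(phi);               // x = r*sin(phi), in -3..3
    var sy = commonpart * cos(phi);               // y = r*cos(phi), in -3..3
    var freq = sx.linlin(-3, 3, 100, 2000);
    var cutoff = sy.linlin(-3, 3, 200, 8000);
    RLPF.ar(Saw.ar(freq) * 0.2, cutoff, 0.2).dup;
}.play;
)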


However, the following - technically flawed - version actually sounds better... so I will keep it here as well. The mistake, in case you are wondering, is confusing the sin operation with the SinOsc oscillator.

The example below can be downloaded from http://sccode.org/1-57x

And here's another example with the quadratically tapered squiggle (x(t), y(t)) = (t + t^2/16*sin(4*pi*t), t + t^2/16*sin(4*pi*t + pi)):

And of course, nothing stops you from also animating other parameters. There's literally no reason why e.g. you couldn't turn parameter speed into a squiggle as well.

Finally here's a longer example that combines multiple squiggles together and also visualizes them while they are being generated. While implementing this example, it became clear that there are many (MANY!) possible pitfalls you can fall into. But hey... at least the maths never lies.

First the Desmos design:

Next the source code (get it from http://sccode.org/1-57z )

And finally a screenshot of the thing running:




Have fun!