Re: [livecode] Programming time in livecoding systems

From: Jeff Rose <jeff_at_rosejn.net>
Date: Fri, 25 Sep 2009 10:29:31 +0200

Thanks for the links, Click. It looks like there are a bunch of
interesting things to check out in this bibliography.

What I'm trying to do is build an understanding of the strengths and
weaknesses of each possible way of interacting with time, but I agree
that classifying them by language or system isn't very meaningful. So
for another attempt, how about thinking of them as various combinations
of:

* synchronous/asynchronous
* push/pull
* inner/outer specification of time

It seems that synchrony vs. asynchrony is mostly about programming
style and less about music. I think the asynchronous model is much more
flexible, interesting, and useful in a livecoding system, so I've
already chosen to go that route.

Push vs. pull, on the other hand, interacts with your pitch-generating
functions in an important way. In push mode, where the function
schedules its own execution (whether by sleeping or by scheduling
itself), the function will typically generate a pitch and then pause
until the appropriate time to generate the next pitch. If instead you
have a metronome that is pulling notes from your pitch function, then
the function has to inspect the time and decide whether it wants to
generate a note or a rest. So, when pushing you have the cognitive
overhead of managing time, and when pulling you have the annoyance of
having to generate rest notes.
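
Here's a rough Clojure sketch of the two styles, just to make the
contrast concrete. Everything in it is made up for illustration;
play-note is a stand-in for whatever actually triggers a synth.

  (defn play-note [pitch]
    (println "note:" pitch))

  ;; Push: the function manages its own timing by sleeping
  ;; between notes.
  (defn push-player [pitches durations-ms]
    (doseq [[pitch dur] (map vector pitches durations-ms)]
      (play-note pitch)
      (Thread/sleep dur)))

  ;; Pull: a metronome calls this once per beat, and the function
  ;; has to decide whether this beat gets a note or a rest (nil).
  (defn pull-player [beat]
    (when (even? beat)                ; rest on the odd beats
      (play-note (+ 60 (mod beat 12)))))

  ;; A crude metronome driving the pull player:
  (defn metronome-loop [bpm n-beats]
    (let [ms-per-beat (long (/ 60000 bpm))]
      (doseq [beat (range n-beats)]
        (pull-player beat)
        (Thread/sleep ms-per-beat))))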

With inner/outer specification, what I mean is that you can either
treat time as an integrated component of your pitch-generating
functions, or it can be a separate function that is either pushing
triggers to or pulling notes from the pitch-generating processes. So a
metronome, or some stochastic variation of one, would be an external
time source, while a function that schedules its own execution, or
returns its note duration to specify when the next pitch should be
generated, is internally specified. This is clearly an important
distinction, but one whose ramifications I don't think I fully
understand yet.
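
To make the internal case concrete, here's a sketch where each event
carries its own duration and the player uses that returned duration to
decide when to ask for the next one (again, all names invented):

  ;; Internal time: each event returns the duration until the next
  ;; one, so the generating function owns the timing.
  (defn next-event [beat]
    {:pitch  (+ 60 (mod (* 7 beat) 12))   ; walk the circle of fifths
     :dur-ms (rand-nth [250 500 1000])})

  (defn inner-player [n-events]
    (loop [i 0]
      (when (< i n-events)
        (let [{:keys [pitch dur-ms]} (next-event i)]
          (println "note:" pitch)
          (Thread/sleep dur-ms)
          (recur (inc i))))))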

For now I'm interested in modeling musical processes using the lazy
sequence mechanisms built into Clojure. I'd like to start with a pull
mechanism based on a simple metronome and one or more
sequence-generating functions per voice. Rather than generating a note
just milliseconds before it is to be played, though, I'm interested in
generating a series of notes in advance, so that I can then run event
transformers over the series: for example, adding variations in timing
to affect the groove, generating harmonies or bass lines, adding some
dissonant chaos, or subtracting it and adding some major-chord
intervals to increase consonance. Anyway, this is vaporware for now, so
I'll leave it at that.
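
In case it helps to see what I mean, a vaporware-level sketch of the
idea, with all the names invented: a voice is an infinite lazy seq of
note events, and transformers are just functions mapped over it before
anything gets scheduled.

  ;; A voice: an infinite lazy seq of note events, one per beat.
  (defn arpeggio [root]
    (map (fn [beat interval]
           {:beat beat :pitch (+ root interval) :vel 0.8})
         (range)
         (cycle [0 4 7 12])))

  ;; An event transformer: push every other note late for swing.
  (defn swing [amount events]
    (map #(if (odd? (:beat %))
            (update % :beat + amount)
            %)
         events))

  ;; Another: derive a harmony voice a fifth up.
  (defn harmonize [events]
    (map #(update % :pitch + 7) events))

  ;; Transformers compose lazily; the metronome only realizes
  ;; as many events as it needs:
  (take 8 (swing 0.33 (harmonize (arpeggio 60))))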

If anyone has advice on which models of time are good or bad for
various types of instruments, musical styles, or models of musical
generation, I'd really like to hear what people with more livecoding
experience think about this.

Cheers,
Jeff

Julian Rohrhuber wrote:
> This is an interesting question, though not necessarily one that
> distinguishes between systems. How time is conceptualised is an
> inherent problem in livecoding: once you introduce certain notions of
> time, there is no real time, and this is where it gets interesting.
>
> So I would turn the question back to you: as you are working with
> Clojure, which seems very interesting in terms of timing, what do you
> personally expect from it?
>
>
>> I've been experimenting with different programming models for
>> livecoding, and currently I'm focused on the way time is modeled and
>> passed around in a musical process. I'm wondering if other people on
>> the list might have some thoughts on and experience with different
>> ways of dealing with time in musical programming?
>>
>> Looking at what seem to be the popular "platforms" for musical
>> generation, it seems like there are pretty much four models of time:
>>
>> * ChucK-style synchronous time, where each thread manages its own
>> execution timing using what is basically a sleep mechanism.
>>
>> * Impromptu's asynchronous callbacks, where function calls are
>> scheduled for future execution.
>>
>> * SuperCollider patterns (Pbind & friends) with "managed time", where
>> either a fixed duration or a sequence of durations is used to specify
>> execution timing, but the actual scheduling is done for you by the
>> stream generation machinery.
>>
>> * Max/MSP metronome events, where a timer fires an event to start
>> triggering notes.
>>
>> I'm sure you can mix and match these styles in each system and
>> language, but in my dabbling in each of these worlds it seems like
>> this is the typical way people think of time when using them.
>>
>> So, what I'm wondering is: what are the tradeoffs, advantages, and
>> disadvantages of these various models? For example, in Impromptu you
>> end up scheduling both audio events for musical timing and program
>> events for execution timing, which gives you full control while also
>> making you deal with more complexity. ChucK weaves these two
>> together, so typically people think of them as the same thing, but I
>> think it also makes it harder to change things on the fly. Using
>> something like a metronome lets you ignore time completely and just
>> focus on generating the right notes, but it also diminishes your
>> expressive capabilities.
>>
>> Can these be mixed? Are there certain styles or instruments that are
>> better suited to one model of time than another? Hopefully this is
>> something people can share some insight on.
>>
>> -Jeff
>>
>> P.S. I'm building a livecoding system in Clojure that sits on
>> SuperCollider. It's still in a very experimental stage, but it's
>> available here: http://github.com/rosejn/overtone
>
>