Re: [livecode] Programming time in livecoding systems

From: alex <alex_at_lurk.org>
Date: Fri, 25 Sep 2009 10:21:35 +0100

Hi Jeff,

Thanks for raising these very interesting issues. I'm supposed to be
giving a talk about live coding languages soonish and really don't
know much about any of them, having just made my own live coding
environments using general purpose languages.

2009/9/25 Jeff Rose <jeff_at_rosejn.net>:
> how about thinking of them as various combinations of:
> * synchronous/asynchronous
> * push/pull.
> * inter/outer specification of time

I think there's also discrete/continuous, and relatedly
striated/smooth. I'm going to be working through Nick's reading list
myself, but would add this one by Bernard Bel, which has some
interesting things to say about the latter.
  http://hal.archives-ouvertes.fr/hal-00134179

The "Time in Indian music" book on Nick's list is very good too,
interesting stuff on meter. Indian music does seem to have an
approach more compatible with live coding, or maybe it's just because
I'm arriving at it with fresh ears...

At the moment I have discrete time, I think synchronous (although I'm
not sure I've understood your definition of that) with a 'pull' model.
Sound events are generated by a regular metronome function, but one
event parameter is 'time offset', so an event can be shifted by a
floating point number of seconds, either into the future or into the
past (or as far back into the past as the system buffer time of 0.2
seconds will allow). I think this goes well with the idea of music
being represented as discrete events, but then performed in a subtle
way in continuous time.
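To give a rough feel for the pull model, here's a hypothetical sketch
(not my actual implementation; the names Event, bufferTime and playAt
are made up for illustration) of a metronome tick pulling events whose
offsets can reach back only as far as the buffer:

```haskell
-- Hypothetical sketch of pull-model scheduling with a per-event time
-- offset; Event, bufferTime and playAt are illustrative names only.

bufferTime :: Double
bufferTime = 0.2  -- assumed 0.2 second system buffer, as described above

data Event = Event { pitch :: Int, timeOffset :: Double }

-- The metronome pulls an event at a tick time; the event's offset
-- shifts it forward or back, but never further back than the buffer.
playAt :: Double -> Event -> Double
playAt tickTime e = tickTime + max (negate bufferTime) (timeOffset e)

main :: IO ()
main = mapM_ (print . playAt 10.0)
  [ Event 60 0.05     -- slightly late: plays at 10.05
  , Event 64 (-0.5) ] -- asks for more than the buffer; clamped to 9.8
```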

> With inner/outer specification, what I mean is that you can either treat
> time as an integrated component of your pitch generating functions, or it
> can be a separate function that is either pushing triggers or pulling notes
> from the pitch generating processes.  So a metronome or some stochastic
> variation of one would be an external time source, while a function that
> schedules its own execution or returns its note duration to specify when the
> next pitch should be generated, is internally specified.

Is this close to Bel's smooth/striated time distinction?

> For now I'm interested in modeling musical processes using the lazy sequence
> mechanisms built into Clojure.

I started off using lazy lists in Haskell; the problem with
representing patterns as a list is that you don't have random access.
If you change the pattern-generating function, you have to start again
from the beginning. So now I instead represent patterns as functions
over (discrete, i.e. integer) time, with higher-order functions to
compose them.
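As a minimal sketch of that representation (my guess at the simplest
version, not the actual Pattern.lhs code), a pattern can be a function
from integer time to events, which gives you random access for free:

```haskell
-- Minimal sketch: a pattern is a function over discrete time, so any
-- tick can be queried directly, without replaying from the beginning.
type Time = Int

newtype Pattern a = Pattern { at :: Time -> [a] }

-- Cycle through a finite list, one element per tick.
cycleP :: [a] -> Pattern a
cycleP xs = Pattern $ \t -> [xs !! (t `mod` length xs)]

-- Higher-order composition: transform every event in a pattern.
mapP :: (a -> b) -> Pattern a -> Pattern b
mapP f (Pattern p) = Pattern (map f . p)

main :: IO ()
main = print (at (mapP (+ 12) (cycleP [60, 64, 67])) 100)
-- querying tick 100 directly, no traversal of ticks 0..99 needed
```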

I gave a talk about it recently, slides are here:
  http://docs.google.com/present/view?id=ah2x4mkf2fx_112gwnffpck

My pattern library is here:
  http://patch-tag.com/r/petrol/snapshot/current/content/pretty/Pattern.lhs

Here's a recent video of me using it, including some time offset
manipulation halfway through, but unfortunately there was a problem
with the screencast so you can't see the command line until it scrolls
up...
  http://vimeo.com/6727278

I'm only just starting to play with time manipulation though; in that
video I'm just applying a sine wave to time offsets. Things like
"offset $ (* 0.03) <$> sine 16" to shift notes back and forward
between -0.03 and 0.03 seconds over 16 beats. Sometimes I combine
that with a steady beat, as in "combine [pure 0, offset $ (* 0.03)
<$> sine 16]", although that seems to sound like either a chorus
effect or a broken scheduler.
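For what it's worth, the shape of those expressions could be sketched
like this (the names sine and wobble are guessed from the examples
above, not the real library API):

```haskell
-- Guessed sketch of a continuous control signal in the spirit of
-- "sine 16": beat position -> value in [-1, 1], with the given period.
sine :: Double -> (Double -> Double)
sine period t = sin (2 * pi * t / period)

-- Scale it to a time offset of +/- 0.03 seconds, as in the example.
wobble :: Double -> Double
wobble = (* 0.03) . sine 16

main :: IO ()
main = print (wobble 4)  -- a quarter of the way through: roughly 0.03
```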

Cheers,

alex

-- 
http://yaxu.org/
Received on Fri Sep 25 2009 - 09:21:58 BST
