Re: [livecode] live coding practice

From: alex <alex_at_slab.org>
Date: Wed, 10 Jan 2007 23:04:00 +0000

On Wed, 2007-01-10 at 15:13 +0000, Nick Collins wrote:
> Nor do I - music works on multiple interacting timescales. I just said that
> one suspicion might be that the instrumental musician is more able to
> attend to individual notes, whilst a live coder dips into more statistical
> aggregates of activity without following the note by note progression.

Right, got you. We are of course not dealing with sequences of notes
directly but with some higher-order representations, and we can show this
is a straight trade-off: less note-by-note control, more control over
structures of notes. But again, as Pressing suggests, humans might be
doing this internally anyway.
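
To make that trade concrete, here's a throwaway Haskell sketch (none of
this is real livecoding code; all the names are invented for the
example):

    type Pitch = Int

    -- a concrete, note-by-note view of a phrase
    phrase :: [Pitch]
    phrase = [60, 62, 64, 67]

    -- structural operations: each edit here touches many notes at once
    transposeBy :: Int -> [Pitch] -> [Pitch]
    transposeBy n = map (+ n)

    everyNth :: Int -> [a] -> [a]
    everyNth n xs = [x | (i, x) <- zip [0 ..] xs, i `mod` n == 0]

    -- the line a live coder actually edits: changing the "7" or the "2"
    -- alters the whole stream, not a single note
    result :: [Pitch]
    result = transposeBy 7 (everyNth 2 (take 16 (cycle phrase)))

One edit to that last line reshapes many notes at once, which is the
"more control over structures of notes" end of the trade.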

As you say, though, this is untested. If nothing else, it does seem like
a good model for livecoding, and whether it matches human cognition or
not, we can still appreciate the results as music.

> I'm sorry I didn't comment on Jeff's model before when you brought it up
> some months back. I'm still waiting to be allowed to put my PhD online, and
> I say a few more things about it there, but whilst I respect Jeff's work
> very much (and he was a very learned researcher) this particular ostensibly
> reasonable mathematical model needs a lot more work, and a lot more
> evidence/experimental corroboration, to allow it to be applied in any
> specific situation.

Yes true, although I'm not clear on whether you mean in terms of
applying the model to software or matching it to human improvisers.

> Not sure we are disagreeing, and I might pull back before getting into all
> sorts of different gestures. I'm pretty certain, however, that the physical
> scheduling mechanisms are different - witness the reaction times. And live
> coding in the sense I usually mean it (which may be different to what you
> intend) is much more Wanderley and Orio (2002)'s score level than note
> level.

I quote descriptions of those classifications from this paper:

1. Note-level control, or musical instrument manipulation
   (performer-instrument interaction), i.e., the real-time gestural
   control of sound synthesis parameters, which may affect basic sound
   features as pitch, loudness and timbre.

2. Score-level control, for instance, a conductor's baton used to
   control features to be applied to a previously defined—possibly
   computer generated—sequence.

To give a little more context, the names of the other classifications
they give are

3. Sound processing control, or post-production activities
4. Contexts related to traditional HCI, such as drag and drop, scrubbing
5. Interaction in multimedia installations

... later they are clear that some things won't easily fit their
classification. I think we agree that livecoding is one of them.
However, livecoding commonly involves "musical instrument manipulation"
on a startlingly fundamental level. Also by "score level control" they
appear to mean controlling a pre-written score. While some attempts
(including many of my own) at livecoding amount to this, it would not
seem to be anyone's aim.

> > Isn't programming all about encapsulation?
>
> Sorry, I used encapsulate as a pun then. I do assume there are cerebral
> methods to hold certain algorithmic gestures, which doesn't mean these
> algorithms can be spilled out at note speed unless you precoded them.

"Pre-coding", if I understand what you mean by that, is not cheating.
It's just reflecting upon your way of working, and removing duplication
of effort. It's giving that thing you do a name and some parameters, so
that you can do it more quickly, just by typing in that name. It's
spotting the same pattern of working in other areas of work and
generalising the function so you can use it there, too.
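
In Haskell terms (a toy example; the names and parameters are just made
up for illustration), pre-coding is nothing more than this:

    -- the gesture I kept typing out longhand, e.g.
    --   map (\i -> 60 + i * 4) [0 .. 3]
    -- gets a name and some parameters, so recalling it is just typing "arp"
    arp :: Int -> Int -> Int -> [Int]
    arp root step len = map (\i -> root + i * step) [0 .. len - 1]

    -- spotting the same pattern of working elsewhere, generalise it:
    -- any interval shape, not just equal steps
    arpWith :: Int -> [Int] -> [Int]
    arpWith root intervals = map (+ root) intervals

So arp 60 4 4 gives the old hand-typed gesture back, and arpWith 60
[0, 3, 7] reuses the same idea in a place the original never reached.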

> I guess it's interesting to imagine a live coding mental million armed
> octopus, holding a maze and mire of algorithms in its repertoire, waiting
> to fire them off in response to the slightest and quickest of provocations.

:) :) :)

> But this would really be like an interactive music system, even where human
> guided, rather than live coding in my favoured sense, which involves
> fundamental rewrite engagement with algorithms on-the-fly.

Sure, but I'm not talking about giving a note-generating algorithm a
name, and then typing in the name of that algorithm in order to generate
notes of that type. This, I suppose, is how I did things before the
Changing Grammars conference, although I didn't have an octopus, just a
bash shell :)

Instead I'm advocating reflecting upon the ways that we make note
generators, and building up a library of functions that allow us to do
so. If done right, this should allow us to concentrate exactly on these
fundamentals rather than get bogged down in typing in the same old
constructs and patterns.
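
Something like the following, though again this is only a sketch and
every name in it is invented for the example:

    -- a generator is just an infinite stream of pitches here
    type Gen = [Int]

    fromCycle :: [Int] -> Gen
    fromCycle = cycle

    interleave :: Gen -> Gen -> Gen
    interleave (x:xs) (y:ys) = x : y : interleave xs ys
    interleave _      _      = []

    shift :: Int -> Gen -> Gen
    shift n = map (+ n)

    -- with the plumbing named once, a new generator is a one-liner,
    -- and the performance edit happens at this level
    lead :: Gen
    lead = shift 12 (interleave (fromCycle [0, 3, 7]) (fromCycle [12, 10]))

The library captures how generators get built, so in performance you
edit that last line rather than retyping the plumbing underneath it.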

This is, by the way, the enthused Haskell beginner programmer in me
speaking...

Anyway, sorry if I appear argumentative today; I'm just thrashing out
ideas.

alex
Received on Wed Jan 10 2007 - 23:04:42 GMT
