Re: [livecode] live programming paper

From: alex <alex_at_slab.org>
Date: Sun, 01 Apr 2007 15:37:46 +0100

On Sun, 2007-04-01 at 15:09 +0200, Julian Rohrhuber wrote:
> In SuperCollider, both patterns and UGens are purely functional
> higher-order constructions. The whole purpose of JITLib is to make it
> possible to impose changes on those structures that map in a
> meaningful way to their extension (calculation process or sound,
> depending on the perspective). Depending on the underlying model, this
> mapping may vary, but the assumption of "immediacy" is not valid when
> the model of time is an abstract / structural one. Pattern proxies
> generate coroutines that can be modified even if they do not return.
> Node proxies abstract from buses and allow synthesis graphs to be
> refactored on the fly, so that parts of them can be exchanged. All
> proxies abstract from evaluation order as far as possible, and provide
> meaningful "empty" values. For me, the problem you describe has, in a
> way, been the source of the idea of live coding.

OK, I've now reread the "Algorithms today" paper[1] and see that you
have been here before with Alberto and Renate.

I guess I didn't see the significance of this on first read, as
feedback.pl has a simple approach to live coding and time. There is one
method that is called once every beat. The method does something
depending on the number of beats since start-up. Different instances of
feedback.pl sync together by delaying their start-up time under
instruction from a central timekeeping process. Code replacements to the
method (and those it calls) are enacted by reinterpreting between beats.

This seems a reasonable approach to me: the code decides what to do by
looking at a clock. Smooth transitions (or otherwise) may then be
handled by the livecoder, just by making lots of small changes towards a
goal rather than big ones.
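
To make that concrete, here's a minimal Haskell sketch of the
clock-driven dispatch (names invented for illustration; feedback.pl
itself is Perl):

import Control.Concurrent (threadDelay)
import Control.Monad (forM_)

-- Decide what to do purely from the number of beats since start-up.
onBeat :: Int -> IO ()
onBeat n
  | n `mod` 4 == 0 = putStrLn ("beat " ++ show n ++ ": kick")
  | otherwise      = putStrLn ("beat " ++ show n ++ ": hat")

-- Call the method once per beat; in feedback.pl, an edit to the body
-- of onBeat would be picked up by reinterpreting between beats.
main :: IO ()
main = forM_ [0..15] $ \n -> do
  onBeat n
  threadDelay 500000  -- 120bpm: half a second per beat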

I built in a way for messages to be passed between feedback.pl
processes, but have never used it since. At first it seemed obvious to
assume that symbolic interactions between musical elements must happen
in the code in order for interesting music to be made. It seems, though,
that these interactions happen in human perception, and there is no need
to make them explicit in the code. (Well, there is no need, but of
course it may still be desirable for the emergent effects and so on that
can result from interacting processes.)

In pure functional programming, though, it seems to make less sense to
have functions that just take a timestamp as input, forgoing the
advantages of, for example, lazy evaluation. I think your answer with
JITLib is to have pure functions that are modified by an imperative
process.
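
As a sketch of that (invented names, nothing like JITLib's actual
interface): a pure function of time held behind a mutable reference,
which an imperative editing process swaps out:

import Data.IORef

type Time   = Double
type Sample = Double

-- Two pure signals, each just a function of time.
sine440, sine880 :: Time -> Sample
sine440 t = sin (2 * pi * 440 * t)
sine880 t = sin (2 * pi * 880 * t)

main :: IO ()
main = do
  proxy <- newIORef sine440   -- stands in for a node proxy
  f0 <- readIORef proxy
  print (f0 0.001)
  writeIORef proxy sine880    -- the imperative edit: swap in a new pure function
  f1 <- readIORef proxy
  print (f1 0.001)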

I think monadic programming could provide a different answer, but I'm
struggling here... I think it would involve a pure functional program
that takes edits as input and produces audio samples as output. The
edits would describe the removal, insertion and replacement of pure
functions within a synthesis graph. Somehow idempotence would be
preserved.
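
Something along these lines, perhaps (the Edit type and graph
representation are just my guesses at a shape this could take):

import qualified Data.Map as Map

type Time   = Double
type Sample = Double
type NodeId = String
type UGen   = Time -> Sample

data Edit = Insert NodeId UGen
          | Remove NodeId
          | Replace NodeId UGen

type Graph = Map.Map NodeId UGen

applyEdit :: Graph -> Edit -> Graph
applyEdit g (Insert  i u) = Map.insert i u g
applyEdit g (Remove  i)   = Map.delete i g
applyEdit g (Replace i u) = Map.insert i u g

-- Entirely pure: fold the edits into a graph, then mix its outputs.
render :: [Edit] -> Time -> Sample
render edits t = sum [ u t | u <- Map.elems graph ]
  where graph = foldl applyEdit Map.empty edits

Keeping render pure like this would mean that replaying the same edit
list always gives the same audio, which is perhaps where the idempotence
would come in.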

The connections between the functions within the synthesis graph are
where the monadic stuff would lie. These connections would handle
modification of the graph (including creating new connections when
inserting a function), as well as holding per-connection state. That
state would include enough historical context for a function, whether
newly modified or not, to be able to decide what to do next.
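
Roughly, each connection might look like this (again just a sketch,
with invented names):

type Sample = Double

data Connection = Connection
  { history :: [Sample]  -- most recent sample first, bounded length
  , depth   :: Int       -- how much context to keep
  }

-- Pass a sample along the connection, recording it in the history.
step :: Connection -> Sample -> (Sample, Connection)
step c x = (x, c { history = take (depth c) (x : history c) })

-- A freshly inserted function could consult the connection's past,
-- e.g. easing in from the previous value rather than jumping.
easeIn :: Connection -> Sample -> Sample
easeIn c new = case history c of
  (prev:_) -> (prev + new) / 2   -- crude crossfade with the last value
  []       -> new

The step function has the shape of a State monad action, which might be
the natural way to thread these Connection values through the graph.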

Maybe some real Haskell programmers can clear up my thinking here...

alex

[1] http://akustik.hfbk.net/publications/AlgorithmsToday.pdf