Re: [livecode] Rule based intonation

From: Kassen <signal.automatique_at_gmail.com>
Date: Mon, 27 Nov 2006 23:47:36 +0100

On 11/27/06, alex <alex_at_slab.org> wrote:

> Yes true. It seems to be a post-processing step, so needs a bit of
> buffering.


Maybe, depending on the structure of your program, but I think most of
these rules could be incorporated into whatever generates the sounds. For
many of them you could probably already determine whether a rule applies,
and if so in what way, at the moment the note is being generated. That's
quite fortunate, because otherwise the poor acoustic musicians would be in
deep trouble. Acoustic musicians tend to already have an image of the near
future of the piece in their head, which gives them an advantage, but
algorithms could be written to mimic this. One could, for example, generate
the controller data for a whole phrase every ten seconds instead of
generating a note every second.
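
Something like this sketch, for example (Python just for illustration;
the scale, phrase length and numbers are all invented, and the toy
"final lengthening" stands in for a real performance rule):

import random
import time

PHRASE_LEN = 8     # notes per phrase (arbitrary)
BASE_DUR = 0.25    # nominal note duration in seconds
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # C major, MIDI numbers

def generate_phrase():
    # decide all the pitches of the phrase up front...
    return [random.choice(SCALE) for _ in range(PHRASE_LEN)]

def final_lengthening(durations, stretch=1.6):
    # ...so a rule that needs to know where the phrase ends can be
    # applied before a single note has sounded: a toy version of
    # phrase-final lengthening
    durations = list(durations)
    durations[-2] *= (1 + stretch) / 2
    durations[-1] *= stretch
    return durations

for _ in range(4):                       # four phrases, then stop
    pitches = generate_phrase()
    durations = final_lengthening([BASE_DUR] * PHRASE_LEN)
    for pitch, dur in zip(pitches, durations):
        print("note", pitch, "for", round(dur, 3), "s")
        time.sleep(dur)                  # stand-in for real playback

Because the whole phrase exists before its first note sounds, the rule
gets the same kind of lookahead a player has.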



>
> Oh yeah definitely. I think the job is easier for generated music than
> it is with post-processing some MIDI file; you're really at the source
> of the music and have whatever information about where the phrases are
> that there could be. I guess also that you could see applying these
> rules not just as highlighting or 'expressing' phrases but as defining
> where they are in the first place.


Exactly, that's what interests me; I'm not all that interested in
processing MIDI files. To me this field links to what I see as one of the
biggest challenges in livecoding, or for that matter in generated music in
general: it's not that hard to write a set of rules that will generate
notes (or some sort of note-equivalent), but it's quite hard to write
rules that give pleasing results, especially over a longer time. This
research at least gives some pointers to ways of approaching this.


> Also there's no reason why we can't come up with our own performance
> rules that a human wouldn't normally play. Well maybe many here already
> have.


Agreed. It would, for example, be interesting to approach other data sets
the way these people approached (mainly) baroque, classical and jazz. When
hearing interesting sounds or sound patterns in non-musical contexts I've
been trying to find "rules" linked to the elements that I liked, but so
far with little success in the way of things that could be fed to a
compiler. It does get much harder when there is no score to compare things
to, but one interesting approach would be applying techniques from
physical modeling to controller data instead of audio waves.
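
As a rough sketch of that last idea (Python again; the spring constants
and the controller data are invented), one could run a stream of
controller targets through a damped spring, the kind of element you
would find inside a physical model, so that blocky jumps in the raw
values come out as smooth, slightly overshooting gestures:

def spring_controller(targets, dt=0.01, stiffness=40.0, damping=6.0):
    # integrate a damped spring (unit mass) pulled toward each
    # successive target; a simple Euler step is fine at control rate,
    # though it would be too crude at audio rate
    value, velocity = targets[0], 0.0
    out = []
    for target in targets:
        accel = stiffness * (target - value) - damping * velocity
        velocity += accel * dt
        value += velocity * dt
        out.append(value)
    return out

# a blocky sequence of raw controller targets, e.g. scaled MIDI CCs
raw = [0.0] * 100 + [1.0] * 100 + [0.3] * 100
smooth = spring_controller(raw)
print([round(v, 3) for v in smooth[98:108]])   # eases into the jump

The same unit could just as well chase pitch-bend or tempo targets; the
"physics" acts on control data at control rate rather than on the
waveform itself.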


Sticking literally to already-discovered rules doesn't really seem to be
in the spirit of livecoding either...

Kas.
Received on Mon Nov 27 2006 - 22:51:15 GMT
