Hi Dylan,
You didn't say if your livecoding system is going to be controlling
synthetic audio, or other stuff (MIDI, lights, robot dancers etc)...
But if you're controlling your own audio system then the obvious choice is
the audio sample clock.. no scope for perceptible event jitter there unless
buffers get dropped. You could run your scripts in the audio thread (SC1),
or queue timestamped events in a "jitter buffer" and dequeue them in your
audio callback after a delay longer than the longest expected timing
jitter... (SC3)
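To make the second option concrete, here's a minimal sketch of that jitter-buffer idea (not code from SuperCollider or PortAudio; the `JitterBuffer` name, the frame-count clock, and the safety-delay figure are my own assumptions for illustration). Events are stamped when scheduled, and the audio callback releases them only once the stamp plus a fixed safety delay has passed on the sample clock, so upstream scheduling jitter can't reorder or smear them:

```python
import heapq

class JitterBuffer:
    """Min-heap of (timestamp_frames, event) pairs, dequeued only after a
    fixed safety delay so scheduling jitter upstream never reorders playback."""

    def __init__(self, safety_delay_frames):
        # Delay must exceed the worst expected scheduling jitter,
        # measured in samples (frames) on the audio clock.
        self.safety_delay = safety_delay_frames
        self.heap = []

    def schedule(self, timestamp_frames, event):
        # Called from the (jittery) control/scripting thread.
        heapq.heappush(self.heap, (timestamp_frames, event))

    def dequeue_due(self, now_frames):
        # Called from the audio callback with the current sample clock.
        # Returns every event whose delayed deadline has passed, in time order.
        due = []
        while self.heap and self.heap[0][0] + self.safety_delay <= now_frames:
            due.append(heapq.heappop(self.heap))
        return due
```

For example, with a safety delay of 2048 frames, an event stamped at frame 0 is held back until the sample clock reaches frame 2048, then released in order with anything else that came due.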
There are a few pretty pictures in this paper which might get you thinking
about timing and audio buffers:
http://www.portaudio.com/docs/portaudio_sync_acmc2003.pdf
Best wishes
Ross.
----- Original Message -----
From: "Dylan McNamee" <dylan_at_aracnet.com>
To: <livecode_at_slab.org>
Sent: Friday, November 17, 2006 4:20 AM
Subject: Re: [livecode] livecode systems architecture questions
> On Nov 16, 2006, at 9:05 AM, Paul Sanders wrote:
>> On 16 Nov 2006, at 16:54, Adrian Ward wrote:
>>
>>> Anyway, funky music is never properly synchronised anyway.
>>
>> I think it is, but not in a way that can be reliably predicted. Although
>> I am sure a sophisticated ruleset could emulate funk to a satisfactory
>> degree for most listeners.
>
> Actually, this is exactly the kind of thing I had in mind, which is why a
> solid jitter-free timing base is important. I can't control the funk if
> I'm on a jittery clock.
>
> Thanks for the input everyone...this is very helpful stuff. (More input
> is welcome, of course!)
>
> dylan
>
>
Received on Mon Nov 20 2006 - 07:00:56 GMT