hmm, so basically it's reverse beatboxing!? a computer tries to sound
like a human trying to sound like some electronic music... but then again
it's controlled by a human who types the sounds the computer tries to make
that sound like a human trying to sound like some electronic music, and...
(perhaps i should not write emails after midnight... )
well, muy cool!
imagine livecoding meets standup comedy: "a computer walks into a bar
and.... badoom ching!"
-amy
On Tue, 14 Aug 2007, alex wrote:
a> On Tue, 2007-08-14 at 12:10 -0700, Amy Alexander wrote:
a> > very cool! i've posted a comment but don't see it appear on the blog. does
a> > this blog wait for moderation or did it just send my comment straight
a> > to the /dev/null page?
a>
a> They're held for moderation... Sorry, I didn't realise the software
a> didn't make this clear to the commenter. I'm going to switch to
a> WordPress soon, I think.
a>
a> > this is really cool! can you explain a bit more for relative laymen how it
a> > works? it might seem to the uninitiated at first to be a speech synth +
a> > vocoder, but it sounds like there's something else going on.
a> > in any case, i think it's an especially apropos + entertaining use of the
a> > visual side of livecoding.
a>
a> Well it is like a really broken speech synth.
a>
a> It seems in this area there's an important balance to be struck. On one
a> hand you don't necessarily want to make music with a speech synth,
a> because it's too much like a human voice: it's difficult to stop
a> yourself searching for meaning and just listen to the sound.
a>
a> On the other hand I want the ease of composing sounds with text, where I
a> can easily play around with words, having some idea of what a word will
a> sound like. Also the results are a bit like speech, so hopefully a
a> listener can quickly get used to how the sounds are constructed and
a> relate to each other, because these relationships are similar to human
a> words/speech.
a>
a> I guess it's also a bit like the idea of an 'uncanny valley' in
a> robotics. Broken speech synthesis sounds nice, but if it's more like
a> human speech, it just sounds rubbish in comparison, or maybe even
a> menacing.
a>
a> On the technical side, the Karplus-Strong algorithm is just a delay
a> loop with a low-pass filter in the feedback path. You put some white
a> noise into the delay loop; it feeds back on itself, but because of the
a> filter it quickly smooths out (rather than making the usual nasty
a> feedback whistles). This acts and sounds much like a real plucked
a> string, which is why it's called physical modelling synthesis.
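The delay-loop description above maps almost line-for-line onto code. Here's a minimal Python sketch of Karplus-Strong with the 'blend' probability included; this is an illustration of the idea rather than Alex's actual implementation, and the two-point average is the textbook filter choice:

```python
import random

def karplus_strong(delay_len, n_samples, blend=1.0, seed=0):
    """Karplus-Strong: a noise-filled delay loop smoothed by feedback.

    delay_len controls the pitch (sample_rate / delay_len ~= frequency);
    blend is the probability of keeping the sample's sign, so blend=1.0
    gives a plucked string and blend around 0.5 sounds more drum-like.
    """
    rng = random.Random(seed)
    # Fill the delay line with white noise (the "pluck").
    buf = [rng.uniform(-1.0, 1.0) for _ in range(delay_len)]
    out = []
    for _ in range(n_samples):
        # Two-point average: the low-pass filter that smooths the loop.
        new = 0.5 * (buf[0] + buf[1])
        # With probability (1 - blend), flip the sign: the "drum" variant.
        if rng.random() > blend:
            new = -new
        out.append(buf[0])
        buf = buf[1:] + [new]
    return out

samples = karplus_strong(delay_len=100, n_samples=4000)
print(len(samples))  # 4000
```

Because the averaging filter removes a little energy on every pass around the loop, the noise burst settles into a decaying, pitched tone, with no oscillator anywhere in sight.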
a>
a> Once I'd made that, I took two parameters: the length of the delay
a> loop, and the probability 'blend' value that controls how 'drumlike'
a> it sounds. I picked a pair of values for these parameters for each
a> consonant in the English alphabet, my aim being to find a good range
a> of sounds that are a bit like the letters I'm assigning them to, so
a> fricatives come out harsher and more percussive than more open sounds.
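The letter-to-parameter lookup could be as simple as a table. The (delay_len, blend) values below are made-up placeholders just to show the shape of the idea; the pairs Alex actually chose by ear aren't given here:

```python
# Hypothetical (delay_len, blend) pairs per consonant. Short delays and
# low blend give harsh, drum-like hits; long delays and high blend give
# more open, pitched, string-like sounds.
CONSONANTS = {
    's': (40, 0.3),   # fricative: short and noisy
    'f': (60, 0.4),
    'b': (200, 0.9),  # plosive-ish: longer and more pitched
    'm': (300, 1.0),  # open/voiced: pure plucked-string decay
}

def consonant_params(letter):
    # Fall back to a neutral pair for letters without an entry.
    return CONSONANTS.get(letter, (150, 0.8))

print(consonant_params('s'))  # (40, 0.3)
```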
a>
a> For the vowels I'm just applying a formant filter which really does make
a> it sound like human vowels.
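A formant filter can be sketched as a pair of resonant band-pass filters centred on a vowel's formant frequencies. The 700/1220 Hz pair is a rough textbook 'ah', and the biquad coefficients follow the standard audio-EQ cookbook; none of this is Alex's actual code:

```python
import math

def bandpass(signal, freq, q, sample_rate=44100):
    """Biquad band-pass (audio-EQ cookbook) emphasising one formant."""
    w = 2 * math.pi * freq / sample_rate
    alpha = math.sin(w) / (2 * q)
    a0 = 1 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]          # feed-forward coefficients
    a = [-2 * math.cos(w) / a0, (1 - alpha) / a0]  # feedback coefficients
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def formant_filter(signal, formants=(700, 1220), q=8.0):
    # Sum of band-passes, one per formant, approximates a vowel colour.
    bands = [bandpass(signal, f, q) for f in formants]
    return [sum(vals) for vals in zip(*bands)]
```

Run the Karplus-Strong output through `formant_filter` and the pitched noise takes on a recognisably vowel-like colour, which is all a formant filter really does.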
a>
a> What makes the interesting sounds, though, are 'articulations': I'm
a> not switching between the parameter values, but moving between them
a> quickly, creating diphthong-like effects.
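The articulation idea, gliding between two parameter pairs instead of jumping, could be sketched as a linear interpolation over a short control window (again just an illustrative sketch, not the original implementation):

```python
def articulate(params_a, params_b, steps):
    """Glide from one (delay_len, blend) pair to another, giving a
    diphthong-like transition instead of a hard switch."""
    (d0, b0), (d1, b1) = params_a, params_b
    out = []
    for i in range(steps):
        t = i / (steps - 1) if steps > 1 else 1.0
        # Delay lengths stay integers; blend stays a float probability.
        out.append((round(d0 + t * (d1 - d0)), b0 + t * (b1 - b0)))
    return out

# Glide from an 's'-like sound to a 'b'-like sound over 5 control steps.
print(articulate((40, 0.3), (200, 0.9), 5))
```

Each intermediate pair can be handed straight to the synth, so the sound sweeps through the in-between timbres the way a voice slides between vowels.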
a>
a>
a> alex
a>
a>
Received on Wed Aug 15 2007 - 07:50:09 BST