Re: [livecode] livecoding and gestural control

From: Shelly Knotts <shelly.knotts_at_hotmail.co.uk>
Date: Mon, 2 Dec 2013 15:43:56 +0000

Hi Kate,

The gig was a 'blind date', so we didn't meet the dancers beforehand or talk about what we would do. The only instruction to the dancers was - as you deduced - that the name in the corner indicated who should have the Wiimote. Konstantinos was mapping the Wiimote values to sound; my part was not mapped to the Wiimote.

S.

Sent from my Windows Phone
________________________________
From: Kate Sicchio <kate_at_sicchio.com>
Sent: 02/12/2013 15:36
To: livecode_at_toplap.org
Subject: Re: [livecode] livecoding and gestural control

Hi All

I am also interested in Konstantinos and Shelly's piece with the dancers - do you know anything about the score for the dancers? There seems to be a name generated in the corner of the screen, and I am guessing this relates to who has the Wiimote, but the movement seems very open and not live coded. Was the Wiimote the only input feeding into the sound? Just curious if you know more about this.

Thanks!
Kate

--
Kate Sicchio
web: www.sicchio.com
twitter: _at_sicchio
On 1 Dec 2013, at 11:37, Charlie Roberts wrote:
> Nice work Marije! I especially enjoyed the sounds in the six to eight minute range. I had similar thoughts to Konstantinos and hope to hear more about the incorporation of gestural control / devices into live coding practice from anyone on the list. I've seen other performers do it (Sam Aaron jumps to mind) and I'm curious about the motivation for it.
>
> Transparency was mentioned, but this could also be achieved in a collaborative performance, as in Konstantinos's second video. I assume there's something attractive about moving between modalities for performers and am hoping someone can articulate what they find appealing about it.
>
> For what it's worth, I also share the impulse to bring more embodiment into my live coding performances... I'm just not sure I trust the impulse in the absence of a collaborator or collaborators.
>
> Any pointers to papers touching on this would also be appreciated.
>
>
> On Sun, Dec 1, 2013 at 6:12 AM, Konstantinos Vasilakos <k.vasilakos_at_keele.ac.uk> wrote:
>
> On 1 December 2013 13:57, Marije Baalman <nescivi_at_gmail.com> wrote:
> Do you have any footage of your work along the same lines?
>
> Hi, I have done some performances elaborating on the same strategies (live hardcoded mapping). One is an audio-based representation of this method: https://soundcloud.com/konstantinos_p_vasilakos/formations
> Basically, it shows the huge potential for creating structural variation within the musical context by changing the relationships between the controller and the performance environment, and/or by manipulating the sound synthesis itself in order to change the morphology of the sound.
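>
> To give a flavour of the idea, here is a toy sketch in Python (not the code I actually perform with; the OSC addresses, port, and library choice are my assumptions for illustration):
>
>     # pip install python-osc
>     from pythonosc.udp_client import SimpleUDPClient
>
>     # Assumed: a synth engine listening for OSC on localhost:57120.
>     client = SimpleUDPClient("127.0.0.1", 57120)
>
>     # The mapping is just a dict of functions from a normalised
>     # controller value (0.0-1.0) to synthesis parameters. Redefining
>     # an entry at the live-coding prompt changes the structural
>     # relationship between gesture and sound mid-performance.
>     mapping = {
>         "/freq": lambda x: 200 + x * 800,  # linear pitch mapping
>         "/amp":  lambda x: x ** 2,         # squared: finer control at low end
>     }
>
>     def on_controller(x):
>         for address, fn in mapping.items():
>             client.send_message(address, fn(x))
>
>     # Later, live, invert the pitch relationship without stopping the sound:
>     mapping["/freq"] = lambda x: 200 + (1 - x) * 800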
>
> The other is a performance I did with some dancers, jointly with another laptop performer, Shelly Knotts, which you can see here: http://www.youtube.com/watch?v=2Pk1nmIAoQs
> This one was basically about manipulating the signal of the Wiimote and, again, changing the mappings in real time (range specs included).
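>
> The range manipulation itself can be as simple as keeping the in/out ranges of a scaling function as live-editable state (again a toy Python sketch; the value names and numbers are invented):
>
>     def scale(x, in_lo, in_hi, out_lo, out_hi):
>         """Linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi]."""
>         x = min(max(x, in_lo), in_hi)  # clip to the input range first
>         return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)
>
>     # Range specs as mutable state: redefining an entry live changes
>     # how much of the dancer's gesture actually reaches the sound.
>     ranges = {"accel_x": (0.2, 0.8, 100.0, 2000.0)}
>
>     def wii_to_param(name, raw):
>         return scale(raw, *ranges[name])
>
>     # Live: narrow the input window so only the largest gestures register.
>     ranges["accel_x"] = (0.45, 0.55, 100.0, 2000.0)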
>
> There will be thorough documentation of these, and more on manipulating mappings in real time with live coding, in due course as part of my PhD.
>
>
> Thanks
>
> --
> Best
> K.
>
Received on Mon Dec 02 2013 - 15:45:32 GMT