LivecodingIsNotSynaesthesia

Loosely assembled thoughts, a work in probably perpetual progress by amy.

Interest in synaesthesia, the correlation between light and music, has been around for a long, long time. Color organs were constructed as early as the 19th century and as recently as the latest version of Winamp.

In the early 20th century, abstract or "visual music" filmmakers such as Oskar Fischinger rather deplored "Mickey-Mousing" - animating visuals to exactly mimic the sound. (The studios, however, often forced them to do it anyway. It sold popcorn, I guess.) Left to their own devices, these filmmakers kept their films silent, or devised less direct, more interpretive relationships between sound and image. Image and sound had discernible relationships, but these were complex and metaphorical, not direct and synaesthetic. This idea has stayed with us and been absorbed into mainstream visual culture - think music video editing.

Time-based visual artists - filmmakers and VJs - often feel, as Fischinger et al. did, subordinated to the music. Who wants to Mickey-Mouse? Why should *I* follow *them*? On the other hand, composers often feel subordinated to visual artists - think of film scoring. It seems someone always feels like they're carrying the water...

But leaving visual artists out of it: you can also think of traditional mechanical (musical) performance as synaesthetic - you play the violin, the audience sees your arms and body move, and music comes out corresponding to the visual movements. We're used to this. It's just how the world works, after all: actions are simultaneously seen and heard.

Enter "enter":

The livecoder's dilemma comes when what's moving is not an arm, but an algorithm. Algorithms are, at heart, thoughts, typically performed by hitting the "enter" key. But "enter" doesn't make the music (or the visuals, or whatever). And "enter" can trigger one thing one moment, and something completely different the next.
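
To make that concrete, here is a minimal sketch in Python - hypothetical, and not tied to any particular livecoding system. The performer's visible action (evaluating a definition by hitting "enter") is identical both times, but because evaluation rebinds a name, the same keystroke yields different music at different moments:

 # A hypothetical sketch: "enter" re-evaluates a definition, so the
 # same physical action produces different results at different times.
 
 notes = [60, 62, 64, 67]  # a pattern of MIDI note numbers
 
 def step(beat):
     """First evaluation: an ascending pattern."""
     return notes[beat % len(notes)]
 
 print([step(b) for b in range(8)])
 # -> [60, 62, 64, 67, 60, 62, 64, 67]
 
 # The performer edits the definition and hits "enter" again.
 def step(beat):
     """Second evaluation: same name, same keystroke, reversed pattern."""
     return notes[-(beat % len(notes)) - 1]
 
 print([step(b) for b in range(8)])
 # -> [67, 64, 62, 60, 67, 64, 62, 60]

Nothing in the visible gesture distinguishes the two evaluations; the difference lives entirely in the code's current state.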

So what we have is a different paradigm for performance. There's no longer a one-to-one correspondence between action and reaction. The visualization - i.e. the display of the algorithm - will be a bit more complex. But as discussed above, this isn't the first time this has happened. The trick is to find the balance between complexity and chaos - a point where the relationship is discernible and kinetic, and where non-coding audiences can get the sense of a kinetic algorithm, much as non-musical audiences can get the sense of an intricate performance without seeing the performer's fingers or knowing how the instrument is played.

See also: ThingeeLanguage