Thoughts On Live Coding As A Session Musician (2 Of 3)

Hello again!

I began writing this second post by reviewing the first in the series and reflecting on my thoughts at that time. Half a dozen (give or take) significant performance events have gone by since then (performing with ethno-performer Saydyko Fedorova as UDAGAN), motivating almost continuous creative development through composing and rehearsing tailor-made repertoires for almost every individual event. I’ve had a lot to think about! Hopefully this article won’t read as too introverted: while the initial article was more focussed on presenting a theoretical analogy for live coding performance, this one leans much more heavily on observations drawn from experience.

Before I dive into my stream of thought, a quick note: I’ve decided to push back my thoughts on machine learning methodology in the creative process to the final blog post of this series, the reasoning being that I have a lot of observations on the developmental creative process to make in the meantime.

Diving into my observations, an immediate thought sparked by the introductory paragraph concerns the logic of curating an entirely new repertoire for each concert. I feel that this is a sign that I’m still very much in the early days of exploring live coding composition and performance. I also feel it is a good omen for the future of the artform, as it reflects the vast scope of possibility: lifetimes can be spent developing expertise, sensitivity, breadth of knowledge and nuance of expression on any given musical instrument, and the instrument of code is one of them. Roughly three years into exploring live coding, I feel that I’ve covered some significant ground, while still having a GREAT MANY personal goals as a live coding performer yet to fulfil. This is more or less how I felt about session musicianship at the same stage of progress. Years later, I still feel the same as a session musician, and I very much hope I will as a live coding performer too.

A significant development for me throughout the concerts of 2019 has been the incremental declaration of a great deal of custom syntax for use while performing and composing. To put this in terms of my perspective as a player of guitar instruments: performing with base TidalCycles is analogous to performing in standard tuning, highly versatile and adaptable, and flexible enough to accommodate a wide range of musical contexts without requiring extensive advance preparation. The personalisations I have built up over this year, on the other hand, are like performing in altered or open tunings: highly effective, with an extended ‘inner’ range of possibility inside a comparatively smaller ‘outer’ scope of musical context (expandable through advance preparation, of course).

Without going into technical detail, the most significant modifications to my tidal workflow have been:

  1. Adding a staging layer between pattern declaration and execution. This allows multiple changes to be staged while a previously running section sounds, then all the updated parts to be executed simultaneously through the one-shot use of a specifically purposed code block. It also allows other ‘pattern metadata’, such as velocity levels and key signatures, to be ‘hot swapped’ at execution time, facilitating dramatic rises and falls in dynamics or potentially breathtaking modulations in key.
  2. ‘Abstracting the composition’: extracting the structural pattern data which holds a piece’s key compositional attributes (including the various melody and harmony elements that occur throughout its form and their associated section designations) and reinserting it in the pattern execution block. This means that the patterns of a piece are themselves nothing more than ornamentation or variation on a central theme. It gives a composer a lot of power, as it allows entire compositions to be swapped in and out, essentially letting the coder play conductor and ‘swap out’ the orchestra. Imagine doing that with real orchestras!
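To make the staging idea concrete, here is a minimal sketch in plain Haskell, independent of TidalCycles and of my actual BootTidal.hs (the names `Stage`, `stagePart` and `executeStaged` are illustrative only): edits accumulate in a buffer while the running parts keep sounding, and a single one-shot call makes them all live together.

```haskell
import qualified Data.Map.Strict as Map
import Data.IORef

type Name = String
type Code = String  -- stands in for a Tidal pattern in this sketch

-- What is currently sounding, plus edits waiting to go live.
data Stage = Stage
  { running :: Map.Map Name Code
  , staged  :: Map.Map Name Code
  }

newStage :: IO (IORef Stage)
newStage = newIORef (Stage Map.empty Map.empty)

-- Queue an updated part without affecting what is playing.
stagePart :: IORef Stage -> Name -> Code -> IO ()
stagePart ref name code =
  modifyIORef' ref (\s -> s { staged = Map.insert name code (staged s) })

-- The one-shot execution block: all staged parts go live simultaneously
-- (staged entries override running ones, then the buffer is cleared).
executeStaged :: IORef Stage -> IO (Map.Map Name Code)
executeStaged ref = do
  modifyIORef' ref (\s -> s { running = Map.union (staged s) (running s)
                            , staged  = Map.empty })
  running <$> readIORef ref
```

The same mechanism extends naturally to the ‘pattern metadata’ mentioned above: velocity levels or a key signature can be staged alongside the patterns and applied in the same atomic swap.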

This functionality is mostly declared in my (these days quite messy) BootTidal.hs file, with certain musical functions, such as key signature handling, depending on typeclasses defined in the MusicData.hs module. My code repository for performances is available online, though it is more of a personal utility repo that I use to keep my performance codebase mobile, and may not be very readable.
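For a flavour of what typeclass-driven key-signature handling can look like, here is a hypothetical sketch in the spirit of MusicData.hs (the names `ToPitch`, `Key` and `Degree` are my illustration here, not the actual module): patterns are written as scale degrees, and a typeclass resolves them to pitches in whatever key is current at execution time.

```haskell
-- Anything that can be resolved to a MIDI pitch within a key.
class ToPitch a where
  toPitch :: Key -> a -> Int

-- A key: tonic as a MIDI note, mode as semitone offsets from the tonic.
data Key = Key { tonic :: Int, mode :: [Int] }

majorScale :: [Int]
majorScale = [0, 2, 4, 5, 7, 9, 11]

-- Integers interpreted as 0-based scale degrees.
newtype Degree = Degree Int

instance ToPitch Degree where
  toPitch (Key t m) (Degree d) =
    let n = length m
        (oct, step) = d `divMod` n  -- wrap degrees into octaves
    in t + 12 * oct + m !! step
```

Hot-swapping the key at execution time then amounts to re-evaluating the same degree patterns against a different `Key` value, which is what makes the modulations described above a one-line change.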

Moving beyond simply observing my personal growth as a live coding performer, it is possible to revisit the ideas discussed in the opening article of this series with additional perspective. In that article, I described a process of freely exploring and expanding a codebase, then pushing back the boundary between precompiled code and code produced in performance, with an inclination for showmanship.

Throughout the first half of the year, my performances were much more inclined to feature a lot of prepared code that I then modified in real time to improvise and/or perform structured compositions (an example of ‘pseudo-improvisation’ in this style has been documented). This was a concession to myself at the time in order to facilitate a higher creative output of material: reproducing more code in real time would have required me to stop writing new material and focus inwards on what already existed.

Throughout the year, another counterpoint to the ‘reproduce as much as possible to impress the audience’ perspective I initially took has begun to surface: performing with additional instruments whilst live coding. In recent performances I have begun experimenting with bringing modified guitar instruments along and performing instrumentally alongside the code in real time. By using much more ‘static’ code, executed more like a script with specific opportunities for improvisation ‘baked in’, I’ve found that I’m able to handle both the music code (TidalCycles) and the visuals (Hydra), and also perform on my instrument in addition! This is a different (and, in my opinion, more traditional) form of showmanship that could appeal more or less to an audience depending on the performance setting. Before I feel ready to make any conclusive statements about the effectiveness of one style of performance over another, I feel that a dozen or so more performances to reflect on are necessary (I have archived close to 100% of our past performances, so we have a lot of past work to draw on!). In reality, and quite logically, a balanced spectrum that presents both ends of the scale throughout a performance is most likely the ideal case.

Finally, reflecting on my goal to reproduce as close to 100% of the code for a performance in real time: this is a style of performance that I passionately hope to achieve. I do feel, however, that it is beyond the realms of possibility at the developmental stage I currently find myself at. In order to focus inwards with enough introversion on one specific piece of music to achieve the level of spectacular live musicianship featured in my imagination, I would have to forfeit all forward motion into exploring coding style for composition, and the creation of new material would stop. I’m simply enjoying those areas of the artform too much these days to consider that an option! I will say, however, that within my lifetime I certainly intend to present such a ‘live coding recital’ of a perfected masterpiece, as an artistic challenge. From a conceptual perspective, I feel that a live coding interpretation of an orchestral work reproduced in real time would be a brilliant statement. Whether this would be feasible is another matter; a piece may have to be composed specifically for the artform.

I think I can leave it there for now.

Please leave a comment if you have any thoughts, questions, or disagreements with anything written here. I’ll be thinking about what to put in part 3!

Oscar South