The first live coding performances? Tom DeFanti + Dan Sandin

Back in 1974, Tom DeFanti defended his dissertation on GRASS, an interactive, interpreted programming language he created for defining and manipulating vector graphics. He and Dan Sandin gave a series of performances in the 1970s that used live coding techniques.

Although interactive programming systems such as Logo have been cited as important precursors to what is now considered (by some, and probably by you if you’re reading this post) canonical live coding performance practice, the first performances cited in live coding papers are usually the early work of the Hub and of Ron Kuivila. Yesterday I was speaking with Dan (famous for, among other things, creating the first virtual reality CAVE system), and he told me a little about the performances he and Tom DeFanti gave in the 1970s. Here’s a video of their final performance:

The vector graphics are the result of the previously mentioned programming language RT/1 (originally named GRASS), created by Tom DeFanti for his dissertation. These graphics were then manipulated using modular analog video processors built and operated by Dan Sandin. The software was later used by Larry Cuba to create the vector graphics for the original Star Wars movie (the images of the Death Star shown to the rebel forces before the final attack). RT/1 is an interactive, interpreted language, which was one aspect that made it suitable for live performance; the other was the ease of assigning physical devices to control elements of the software. In this video from 1976 you can see the system explained, including Tom DeFanti live coding vector graphics with the code superimposed on top of the video. And here’s the paper that accompanied the video at the National Computer Conference that year.

Although code is superimposed on top of the video in the example above, Dan told me that this was not the case during performances. And although live coding did occur during the performances, most vector manipulation was done using physical controls connected to the RT/1 system; the 1976 video shows how easy it was to assign physical controls to elements of the software. Dan explained that performances often consisted of transitioning between presets on the RT/1, triggered via connected buttons; these presets were then processed and manipulated by both Tom and Dan in real time. The polished nature of the performances was achieved through practice; many hours (Dan estimated ~60 before their first performance) were spent determining what the presets were, how transitions would occur, and what types of manipulations would be performed.

Inspiring work! My thanks to Dan for taking the time to tell me about it, and for providing the performance video.

– Charlie
