Hi Justin,
On 30/09/15 05:02, Justin Northrop wrote:
> Do you have a comprehensive description of the language aside from
> the implementation? Particularly, I'd like to see a list of what
> each instruction does.
All the instructions are listed here, with ASCII art descriptions of the
graph operations:
https://gitlab.com/flotsam/flotsam/blob/master/daisy/notes.txt
Did you get the video working? I can look for the original and re-encode
it if it helps (I'm also on linux and tend to keep flash turned off).
> I just looked at an interactive tutorial on Petri Nets, and don't see
> how this could correspond to a deterministic language; in the
> tutorial, any transition which has all prerequisites satisfied can be
> taken, and I was able to select which transition happened next.
> In contrast, in a programming language, one wants there always to be
> one next step to occur, so that a program is deterministic. Here's
> the interactive tutorial that I used:
>
> https://www.informatik.uni-hamburg.de/TGI/PetriNets/introductions/aalst/
I think in the case of these examples, the choice of transition can be
considered to be user input, but it's been a while since I looked at
this stuff :)
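To make that concrete, here's a minimal Petri net sketch in Python (all
the names are my own invention, not from any particular tool): two
transitions compete for the same token, so the net itself is
nondeterministic, and an external choice - user input, say - resolves
which one fires.

```python
# Minimal Petri net sketch: places hold token counts; a transition
# is enabled when every input place holds a token, and firing moves
# tokens along its arcs.

def enabled(marking, transition):
    """A transition is enabled when all its input places hold a token."""
    inputs, _ = transition
    return all(marking[p] >= 1 for p in inputs)

def fire(marking, transition):
    """Consume one token from each input place, add one to each output."""
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] += 1
    return m

# Both transitions want the single token in place "a": the net alone
# doesn't decide, so the environment (the user) picks which fires.
transitions = {
    "t1": (["a"], ["b"]),
    "t2": (["a"], ["c"]),
}
marking = {"a": 1, "b": 0, "c": 0}

choices = [name for name, t in transitions.items() if enabled(marking, t)]
# choices == ["t1", "t2"]: both enabled, an external choice resolves it
marking = fire(marking, transitions["t1"])
# marking == {"a": 0, "b": 1, "c": 0}
```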
> TanScript seems to be the same basic idea, in that there is a graph
> of instructions, and there are 'heads' that walk along it. In
> TanScript, there can only be one head on any program. When TanScript
> is used only to encode music, it is very simple, each instruction
> simply playing a sound, or branching. Re-usable function/action
> definitions allow for more interesting compositions. Recursion is
> also an option.
There is a major difference in that in daisy the tokens are instructions
and the graph itself is a substrate that they pass through, and
modify/create. The instructions are executed when they meet with
"activate" tokens - there are also 'typed' connections that only accept
instruction/activation tokens. I guess it's somewhere between a von
Neumann universal constructor and a Petri net.
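A toy sketch of that execution model as I've described it, in Python for
illustration (daisy itself is in Racket, and all the names here are
mine): the graph is a passive substrate, tokens carrying instructions
sit on its nodes, and an instruction only runs when it meets an
"activate" token at the same node - and it can modify the graph itself.

```python
class Node:
    """A node in the substrate graph that tokens pass through."""
    def __init__(self, name):
        self.name = name
        self.edges = []    # connections to other nodes
        self.tokens = []   # (kind, payload) pairs sitting here

def step(nodes, log):
    """Run any instruction token that shares a node with an activate token."""
    for node in nodes:
        kinds = [t[0] for t in node.tokens]
        if "instr" in kinds and "activate" in kinds:
            instr = next(t for t in node.tokens if t[0] == "instr")
            instr[1](node, log)  # instructions may rewrite the graph
            node.tokens = [t for t in node.tokens if t is not instr]

# Example instruction: grow a new branch from the node it sits on.
def sprout(node, log):
    node.edges.append(Node(node.name + "'"))
    log.append("sprout at " + node.name)

a = Node("a")
a.tokens = [("instr", sprout), ("activate", None)]
log = []
step([a], log)
# log == ["sprout at a"], and node "a" has grown one new edge
```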
> Aside from music, TanScript is also a fairly general-purpose language
> used for manipulating graph-structured data. A stack frame of a
> TanScript program has three references: one to the head, and two more
> to nodes of data that the program executes upon. The primitive
> instructions cause these latter two references to 'walk' around on
> the data, modifying and conditioning on it as they go.
Can TanScript also create new branches or connections in the graph? I
think once a language's code and data are isomorphic like this (as
Scheme's are, but not many other languages' afaik) it becomes something
very interesting.
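The stack-frame model Justin describes could be sketched roughly like
this (a hypothetical Python illustration - none of these names or
operations come from TanScript itself): a frame keeps a program head
plus two data cursors, and primitive instructions walk the cursors over
the data graph, reading and writing as they go.

```python
# A small graph of data: a root with two children.
data = {"root": {"left": {"val": 1}, "right": {"val": 2}}}

# The frame's three references: the head into the program,
# and two cursors (x, y) into the data.
frame = {"head": 0, "x": data["root"], "y": data["root"]}

program = [
    ("walk", "x", "left"),   # move cursor x down the "left" edge
    ("walk", "y", "right"),  # move cursor y down the "right" edge
    ("copy", "x", "y"),      # write x's value into y's node
]

while frame["head"] < len(program):
    op = program[frame["head"]]
    if op[0] == "walk":
        frame[op[1]] = frame[op[1]][op[2]]
    elif op[0] == "copy":
        frame[op[2]]["val"] = frame[op[1]]["val"]
    frame["head"] += 1

# data["root"]["right"]["val"] is now 1: the program modified the
# data graph via its walking cursors.
```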
> Musical instructions can be combined with non-musical programs in
> order to sonify them, making blindfolded debugging feasible.
Very nice.
> ---
>
> Do you know of any other languages similar to Daisy Chain and
> Tanscript?
Other than Petri nets (and there are some interesting specialised cases
of them), not really - but that mention of von Neumann got me thinking
that there are probably similarities with self-replicating cellular
automata.
> What dialect of Lisp did you implement Daisy Chain in, and how do
> you recommend making sure the notes play at the right time, with no
> delay between timer events and audio? A friend and I are about to
> re-implement TanScript in Pharo (a Smalltalk development
> environment). (Letting you know in case you have any advice, having
> already done something similar in Lisp.)
Racket, and it uses the same synth software I use for other performances
with Alex (fluxa, written in C++) - the synchronisation of audio and
video (and across different laptops too) is a huge, but interesting,
problem!
What I tend to do is have several different parallel ideas of "time"
going on - a logical time, which increments perfectly with the beat and
runs slightly ahead of everything else. The animation watches this and
renders events when the frame-based time catches up with it; it also
interpolates animation based on this, since it knows the future.
The audio gets timestamped messages from the logical time part, and
schedules them to play using a timer based on the sample rate of the
audio device.
Splitting it up like this generally seems to work OK. There can be some
latency if everything needs to catch up with the logical time suddenly
changing, but that can be kept under control to some extent (unless
you're playing with someone else who keeps changing the tempo :)
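The logical/frame/audio split above could be sketched like this (a
rough Python illustration with invented names - fluxa itself works in
C++ against the device's sample clock): a logical clock runs ahead on
the beat, events are timestamped against it, and the frame loop only
fires an event once frame time has caught up with its logical time.

```python
import heapq

class LogicalClock:
    """Increments perfectly with the beat, ahead of everything else."""
    def __init__(self, bpm):
        self.beat_len = 60.0 / bpm
        self.t = 0.0
    def tick(self):
        self.t += self.beat_len
        return self.t

def schedule(queue, timestamp, event):
    """Timestamped messages go into a priority queue, soonest first."""
    heapq.heappush(queue, (timestamp, event))

def render_due(queue, frame_time):
    """Fire only the events whose logical time frame time has reached."""
    fired = []
    while queue and queue[0][0] <= frame_time:
        fired.append(heapq.heappop(queue)[1])
    return fired

clock = LogicalClock(bpm=120)              # a beat every 0.5 s
queue = []
schedule(queue, clock.tick(), "note-on")   # due at logical time 0.5
schedule(queue, clock.tick(), "note-off")  # due at logical time 1.0

due_early = render_due(queue, 0.4)  # [] - frame time hasn't caught up
due_on = render_due(queue, 0.6)     # ["note-on"]
due_off = render_due(queue, 1.2)    # ["note-off"]
```

The audio side would do the same catch-up, but against a timer derived
from the device's sample rate rather than the frame clock.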
> ---
>
> Perhaps you'd like to have a real-time meeting via video to discuss
> where we can take these ideas further?
I would definitely like that - although I'm quite rusty on this topic,
you've piqued my interest in this area again.
cheers,
dave
--
Read the whole topic here: livecode:
http://lurk.org/r/topic/3t4QQRi5dBmp2kT42ZESga
Received on Fri Oct 02 2015 - 21:37:48 BST