Above is a piece of algorithmic composition built in LISP and C++. I don’t know any further details, but I find it interesting. I’m currently trying to devise something that could be referred to as algorithmic performance: musical performance based on rules and a framework in which a performer can move around. It’s nothing new, but it’s rarely framed this way explicitly. Many performers who improvise are working within some kind of structure, but that structure is seldom made explicit. It’s especially uncommon for the performer to have an algorithmic dynamic between themselves and their instrument. Traditionally there’s a 1:1 relationship between performer and instrument, or at least the performer aspires to one (the ability to control the instrument completely and communicate exactly). Despite these musical traditions, openly algorithmic processes permeate the academic community, at least.
I’ll gladly acknowledge, at any moment, that I am an appreciator of John Cage and his theories and practices. When all is said and done, he introduced algorithmic processes to the zeitgeist of American music during the mid-20th century. Whether described as chance or indeterminacy, Cage often implemented a few rules in which chance occurrences had room to develop. Sometimes those rules could be as simple as flipping a coin or allowing for silence. The establishment of rules can be described as algorithmic (although that wasn’t the crux of what Cage promoted), and it’s easy to find processes like this in any variety of places.
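A rule that simple can be sketched as a tiny program. This is my own illustration, not a reconstruction of any actual Cage procedure: a chance operation decides, event by event, whether a note sounds or is replaced by silence.

```python
import random

def chance_procedure(events, p_silence=0.5, seed=None):
    """Flip a weighted coin for each event: heads lets the event
    sound, tails substitutes a rest. With a seed, the chance
    outcome is repeatable; without one, every pass differs."""
    rng = random.Random(seed)
    return [e if rng.random() >= p_silence else "rest" for e in events]

# A short phrase, filtered through the chance rule.
phrase = ["C4", "E4", "G4", "B4", "D5"]
print(chance_procedure(phrase, seed=7))
```

The interesting part is how little the rule specifies: the performer (or the program) supplies the material, and chance decides what of it survives.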
Cage aside, I’m wondering right now what “algorithmic performance” is. How can the practice, so common in composition, be effectively implemented in performance? Is there actually a difference between algorithmic composition and algorithmic performance? I’m not sure; it’s possible there is no difference. I’m trying to figure it out, though, using more contemporary tools.
Intersection.Aggregate – Jared Tarbell and Casey Reas
I have what I consider to be a fairly useless APC40. It contains a matrix of buttons (8×5) along with traditional knobs and sliders. Its default settings with Ableton Live do what you would expect: play clips, stop clips, launch scenes, pretty much everything you can do within Live. It’s simply not interesting, and so I’m now trying to develop something in Max4Live to customize the device. I want an interface that plays with me, an interface that allows for surprises. I’m curious whether the performance with the instrument can, in a way, be as rich as that between two jazz performers. One performer responds with a particular kind of movement. The other adds subtle chromaticism to their playing. Then the first performer sees room for a key change and moves to it.
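As a rough sketch of what I mean, here is a hypothetical rule set in Python, not actual Max4Live code; the pad coordinates and the rules themselves are invented for illustration. The idea is that the device answers each button press with a chance-chosen gesture instead of a fixed 1:1 mapping:

```python
import random

# Hypothetical rule set for a responsive 8x5 pad grid (like the APC40):
# rows 0-4, columns 0-7. Each rule maps the performer's press to a
# reply pad, or to no reply at all, so the instrument "talks back"
# rather than simply triggering whatever was pressed.
RULES = [
    ("echo",      lambda row, col: (row, col)),            # repeat the press
    ("transpose", lambda row, col: ((row + 1) % 5, col)),  # shift one row up
    ("mirror",    lambda row, col: (row, 7 - col)),        # flip across the grid
    ("silence",   lambda row, col: None),                  # withhold a reply
]

def respond(row, col, rng=random):
    """Pick a rule by chance and return (rule_name, reply_pad or None)."""
    name, rule = rng.choice(RULES)
    return name, rule(row, col)

# One exchange: the performer presses pad (2, 3); the interface answers.
name, reply = respond(2, 3)
print(name, reply)
```

Even four rules like these give the performer something to react to: you press a pad, the device answers somewhere unexpected, and you respond to its response, much like the jazz exchange described above.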
I’m imagining this in an instrument/interface. I want to believe there’s room for this in our larger creative community. So, with that perspective, I’m slowly devising some ways to customize the APC40. I don’t think my ideas are inventive. I haven’t used or even touched a Monome, but I’m fairly certain what I describe is a regular part of that community. Theoretically, though, I’d like to see a day when thoroughly interacting with the instrument is considered an important part of the instrument. Then we would have fewer overly simplistic devices like the APC40.