slub at Kunsthal Aarhus

Last week Alex and I took to the road on another slub mini-tour starting in Denmark at the Kunsthal Aarhus where we ran a livecoding workshop and performed at the opening of the Aarhus Filmfestival.

The Kunsthal gallery was exhibiting “Systemics #2: As we may think (or, the next world library)” with work by Florian Hecker, Linda Hilfling, Jakob Jakobsen, Suzanne Treister, Ubermorgen, YoHa + Matthew Fuller.

Linda Hilfling and UBERMORGEN’s work comprised an Amazon print-on-demand hack, perhaps an even more elaborate version of their previous Google Will Eat Itself. The gallery floor was printed with a schematic describing the process from raw material input to finished printed books.

Suzanne Treister’s work HEXEN 2.0 included alternative/hidden histories of technology, presented as densely descriptive tarot cards and prints showing the many connections between individuals, events and inventions.

London Algorave at nnnnn

To get ourselves prepared for the Dagstuhl livecoding seminar (more on that later), we kicked off with a London Algorave at nnnnn, Ryan Jordan’s noise research laboratory in deepest Hackney. Slub had one of our better performances, which was recorded – watch this space.

*UPDATE*

Larger components make larger sounds.

Massive synth washes and brutal beats from the rock star livecoders Meta-Ex.

Meta-Ex close up.

Yee-King’s brand new visual acid generating machine reconfigured our minds.

Deershed Festival, Sonic Bike Lab, Fascinate Festival

Preparations for a busy summer: new Al Jazari installation gamepads on the production line.

This weekend Alex and I are off to the Deershed Festival in Yorkshire to bring slub technology to the younger generation. We’ll be livecoding algorave, teaching Scratch programming on Raspberry Pis and running an Al Jazari installation in between. Then it’s onwards to London for a Sonic Bike Lab with Kaffe Matthews, where we’re going to investigate the future of sonic bike technology and theory – including, possibly, bike-sensor-driven synthesis and on-the-road post-apocalyptic mesh networking.

At the end of August I’m participating in my local media arts festival – Fascinate in Falmouth, where I’ll be dispensing a dose of algorave and probably even more musical robot techno.

algoravin in the UK

A short trip around the UK for slub over the last couple of days: first a livecoding gig in London at Bartlett Nexus, UCL, at an event concerning architecture, games and handmade technology. A full video of the event (with us at the end) is here.

Also collected along the way, a photo of the xname manufacturing lab.

Then up to Birmingham to attend the Network Music Festival and do another performance late on Saturday night. While there we had a chance to meet the members of The Hub, pioneers of networked music and livecoding. It was inspiring to chat with such experienced musicians in this field. The NMF included a huge range of performances – for example Melatab, who used Kinect cameras for networked performance in a shared virtual space. I’m planning some Kinect hacking soon, so I took some photos.

slub at /* vivo */

My last /* vivo */ Mexico post: some data from our livecoding performance on the final day. This was one of those performances where we had a rough plan and got a bit too carried away by the crowd to follow it (I guess that’s one of the great things about improvisation!). The music was also influenced a great deal by mezcal, the distilled spirit from the maguey plant, which I can report is the secret ingredient of Mexican livecoding. My edit history and a screenshot of the final state of the program are online here. The new temporal recursion system was actually pretty damn challenging (hence the serious face) but in combination with Alex’s pattern generation it seemed to get people moving pretty well…

/* vivo */ musings

So much to think about after the /* vivo */ festival: how livecoding is moving on, becoming more self-critical as well as more gender-balanced. The first sign of this was the focus of the festival being almost entirely philosophical rather than technical. Previous meetings of this nature have involved a fair dose of tech minutiae – here these things hardly entered the conversations.

Show us your screens

One of the significant topics for discussion was put under the spotlight by Iohannes Zmölnig – who are the livecoding audience, what do they expect, and how far do we need to go in order to be understood by them? Do we consider the act of code projection as a spectacle (as in VJing), or is it – as Alex McLean asserts – more about authenticity: showing people what you are doing, what you are interacting with, an honest invitation? Julian Rohrhuber and Alberto De Campo discussed how livecoding interacts with our school-education conditioning: audiences think they are expected to understand what is projected in a particular didactic, limited manner (code projection as blackboard). Livecoding could be used to explore creative ways of confounding these expectations, inviting changes to the many anti-intellectual biases in our society.

Luis Navarro Del Angel presented another important way of thinking about the potential of livecoding: as a new kind of mass creativity and participation, providing artistic methods to wider groups than can be reached by traditional means. This is quite close to my own experience with livecoding music, yet I’m much more used to thinking about what programming offers those who are already artists in some form and familiar with other material. Luis’s approach focused more on livecoding’s potential for people who haven’t yet found a form of expression, and on making new languages aimed at this user group.

After some introductory workshops, the later ones followed this philosophical thread by considering livecoding approaches rather than tools. Alex and I provided a kind of slub workshop, with examples of the small experimental languages we’ve made – texture, scheme bricks and lazybots – encouraging participants to consider how their ideal personal creative programming language would work. This opens up interesting possibilities and is, I think, a more promising direction than convergence on one or two monolithic systems.

This festival was also a reminder of the importance of free software and its role in providing opportunities in places where, for whatever reason, education has not provided the tools to work with software. Access to source code – and, in the particular case of livecoding, its celebration and use as material – breeds independence, and helps in the formation of groups such as the scene in Mexico City.

Making time

Time, the ever-baffling one-directional mystery. A lot of it has been spent between the members of slub on ways to synchronise multiple machines to share a simple beat, sometimes attempting industrial-strength solutions, but the longest-standing approach we always come back to for our various ad-hoc software remains a single OSC message. This is the kind of thing that normally involves stressed pre-performance hacking, so after having to rewrite it for temporal recursion I thought I should get it down here for future reference!

The message is called “/sync” and contains two floating-point values: the first is the number of beats in a “bar” (which is legacy – we don’t use it now), the second the current beats per minute. The time the message is sent is considered to be the start of the beat. A sync message comes into my system via a daemon called syncup. All this really does is attach a timestamp to the sync message, recording the local time on my machine when it arrived, and send it on to fluxus. Shared timestamps would be better, but they make no sense without a shared clock, and shared clocks seem too fragile for our demands. The daemon polls on a fairly tight loop (100ms) and the resulting timestamp is accurate enough for our ears (fluxus runs at the frame refresh rate, which is too variable for this job).
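
The daemon’s job is small enough to sketch – this is a minimal illustration rather than the real syncup code, and send-to-fluxus and current-local-time are hypothetical names standing in for the actual OSC forwarding and clock calls:

(define (on-sync beats-per-bar bpm)
  ;; stamp the incoming /sync with the local arrival time,
  ;; then forward everything to fluxus unchanged
  (send-to-fluxus "/sync-stamped"      ;; hypothetical forwarder
                  (current-local-time) ;; hypothetical local clock read
                  beats-per-bar
                  bpm))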

So now we have a new sync message which includes a timestamp for the beat start. The first thing the system does is assume this is in the past, and that the current time has already moved ahead. There are three points of time involved: the sync time (the stamped beat start, in the past), the current system time, and the “logical time” the beat scheduling runs on.

From the sync time (in the past) and the bpm we can calculate the beat times into the future. The “logical time” is initialised with the current time from the system clock plus a safety margin, and then gets “snapped” to the nearest beat. The safety margin is needed because the synth-graph build and play messages coming from fluxus have to arrive early enough to be scheduled by fluxa’s synth engine, so it can play them with sample accuracy.
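
Computing those future beat times is simple arithmetic – a minimal sketch, assuming last-sync-time and bpm are the globals filled in from the stamped message:

;; seconds per beat from the current bpm
(define (beat-duration)
  (* (/ 1 bpm) 60))

;; the time of the nth beat after the last stamped sync
(define (beat-time n)
  (+ last-sync-time (* n (beat-duration))))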

The beat snapping has to be able to move back in time as well as forwards, to make the tiny adjustments from the sync messages (they never come in exactly when they are expected) – otherwise we would skip beats. The algorithm to do this is as follows:

(define (snap-time-to-sync time)
  (+ time (calc-offset time last-sync-time (* (/ 1 bpm) 60)))) 

(define (calc-offset time-now sync-time beat-dur)
  ;; find the difference in terms of tempo
  (let* ((diff (/ (- sync-time time-now) beat-dur))
         ;; get the fractional remainder (doesn't matter how
         ;; far in the past or future the synctime is)
         (fract (- diff (floor diff))))
    ;; do the snapping
    (if (< fract 0.5) 
        ;; need to jump forwards - convert back into seconds
        (* fract beat-dur) 
        ;; the beat is behind us, so go backwards
        (- (* (- 1 fract) beat-dur)))))
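
To show how this is used (again a sketch – current-system-time, safety-margin and advance-beat! are illustrative names, not the real fluxus calls): the logical time is initialised once, then advanced a beat at a time, re-snapping as it goes so the small corrections from incoming sync messages are absorbed:

(define safety-margin 0.2) ;; seconds – assumed value, tuned so messages arrive early enough

(define logical-time
  (snap-time-to-sync (+ (current-system-time) safety-margin)))

(define (advance-beat!)
  ;; step forward one beat and re-snap to absorb sync drift
  (set! logical-time
        (snap-time-to-sync (+ logical-time (* (/ 1 bpm) 60)))))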

The last thing needed is a global sync offset, which I add to the incoming message timestamps at the start of the process. This has to be tuned by ear, and accounts for the fact that the latency between the synth playing a note and the speakers moving air seems to vary between machines depending on many uncertain factors – sound card parameters, battery vs AC power, sound system setup, colour of your backdrop etc.
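
In code this is a single addition where the stamped message arrives – sync-offset and handle-sync are illustrative names and the offset value is made up; adding the offset to the timestamp is the only real trick:

(define sync-offset -0.02) ;; seconds – hypothetical value, tuned by ear per machine

(define (handle-sync timestamp beats-per-bar incoming-bpm)
  ;; apply the hand-tuned global offset before anything else uses the stamp
  (set! bpm incoming-bpm)
  (set! last-sync-time (+ timestamp sync-offset)))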

Other than this we tend to keep the networking tech to a minimum and use our ears and scribbled drawn scores (sometimes made from stones) to share any other musical data.