Making time

Time, the ever-baffling one-directional mystery. A lot of it has been spent between the members of slub on ways to synchronise multiple machines so they can share a simple beat. We’ve sometimes attempted industrial-strength solutions, but the longest-standing approach we always come back to for our various ad-hoc software remains a single OSC message. This is the kind of thing that normally seems to involve stressed pre-performance hacking, so after having to rewrite it for temporal recursion I thought I should get it down here for future reference!

The message is called “/sync” and contains two floating point values: the first is the number of beats in a “bar” (which is legacy – we don’t use it now) and the second is the current beats per minute. The time the message is sent is considered to be the start of the beat. A sync message comes into my system via a daemon called syncup. All this really does is attach a timestamp to the sync message, recording the local time on my machine when it arrived, before sending it on to fluxus. Shared timestamps would be better, but they don’t make any sense without a shared clock, and clock sharing seems too fragile for our demands. The daemon polls on a fairly tight loop (100ms) and the resulting timestamp seems accurate enough for our ears (fluxus runs at the frame refresh rate, which is too variable for this job).
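The timestamping step can be sketched like this – a minimal illustration only, since the post doesn’t show syncup’s code; the message shape and helper name here are hypothetical:

```javascript
// Sketch of what a daemon like syncup does: stamp each incoming
// /sync message with the local arrival time before forwarding it.
// (Hypothetical names -- the real daemon isn't shown in the post.)
function stamp_sync_message(msg, local_time) {
    // msg: { path: "/sync", args: [beats_per_bar, bpm] }
    return {
        path: msg.path,
        beats_per_bar: msg.args[0],
        bpm: msg.args[1],
        timestamp: local_time // local clock at arrival, in seconds
    };
}

var stamped = stamp_sync_message(
    { path: "/sync", args: [4, 120] },
    1000.0);
// stamped.timestamp now approximates the start of the beat
```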

So now we have a new sync message which includes a timestamp for the beat start. The first thing the system does is assume this is in the past, and that the current time has already moved ahead. There are three points of time involved: the sync time, the current time, and the “logical time” used for scheduling.

From the sync time (in the past) and the bpm we can calculate the beat times into the future. We have a “logical time” which is initialised with the current time from the system clock, has a safety margin added, and then gets “snapped” to the nearest beat. The safety margin is needed because the synth graph build and play messages coming from fluxus need to be early enough to get scheduled by fluxa’s synth engine to play with sample accuracy.
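Projecting beats forward from the sync point is just arithmetic – a minimal sketch (the function name is mine, not from the system):

```javascript
// Beat n falls at sync_time + n * beat_duration,
// where beat_duration = 60 / bpm seconds.
function beat_time(sync_time, bpm, n) {
    return sync_time + n * (60 / bpm);
}

// At 120 bpm beats are 0.5s apart, so the 4th beat after a
// sync at t=10.0 lands at t=12.0:
var fourth_beat = beat_time(10.0, 120, 4); // 12.0
```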

The beat snapping has to be able to move back in time as well as forwards, to make tiny adjustments from the sync messages (as they never come in exactly when they are expected) – otherwise we skip beats. The algorithm to do this is as follows:

(define (snap-time-to-sync time)
  (+ time (calc-offset time last-sync-time (* (/ 1 bpm) 60)))) 

(define (calc-offset time-now sync-time beat-dur)
  ;; find the difference in terms of tempo
  (let* ((diff (/ (- sync-time time-now) beat-dur))
         ;; get the fractional remainder (doesn't matter how
         ;; far in the past or future the synctime is)
         (fract (- diff (floor diff))))
    ;; do the snapping
    (if (< fract 0.5) 
        ;; need to jump forwards - convert back into seconds
        (* fract beat-dur) 
        ;; the beat is behind us, so go backwards
        (- (* (- 1 fract) beat-dur)))))
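For illustration, here is a direct JavaScript translation of the Scheme above, with a couple of worked values (assuming a sync at t=0 and 120 bpm, so beats fall at 0, 0.5, 1.0, …):

```javascript
// JavaScript translation of calc-offset / snap-time-to-sync above.
function calc_offset(time_now, sync_time, beat_dur) {
    // find the difference in terms of tempo (in beats)
    var diff = (sync_time - time_now) / beat_dur;
    // fractional remainder in [0, 1) -- it doesn't matter how far
    // in the past or future the sync time is
    var fract = diff - Math.floor(diff);
    if (fract < 0.5) {
        // the next beat is closer: jump forwards (back into seconds)
        return fract * beat_dur;
    }
    // the previous beat is closer: go backwards
    return -(1 - fract) * beat_dur;
}

function snap_time_to_sync(time, last_sync_time, bpm) {
    return time + calc_offset(time, last_sync_time, 60 / bpm);
}

// With beats at 0, 0.5, 1.0, ...
calc_offset(0.8, 0, 0.5);        // ~ 0.2  (forwards to the beat at 1.0)
calc_offset(0.7, 0, 0.5);        // ~ -0.2 (backwards to the beat at 0.5)
snap_time_to_sync(0.7, 0, 120);  // ~ 0.5
```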

The last thing that is needed is a global sync offset, which I add to the incoming message timestamps at the start of the process. This has to be tuned by ear, and accounts for the fact that the latency between the synth playing a note and the speakers moving air seems to vary between machines, depending on many uncertain factors – sound card parameters, battery vs AC power, sound system setup, colour of your backdrop etc.
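Mechanically this step is trivial – the offset (whose value is per-machine and found by ear; the one below is made up) is just added to every timestamp before anything else sees it:

```javascript
// Hypothetical sketch: apply the ear-tuned global offset to each
// incoming sync timestamp before the rest of the pipeline uses it.
var global_sync_offset = -0.02; // seconds; example value only

function adjust_timestamp(t) {
    return t + global_sync_offset;
}
```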

Other than this we tend to keep the networking tech to a minimum, and use our ears and scribbled scores (sometimes made from stones) to share any other musical data.

Aniziz: Keeping it local and independent

A large part of the Aniziz project involves porting the Germination X isometric engine from Haxe to Javascript, and trying to make things a bit more consistent and nicer to use as I go. One of the things I’ve tried to learn from experiments with functional reactive programming is to use a more declarative style, and broadly to keep all the code associated with a thing in one place.

A good real-world example is the growing ripple effect for Aniziz (which you can just about make out in the screenshot above). It consists of a single function that creates, animates and destroys itself without needing any references anywhere else:

game.prototype.trigger_ripple=function(x,y,z) {
    var that=this; // lexical scope in js annoyance

    var effect=new truffle.sprite_entity(
        this.world,
        new truffle.vec3(x,y,z),
        "images/grow.png");

    var len=1; // in seconds

    effect.spr.do_centre_middle_bottom=false;
    effect.finished_time=this.world.time+len;
    effect.needs_update=true; // will be updated every frame
    effect.spr.expand_bb=50; // expand the bbox as we're scaling up
    effect.spr.scale(new truffle.vec2(0.5,0.5)); // start small

    effect.every_frame=function() { 
        // fade out with time
        var a=(effect.finished_time-that.world.time);
        if (a>0) effect.spr.alpha=a; 
   
        // scale up with time
        var sc=1+that.world.delta*2; 
        effect.spr.scale(new truffle.vec2(sc,sc));

        // delete ourselves when done
        if (effect.finished_time<that.world.time) {
            effect.delete_me=true;
        }
    };
}

The other important lesson here is the use of time to make movement framerate independent. In this case it’s not critical, as it’s just an effect – but it’s good practice to always multiply time-varying values by the time elapsed since the last frame (called delta). This means you can specify movement in pixels per second, and more importantly means that things will remain consistent in terms of interactivity, within reason, on slower machines.
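The idea can be shown in isolation – a minimal sketch (the helper name is mine): the same speed in pixels per second covers the same distance over a second, whatever the frame rate.

```javascript
// Frame-rate independent movement: each frame advances by
// speed * delta, where delta is seconds since the last frame.
function move(x, speed, delta) {
    return x + speed * delta;
}

// 60 frames at delta = 1/60 and 20 frames at delta = 1/20 both
// simulate one second at 100 pixels/second:
var x60 = 0;
for (var i = 0; i < 60; i++) x60 = move(x60, 100, 1 / 60);
var x20 = 0;
for (var j = 0; j < 20; j++) x20 = move(x20, 100, 1 / 20);
// both end up (approximately) 100 pixels along
```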