Project Nightjar: Camouflage data visualisation and possible internet robot predators

We’ve had tens of thousands of people spotting nightjars and donating a bit of their time to sensory ecology research. The result of this (it’s still ongoing, of course, along with the new nest spotting game) is a 20MB database with hundreds of thousands of clicks recorded. One of the things we were interested in was seeing what people were mistaking for the birds – so I had a go at visualising all the clicks over the images (these are all crops of the full images, as they really don’t compress well):

clicks-ReflectanceCF017Vrgb0.54-crop

clicks-ReflectanceCF022Vrgb0.52-crop

Then, looking through the results – I saw a strange artefact:

clicks-ReflectanceCP018VRgb0.55-crop

Uncompressed high res version

My first thought was that someone had tried cheating with a script, but I can hardly imagine anyone would go to the bother, and it only appears in one image. Perhaps it was some form of bot or scraping software agent – though I thought browser click automation was done by directly interpreting the web page? Perhaps it’s a fallback for HTML5 canvas elements?

It turns out it’s a single player (playing as a monkey, age 16 to 35, who had played before) – so easy enough to filter out, but in doing so I noticed that the click order was not as regular as it looked, and it goes a bit wobbly in the middle:

Someone with very precise mouse skills then? :)

Jellyfish: A daft new language is born

After trying, and failing, to write a flocking system in jellyfish bytecode, I wrote a compiler using the prototype betablocker one. It reads a scheme-ish imperative language and generates the bytecode (which is also invented, and implemented in C++). It only took a couple of evenings and a train journey to write, and it even seems to work.

jellyfish

The basic idea is to walk through the code tree described by the scheme lists, generating bits of bytecode that fit together. Let’s take logical “not” as an example. As on GPU processors, the only datatype is a vector of 3 floats, and we define false as 0 in the x position and anything else in x as true (ignoring whatever is in y or z). There is no single instruction for “not”, so we have to build it from the other instructions. For example this bit of code:

(not (vector 0 0 0))

should return (vector 1 0 0). When we walk the tree of lists we check the first element of each list and dispatch to a set of functions, one for each type of (higher level) instruction, each of which ‘emit’s a list containing the bytecode required. The one for ‘not’ looks like this, where x is the expression, e.g. ‘(not (vector 0 0 0))’:

(define (emit-not x)
  (append
   (emit-expr (cadr x))      ;; compile the argument, leaving its value on the stack
   (emit (vector jmz 3 0))   ;; if it's 0 (false), jump to the final ldl
   (emit (vector ldl 0 0))   ;; it was true, so push 0 (false)
   (emit (vector jmr 2 0))   ;; and skip over the next instruction
   (emit (vector ldl 1 0)))) ;; it was false, so push 1 (true)

The first thing it does is emit all the instructions required for the expression passed as the second element of ‘x’, via ‘emit-expr’. With our simple example this will just push (vector 0 0 0) onto the stack, but it could be a whole load of complicated nested expressions, and it would work just the same.
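
To make the dispatch a little more concrete, the tree walker could look something like this (a minimal sketch – apart from emit-not, the emitter names here are hypothetical stand-ins for the real instruction set):

;; sketch of the tree walker: dispatch on the first element of each list
;; (emit-literal, emit-vector-literal and emit-add are made-up names)
(define (emit-expr x)
  (cond
    ((not (list? x)) (emit-literal x))            ;; e.g. a plain number becomes an ldl
    ((eq? (car x) 'not) (emit-not x))
    ((eq? (car x) 'vector) (emit-vector-literal x))
    ((eq? (car x) '+) (emit-add x))
    (else (error "unknown expression" x))))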

Back in ‘emit-not’, after the expression’s code comes the bytecode for the “not” itself:

jmz 3 0 ;; if top of stack is 0, jump forward 3 instructions (ldl 1 0)
ldl 0 0 ;; load 0 onto the stack
jmr 2 0 ;; jump forward 2 instructions (skip to next code section)
ldl 1 0 ;; load 1 onto the stack

So this just checks (and removes) the top element on the stack and pushes the opposite logical value. Pushing a single float, as the ‘ldl’ (load literal) instructions above do, expands it to a vector internally – it’s just a convenience. Some instructions (such as those involving vector maths) map to a single bytecode instruction; others, like conditionals or loops, are a bit trickier as they need to count instructions in order to skip over variable length sections of the program.
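
To give a flavour of that counting, a single-clause conditional could be emitted roughly like this (again a sketch – emit-all is a hypothetical helper that appends the code for a list of expressions, and the debug markers described further down are ignored):

;; sketch: compile one cond clause by measuring the emitted body,
;; so the jmz knows how far to jump when the condition is false
(define (emit-cond-clause x)
  (let* ((clause (cadr x))                           ;; x looks like (cond (condition expr ...))
         (body-code (emit-all (cdr clause))))        ;; emit-all is a made-up helper
    (append
     (emit-expr (car clause))                        ;; leaves 1 or 0 on the stack
     (emit (vector jmz (+ (length body-code) 1) 0))  ;; false: jump past the whole body
     body-code)))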

We add variables in the form of ‘let’, which map to addresses at the start of memory, and ‘read’ and ‘write!’ for accessing model memory like array lookups. The full flocking system looks like this, and animates a points primitive in OpenGL:

(let ((vertex 512)
      (accum-vertex 512)
      (closest 9999)
      (closest-dist 9999)
      (diff 0)
      (vel 1024))
  (loop 1 ;; infinite loop
    (loop (< vertex 532) ;; for every vertex
      ;; find the closest vertex
      (loop (< accum-vertex 532)
        (cond
          ;; if they're not the same vert
          ((not (eq? accum-vertex vertex))
           ;; get vector between the points
           (set! diff (- (read vertex) (read accum-vertex)))
           (cond
             ;; if it's closer so far
             ((< (mag diff) closest-dist)
              ;; record vector and distance
              (set! closest diff)
              (set! closest-dist (mag closest))))))
        (set! accum-vertex (+ accum-vertex 1)))
      ;; reset accum-vertex for next time
      (set! accum-vertex 512)

      ;; use closest to do the flocking, add new velocity
      ;; to old (to add some inertia)
      (write! vel (+ (* (read vel) 0.99)
                     ;; attract to centre
                     (* (+ (* (- (read vertex) (vector 0 0 0)) 0.05)
                           ;; repel from closest vertex
                           (* (normalise closest) -0.15)) 0.01)))
      ;; add velocity to vertex position
      (write! vertex (+ (read vel) (read vertex)))

      ;; reset and increment stuff
      (set! closest-dist 9999)
      (set! vel (+ vel 1))
      (set! vertex (+ vertex 1)))
    ;; reset for main loop
    (set! vertex 512)
    (set! vel 1024)))

This compiles to 112 vectors of bytecode (I should really call it vectorcode) with extra debugging information added, so we can see the start and the end of each higher level instruction. It all looks like this – and most importantly I didn’t need to write it by hand!

10 30000 0 ;; top memory positions are for registers controlling 
512 2 1    ;; program and graphics state (primitive type and number of verts)
nop 0 0    ;; space
nop 0 0    ;; for all
nop 0 0    ;; the variables
nop 0 0    ;; we use 
nop 0 0    ;; in the program
nop 0 0
nop 0 0
nop 0 0
;; starting let  <- program starts here
ldl 512 0        ;; load all the 'let' variable data up
sta 4 0
ldl 512 0
sta 5 0
ldl 9999 0
sta 6 0
ldl 9999 0
sta 7 0
ldl 0 0
sta 8 0
ldl 1024 0
sta 9 0
;; starting loop  <- start the main loop
;; starting loop
;; starting loop
;; starting cond
;; starting not
;; starting eq?
lda 5 0
lda 4 0
sub 0 0
jmz 3 0
ldl 0 0
jmr 2 0
ldl 1 0
;; ending eq?
jmz 3 0
ldl 0 0
jmr 2 0
ldl 1 0
;; ending not
jmz 38 0
;; starting set!
;; starting -
;; starting read
ldi 4 0
;; ending read
;; starting read
ldi 5 0
;; ending read
sub 0 0
;; ending -
sta 8 0
;; ending set!
;; starting cond
;; starting <
;; starting mag
lda 8 0
len 0 0
;; ending mag
lda 7 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 12 0
;; starting set!
lda 8 0
sta 6 0
;; ending set!
;; starting set!
;; starting mag
lda 6 0
len 0 0
;; ending mag
sta 7 0
;; ending set!
;; ending cond
;; ending cond
;; starting set!
;; starting +
lda 5 0
ldl 1 0
add 0 0
;; ending +
sta 5 0
;; ending set!
;; starting <
lda 5 0
ldl 532 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 2 0
jmr -72 0
;; ending loop
;; starting set!
ldl 512 0
sta 5 0
;; ending set!
;; starting write!
;; starting +
;; starting *
;; starting read
ldi 9 0
;; ending read
ldl 0.9900000095 0
mul 0 0
;; ending *
;; starting *
;; starting +
;; starting *
;; starting -
;; starting read
ldi 4 0
;; ending read
ldlv 0 0
nop 0 0
sub 0 0
;; ending -
ldl 0.05000000075 0
mul 0 0
;; ending *
;; starting *
;; starting normalise
lda 6 0
nrm 0 0
;; ending normalise
ldl -0.150000006 0
mul 0 0
;; ending *
add 0 0
;; ending +
ldl 0.009999999776 0
mul 0 0
;; ending *
add 0 0
;; ending +
sti 9 0
;; ending write!
;; starting write!
;; starting +
;; starting read
ldi 9 0
;; ending read
;; starting read
ldi 4 0
;; ending read
add 0 0
;; ending +
sti 4 0
;; ending write!
;; starting set!
ldl 9999 0
sta 7 0
;; ending set!
;; starting set!
;; starting +
lda 9 0
ldl 1 0
add 0 0
;; ending +
sta 9 0
;; ending set!
;; starting set!
;; starting +
lda 4 0
ldl 1 0
add 0 0
;; ending +
sta 4 0
;; ending set!
;; starting <
lda 4 0
ldl 532 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 2 0
jmr -160 0
;; ending loop
;; starting set!
ldl 512 0
sta 4 0
;; ending set!
;; starting set!
ldl 1024 0
sta 9 0
;; ending set!
ldl 1 0
jmz 2 0
jmr -173 0
;; ending loop
;; ending let
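
Incidentally, those ‘;; starting’ and ‘;; ending’ markers could come from a small wrapper around each emitter, along these lines (a hypothetical sketch, not the actual implementation):

;; hypothetical: bracket an emitter's output with marker strings that are
;; printed in the listing but stripped out before the bytecode reaches the VM
(define (emit-debug name code)
  (append
   (list (string-append ";; starting " name))
   code
   (list (string-append ";; ending " name))))

;; e.g. (emit-debug "not" (emit-not x))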

Ouya development experiments

The Ouya is a tiny games console designed to promote indie games rather than traditional high budget productions. It’s cheap compared to standard games hardware, and all the games are free to play, at least in demo form. It’s very easy to start making games for as it’s based on Android – you can just plug in a USB cable and treat it the same as any other Android device. You also don’t need to sign anything to start building stuff – it’s just a case of adding one line to your AndroidManifest.xml to tell the Ouya that the program is a game:

 <category android:name="tv.ouya.intent.category.GAME"/>

and adding a 732×412 icon in “res/drawable-xhdpi/ouya_icon.png” so it shows up on the menu.

130327_ouya_0021

There is a lot to like about the Ouya’s philosophy, so I tried out some graphics experiments with it to get better acquainted:

This program was made using Jellyfish, part of my increasingly odd rendering stack, which started out as a port of Fluxus to the PS2. It’s a type of microcode written inside TinyScheme and run in a separate virtual machine, which makes it possible to process a lot of geometry without garbage collection overheads. At some point I might write a compiler for this, but writing the code longhand at the moment means I can tweak the instruction set and get a better understanding of how to use it. Here is the microcode for the ribbons above; they run at 20,000 cycles per frame each (at 60 frames per second that’s about 1.2MHz):

;; register section
   8 20000 0 ;; control (pc, cycles, stack)
   mdl-size prim-tristrip 1 ;; graphics (size, primtype, renderhints)
   0 0 0 ;; pos
   0 0 0 ;; sensor addr

   ;; program data section
   mdl-start 0 0     ;; 4 address of current vertex
   mdl-start 0 0     ;; 5 address of accum vertex (loop)
   0 0 0             ;; 6 influence
   0 0 0             ;; temp

   ;; code section 
   ;; add up differences with every other vertex
   ldi  4  0         ;; load current vertex
   ldi  5  0         ;; load accum vertex
   sub  0  0         ;; get the difference
   lda  6  0         ;; load the accumulation
   add  0  0         ;; accumulate
   nrm  0  0         ;; normalise
   sta  6  0         ;; store accumulation
   add.x 5 1         ;; inc accum address

   ;; accumulation iteration 
   lda  5  0         ;; load accum address
   ldl  mdl-end 0    ;; load end address
   jlt  2  0         ;; exit if greater than model end (relative address)
   jmp  8  0         ;; return to accum-loop

   ;; end accum loop
   ;; push away from other verts
   lda  6  0         ;; load accum
   ldl -0.1 0        ;; reverse & make smaller
   mul 0 0

   ;; attract to next vertex
   ldi 4 0           ;; load current
   ldi 4 1           ;; load next
   sub 0 0           ;; get the difference
   ldl 0.4 0
   mul 0 0           ;; make smaller
   add 0 0           ;; add to accum result

   ;; do the animation
   ldi 4 0           ;; load current vertex
   add 0 0           ;; add the accumulation
   sti 4 0           ;; write to model data

   ;; reset the outer loop
   ldl 0 0           ;; load zero
   sta 6 0           ;; reset accum
   ldl mdl-start 0   
   sta 5 0           ;; reset accum address

   add.x 4 1         ;; inc vertex address
   lda 4  0          ;; load vertex address
   ldl mdl-end 0     ;; load end address
   jlt 2  0          ;; if greater than model (relative address)
   jmp 8  0          ;; do next vertex

   ;; end: reset current vertex back to start
   ldl mdl-start 0
   sta 4 0

   ;; waggle around the last point a bit
   lda mdl-end 0     ;; load vertex pos
   rnd 0 0           ;; load random vert
   ldl 0.5 0         
   mul 0 0           ;; make it smaller
   add 0 0           ;; add to vertex pos
   sta mdl-end 0     ;; write to model

   jmp 8 0
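
For a rough idea of how this sits inside TinyScheme (a sketch with made-up names – vm-load is not the real call), the opcodes and addresses are just Scheme bindings, so a fragment like the ‘waggle’ at the end can be built as ordinary data and handed over to the VM:

;; hypothetical sketch: the microcode is a list of 3-vectors built in
;; TinyScheme; vm-load stands in for whatever uploads it to the jellyfish VM
(define wobble-code
  (list
   (vector lda mdl-end 0)   ;; load the last vertex position
   (vector rnd 0 0)         ;; push a random vector
   (vector ldl 0.5 0)       ;; scale factor
   (vector mul 0 0)         ;; make the jitter smaller
   (vector add 0 0)         ;; add it to the position
   (vector sta mdl-end 0)   ;; write back to model memory
   (vector jmp 8 0)))       ;; loop back to the start of the code section

(vm-load wobble-code)       ;; hypothetical call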

slub at Kunsthal Aarhus

Last week Alex and I took to the road on another slub mini-tour, starting in Denmark at the Kunsthal Aarhus, where we ran a livecoding workshop and performed at the opening of the Aarhus Filmfestival.

IMG_6390

IMG_20131112_132248

The Kunsthal gallery was exhibiting “Systemics #2: As we may think (or, the next world library)” with work by Florian Hecker, Linda Hilfling, Jakob Jakobsen, Suzanne Treister, UBERMORGEN and YoHa + Matthew Fuller.

Linda Hilfling and UBERMORGEN’s work comprised an Amazon print-on-demand hack, perhaps an even more elaborate version of their previous Google Will Eat Itself. The gallery floor was printed with a schematic describing the process from the raw material input to the finished printed books.

IMG_20131111_120659

Suzanne Treister’s work, HEXEN 2.0, included alternative/hidden histories of technology presented as densely descriptive tarot cards and prints showing the many connections between individuals, events and inventions.

IMG_20131112_174729

Mongoose 2000

Mongoose 2000 is a system I’m developing for the Banded Mongoose Research Project. It’s a behavioural recording system for use in remote areas with sporadic internet or power. The project’s field site is in the Ugandan countryside and the system needs to run for long time frames, so there are big challenges when it comes to setting it up and debugging it remotely.

In order to make this work we’re using a Raspberry Pi as a low power central wifi node, allowing Android tablets to communicate with each other and synchronise data. There are three types of observation we need to record:

  1. Pack composition: including presence in the pack, individual weights and pregnancy state.
  2. Pup focal: studies of individual pups – who’s feeding them, when they feed themselves and when they’re playing.
  3. Group events: warning calls, moving locations, fights with other packs.

We also need to store and manage the pack information: the names, collar ids and chip ids of the individual animals. The data is passed around a bit like this:

web

The interface design on the tablets is very important – things may happen quickly, often at the same time (for instance a group event happening while a pup focal observation is being carried out), so we need multiple simultaneous things on screen, and the priority has to be on responsiveness and speed rather than initial ease of use. For these reasons it has similarities to live music performance interfaces.

We can also take advantage of the storage on the tablets to duplicate the data held on the Raspberry Pi and add redundancy. Data is transferred from the field site by downloading the entire database onto the Android tablets, which can then be emailed over the normal internet, either when it’s working locally or by taking the tablets into the nearby town where bandwidth is better.

IMG_20131105_185918

The project is a mix of cheap, replaceable hardware and mature, well used software – Raspberry Pis mean we can afford a backup or two on site, along with plenty of replacement SD cards with the OS cloned. The observation software can also be updated over the Android Play store (for bug fixes, or to change the data gathered) without any changes required on the Raspberry Pi. The platform is based on the one I built for the ‘Crap App’, along with experimental stuff I was doing with bike mounted wifi nodes with Kaffe Matthews, and includes SQLite for the underlying database on both platforms (providing atomic writes and journalling), with TinyScheme on Android and Racket on the Raspberry Pi, allowing me to share a lot of the code between the hardware platforms.
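
As a flavour of how that code sharing works (a sketch only – the table and column names here are made up, not the actual Mongoose 2000 schema), building the SQL as plain strings means the same function can run unchanged in TinyScheme on the tablets and Racket on the Pi:

;; hypothetical example: construct an insert statement for a group event
;; using only standard string operations, so it works in both TinyScheme and Racket
(define (sql-insert-group-event pack-id event-type timestamp)
  (string-append
   "insert into group_event (pack_id, event_type, time) values ("
   (number->string pack-id) ", "
   "'" event-type "', "
   "'" timestamp "');"))

;; e.g. (sql-insert-group-event 3 "alarm call" "2013-11-05 18:59")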