Egg camouflage evolution tests in different nest sites

I’ve spent some time testing Project Nightjar EggLab: clicking on algorithmically generated eggs on backgrounds taken from nightjar nest sites, and recording how long each egg takes to find. It’s designed for lots of people to play in parallel, but I wanted to test it before coming up with more gameplay mechanic ideas.

The timing is used to rank the eggs: I keep the top 1024 individuals that took longest to find, and generate new ones from them. The idea is that successful traits will spread through the population and the average score will go up – from this small test that seems to be the case, with a slow but consistent rise over the latest 500 eggs:

[Graph: average fitness over the latest 500 eggs]
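
Under the hood this is a straightforward genetic algorithm with search time as the fitness function. Here’s a minimal sketch of the selection and breeding steps in Scheme – the names (egg-time, crossover, mutate and so on) are hypothetical stand-ins rather than EggLab’s actual code, the genome is simplified to a list of numbers, and I’m assuming a Scheme with sort, list-head and random to hand, as in Guile:

;; an egg pairs a genome with the time it survived before being spotted
(define (make-egg genome time) (cons genome time))
(define (egg-genome egg) (car egg))
(define (egg-time egg) (cdr egg))

;; rank by search time and keep the 1024 hardest to find
(define (select population)
  (let ((ranked (sort population
                      (lambda (a b) (> (egg-time a) (egg-time b))))))
    (if (> (length ranked) 1024)
        (list-head ranked 1024)
        ranked)))

;; mix two genomes gene by gene
(define (crossover a b)
  (map (lambda (ga gb) (if (zero? (random 2)) ga gb)) a b))

;; occasionally nudge a gene by a small random amount
(define (mutate g)
  (map (lambda (v) (if (zero? (random 10))
                       (+ v (* 0.01 (- (random 100) 50)))
                       v))
       g))

;; breed a new genome from two randomly picked survivors
(define (breed survivors)
  (let ((mum (list-ref survivors (random (length survivors))))
        (dad (list-ref survivors (random (length survivors)))))
    (mutate (crossover (egg-genome mum) (egg-genome dad)))))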

Most of the eggs are still really easy to see, but some of them take a few seconds and every now and again there is a good one that can take longer. These are some nest sites from the fiery-necked nightjar, which seems to consistently favour leafy ground – the last one took me a while to spot:

[Photos: three fiery-necked nightjar nest sites, each with an evolved egg]

These are the top 50 eggs for the fiery-necked population. It’s quite noisy with false positives, because if you get distracted while playing, the egg on screen scores highly (this is one of the things to fix – one idea is sketched below):

[Image: top 50 eggs, fiery-necked nightjar population]
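
One possible fix for the distraction problem – my speculation, not something EggLab does yet – is simply to clamp each raw timing to a ceiling before it counts towards fitness, so an egg left on screen while the player wanders off can’t rack up a huge score:

;; clamp a raw search time (in seconds) to an arbitrary ceiling
;; so distracted players don't produce runaway fitness scores
(define max-search-time 30)

(define (clamp-time t)
  (min t max-search-time))

With lots of players, taking the median time over several sightings of the same egg would be more robust still – something the parallel design of the game should make easy.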

For comparison, here is the top 50 for the Mozambique nightjar:

[Image: top 50 eggs, Mozambique nightjar population]

These birds nest on a bigger variety of sites, including bare earth – here’s a good example:

[Photo: an evolved egg on bare earth]

The other Project Nightjar citizen science games, where you can search for real nightjars and nest sites, can be found here.

Cornwall “Let’s Get Digital” presentation

Here’s a presentation I gave at the end of last year at a Creative Skills Cornwall meeting at Falmouth University. I introduced the problem of a growing producer/consumer digital divide, the need for more public discourse on the politics of technology, and how free software, CodeClub, livecoding, algorithmic weaving and sonic bikes can point to other relationships we can have with technology.

The talk went down really well, but the slides are fairly minimal, so it might not be clear what it was all about from them alone :)

[Slide from the presentation]

Raspberry Pi: Built for graphics livecoding

I’m working on a top secret project for Sam Aaron of Meta-eX fame involving the Raspberry Pi, and at the same time thinking of my upcoming CodeClub lessons this term – we have a bunch of new Raspberry Pis to use and the kids are at the point where they want to move on from Scratch.

This is a screenshot of the same procedural landscape demo – previously running on Android/OUYA, now on the Raspberry Pi – with mangled texture colours and a cube added via a new livecoding repl:

[Photo: the landscape demo running on the Raspberry Pi]

Based on my previous experiments, this program uses the Raspberry Pi’s GPU (the VideoCore IV part of the BCM2835). It’s fast, allows compositing on top of whatever else you are running at the time, and you can run it without X windows for more CPU and memory – sounds like a great graphics livecoding GPU to me!
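
To give an idea of what livecoding a cube at the repl might look like: the syntax below is fluxus-style and purely illustrative – the actual forms in this new repl may well end up different:

;; build a red cube sitting above the landscape
(define cube
  (with-state
   (translate (vector 0 2 0))
   (colour (vector 1 0 0))
   (build-cube)))

;; spin it a little each frame
(every-frame
 (with-primitive cube
   (rotate (vector 0 1 0))))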

Here’s a close-up of the nice dithering on the texture – I’m not sure yet why the colours are so different from the OUYA version, perhaps a dodgy blend mode or a difference in PNG loading:

[Photo: close-up of the texture dithering]

The code is here (a bit of a mess – I’m in the process of cleaning it all up). You can build it in the jni folder by calling “scons TARGET=RPI”. This is another attempt – looks like my objects are inside out:

[Photo: another attempt, with inside-out objects]

Some post-Snowden thoughts

One of the most interesting outcomes of the Snowden revelations for me is that they have exposed to the light of day an awful lot about how different groups of people relate to technology and authority. There is the side that worries about the internet becoming the “worst tool of human oppression in all of human history”, and then we have the “I’ve got nothing to hide so I don’t care” vast majority. I think a lot of us have to be in the second group most of the time, just to operate normally – even if we have sympathies with the first.

[Image: map]

One problem is that the internet has gradually become intertwined with our societies to the point where it is inconceivable that we could have a functioning civilisation without it. This does, I think, provide a sense of unease to most people – but at the same time it’s easy to brush away. That we take the internet entirely for granted is perhaps the most surprising thing of all, but I think it’s primarily down to that unease: the easiest option is to trivialise its role.

In an attempt to keep up with the news in a realm I know next to nothing about, I’ve been reading a book called Applied Cryptography by Bruce Schneier – the book was first published in the mid-nineties but feels very current, fascinating and probably quite unique in that it comprises half cultural/political discussion and half source code. On page two he explains three vital requirements for social interaction carried out through computers that cryptography provides (beyond secrecy):

– Authentication. It should be possible for the receiver of a message to ascertain its origin; an intruder should not be able to masquerade as someone else.

– Integrity. It should be possible for the receiver of a message to verify that it has not been modified in transit; an intruder should not be able to substitute a false message for a legitimate one.

– Nonrepudiation. A sender should not be able to falsely deny later that he sent the message.
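
As a toy illustration of the integrity property – nothing like real cryptography, which uses keyed cryptographic hashes, but it shows the shape of the protocol – the sender transmits a digest alongside the message, and the receiver recomputes it:

;; a toy digest: sum the character codes of the message
;; (trivially forgeable! real integrity checks use keyed
;; cryptographic hashes - this only shows the shape of the idea)
(define (toy-digest msg)
  (let loop ((i 0) (sum 0))
    (if (= i (string-length msg))
        sum
        (loop (+ i 1) (+ sum (char->integer (string-ref msg i)))))))

;; the receiver recomputes the digest and compares it
;; with the one sent alongside the message
(define (intact? msg digest)
  (= (toy-digest msg) digest))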

Jaron Lanier wrote in “You Are Not a Gadget” that we become “less human” as we use online services like Twitter and Facebook, submitting ourselves to their abstractions rather than demanding more from them. I think Lanier’s underlying message has some truth to it, but his blame is mostly in the wrong place.

For all sorts of technical, political and accidental reasons, we are all being trained to communicate without cryptography, whilst having evolved as humans to understand social interaction in ways that absolutely require it. The evidence from psychology and history is that a society that reduces communication to this level does not have a bright future. One solution I like a lot is the approach of the cryptoparty movement – a great way to spread understanding of these issues and their solutions widely.

Oh, and this is my public key.

Procedural landscape demo on OUYA/Android

A glitchy, procedural, infinite-ish landscape demo running on Android and OUYA. Use the left joystick to move around on OUYA, or swipe on Android devices with touchscreens. Here’s the apk, and the source is here.

[Screenshot: the landscape demo on Android]

It’s great to be able to have a single binary that works across all these devices – from OUYA’s TV screen sizes to phones, and using the standard gesture interface at the same time as the OUYA controller.

The graphics are programmed in Jellyfish Lisp, using Perlin noise to create the landscape. The language is probably still a bit too close to the underlying bytecode in places, but function calls are working and it’s getting easier to write and experiment with the code.

(define terrain
  '(let ((vertex positions-start)
         (flingdamp (vector 0 0 0))
         (world (vector 0 0 0)))

     ;; recycle a triangle which is off the screen
     (define recycle 
       (lambda (dir)         
         ;; shift along x and y coordinates:
         ;; set z to zero for each vertex
         (write! vertex       
                 (+ (*v (read vertex) 
                        (vector 1 1 0)) dir))
         (write! (+ vertex 1) 
                 (+ (*v (read (+ vertex 1)) 
                        (vector 1 1 0)) dir))
         (write! (+ vertex 2) 
                 (+ (*v (read (+ vertex 2)) 
                        (vector 1 1 0)) dir))

         ;; get the perlin noise values for each vertex
         (let ((a (noise (* (- (read vertex) world) 0.2)))
               (b (noise (* (- (read (+ vertex 1)) 
                               world) 0.2)))
               (c (noise (* (- (read (+ vertex 2))
                               world) 0.2))))

           ;; set the z coordinate for height
           (write! vertex 
                   (+ (read vertex) 
                      (+ (*v a (vector 0 0 8)) 
                         (vector 0 0 -4))))
           (write! (+ vertex 1) 
                   (+ (read (+ vertex 1)) 
                      (+ (*v b (vector 0 0 8)) 
                         (vector 0 0 -4))))
           (write! (+ vertex 2) 
                   (+ (read (+ vertex 2)) 
                      (+ (*v c (vector 0 0 8)) 
                         (vector 0 0 -4))))

           ;; recalculate normals
           (define n (normalise 
                      (cross (- (read vertex)
                                (read (+ vertex 2)))
                             (- (read vertex)
                                (read (+ vertex 1))))))

           ;; write to normal data
           (write! (+ vertex 512) n)
           (write! (+ vertex 513) n)
           (write! (+ vertex 514) n)

           ;; write the z height as texture coordinates
           (write! (+ vertex 1536) 
                   (*v (swizzle zzz a) (vector 0 5 0)))          
           (write! (+ vertex 1537) 
                   (*v (swizzle zzz b) (vector 0 5 0)))          
           (write! (+ vertex 1538) 
                   (*v (swizzle zzz c) (vector 0 5 0))))))

     ;; forever
     (loop 1
       ;; add inertia to the fling/gamepad joystick input
       (set! flingdamp (+ (* flingdamp 0.99)
                          (*v
                           (read reg-fling)
                           (vector 0.01 -0.01 0))))

       (define vel (* flingdamp 0.002))
       ;; update the world coordinates
       (set! world (+ world vel))

       ;; for each vertex
       (loop (< vertex positions-end)         

         ;; update the vertex position
         (write! vertex (+ (read vertex) vel))
         (write! (+ vertex 1) (+ (read (+ vertex 1)) vel))
         (write! (+ vertex 2) (+ (read (+ vertex 2)) vel))

         ;; check for out of area polygons to recycle 
         (cond
          ((> (read vertex) 5.0)
           (recycle (vector -10 0 0)))         
          ((< (read vertex) -5.0)
           (recycle (vector 10 0 0))))
         
         (cond
          ((> (swizzle yzz (read vertex)) 4.0)
           (recycle (vector 0 -8 0)))
          ((< (swizzle yzz (read vertex)) -4.0)
           (recycle (vector 0 8 0))))

         (set! vertex (+ vertex 3)))
       (set! vertex positions-start))))
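
The noise call in there is a built-in VM instruction rather than Lisp. For reference, here’s roughly the idea in ordinary Scheme – one octave of smoothed value noise in one dimension, rather than true gradient Perlin noise, and nothing to do with the VM’s actual implementation:

;; a repeatable pseudo-random value in [0,1) for an integer coordinate
(define (lattice-value i)
  (/ (modulo (* i 2654435761) 4294967296) 4294967296.0))

;; smoothly interpolate between the lattice values either side of x
(define (value-noise x)
  (let* ((i (inexact->exact (floor x)))
         (f (- x i))                 ;; fractional part
         (u (* f f (- 3 (* 2 f))))   ;; smoothstep weighting
         (a (lattice-value i))
         (b (lattice-value (+ i 1))))
    (+ a (* u (- b a)))))

The important property is that it’s a pure function of position, so a recycled triangle always gets the same height back for the same world coordinate.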

This Lisp program compiles to 362 vectors of bytecode at startup, and runs well even on my cheap Android tablet. The speed seems close enough to native C++ to be worth the effort, and it’s much more flexible (i.e. future livecoding/JIT compilation possibilities). The memory layout is shown below – executable instructions and model data are packed into the same address space, and no memory allocation happens while it’s running (no garbage collection, not even any C mallocs). The memory size is configurable, but the nature of the system means it would be possible to put executable data into unused graphics sections (e.g. normals or vertex colours) if appropriate.

[Diagram: Jellyfish VM memory layout]
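
Reading the offsets used in the terrain code above: from a vertex position address, (+ vertex 512) lands on its normal and (+ vertex 1536) on its texture coordinate, so each graphics section looks to be 512 vectors long – with, presumably, vertex colours in between. The numbers here are my illustration; the diagram is the authority:

;; illustrative only - wherever the model data actually begins
(define positions-start 512)
(define normals-start   (+ positions-start 512))
(define colours-start   (+ positions-start 1024))  ;; presumption
(define texcoords-start (+ positions-start 1536))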

Functions, abstraction and compiling

More puzzles from my compiler writing experiments – this time figuring out how function calls work. They are so ingrained a part of computer languages that it seems somehow surprising to me how challenging they are to implement as bytecode.

Conceptually, functions are primarily a means for humans to break problems down into smaller parts in order to reason about them better – a process called abstraction. What is good for human reasoning is also good for efficient use of finite memory – so a good function can be reused in as many places as possible (it is general, in other words).

At the low level, when a function is called we jump to a new location in the code, do some stuff, and then return to where we came from. We also need to deal with arguments passed to the function (giving it different values to work on), and we need to keep things like the return address and the result of the function somewhere safe.

The Jellyfish Virtual Machine is a stack machine – everything is stored on a single stack of values (vectors, in this case) that we can push to and pop from. Calling a function is a case of:

1. Calculating and pushing the return address to the stack.
2. Calculating and pushing all the arguments onto the stack in order.
3. Jumping to the function pointer.

;; return a list of vector instructions to generate 
;; a function call to an existing function pointer
(define (emit-fncall x addr)
  ;; generate instructions to eval and push all the arguments
  (let ((args (emit-expr-list (cdr x))))
    (append
     ;; first push the address we want to return to onto the stack
     ;; skip the argument pushing code and the following function jump
     ;; we need to tell the linker to fix the address later on:
     (emit (list 'add-abs-loc 'this 1
                 ;; adds absolute location to the offset
                 (vector ldl (+ (length args) 2) 0)))
     args ;; code to push arguments to stack
     (emit (vector jmp addr 0))))) ;; jump to the function pointer

One complexity is that we won’t know the exact address of the function until the compilation process is finished, so we need to tag the relevant instruction to be modified in a second pass later on – this is called linking, and it stitches the addresses together properly after compiling.
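
A minimal sketch of what that second pass could look like – this is my reconstruction, not Jellyfish’s actual linker. I’m assuming the tag format seen above: 'add-abs-loc, then a location name ('this meaning the instruction’s own address), then the index of the operand to patch, then the instruction vector itself:

;; walk the emitted code; wherever an instruction is tagged with
;; 'add-abs-loc, add the now-known absolute address to the tagged operand
(define (link-program code locations)
  (let loop ((pc 0) (rest code) (out '()))
    (if (null? rest)
        (reverse out)
        (let ((instr (car rest)))
          (loop
           (+ pc 1) (cdr rest)
           (cons
            (if (and (pair? instr) (eq? (car instr) 'add-abs-loc))
                (let* ((base (if (eq? (cadr instr) 'this)
                                 pc  ;; relative to this instruction
                                 ;; otherwise look up a named section,
                                 ;; e.g. '((function-code . 2048))
                                 (cdr (assq (cadr instr) locations))))
                       (index (caddr instr))
                       (v (cadddr instr)))
                  ;; patch the operand in place to an absolute address
                  (vector-set! v index (+ (vector-ref v index) base))
                  v)
                instr)
            out))))))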

When we are in the function with the stack set up correctly we need to:

1. Pop all the arguments from the stack and put them somewhere they can be referred to by name.
2. Do the body of the function.
3. Store the end result on the stack.
4. Jump back to the return address.

When finished, the only thing left on the stack should be the result of the function call, at the top. The code to generate all this is provided by the core “lambda” function (which can be assigned to a name for calling later, or called directly – anonymously).

;; lambda - create a function, and return the pointer to it
(define (emit-lambda x)
  (let* ((body
          ;; first we need to get all the arguments out of the
          ;; stack so the function body can refer to them
          (append
           (map ;; map over the argument names
            (lambda (arg)
              (make-variable! arg)
              ;; store the value where it can be looked up by the name
              (vector sta (variable-address arg) 0))
            (cadr x))
           ;; now args are stored, do the body
           (emit-expr-list (cddr x))
           ;; swap ret pointer with the result 
           ;; so it's the top of the stack
           (emit (vector swp 0 0))
           ;; jump to the address on the stack
           (emit (vector jms 0 0))))
         ;; make-function! stores the code in a table 
         ;; and returns the address the function will exist at
         (loc (make-function! body)))
    (append
     (emit
      ;; use the linker again to offset 
      ;; from function code section start
      (list 'add-abs-loc 'function-code 1
            (vector ldl loc 0))))))
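
Putting the two halves together, here’s a rough trace of the stack for a call like ((lambda (x) (* x x)) 3) – hypothetical and simplified, since values in the VM are really vectors and the body compiles to more than one instruction:

;; caller side (emit-fncall):
;;   ldl <return-addr>   stack: [ret]      push where to come back to
;;   ldl 3               stack: [ret 3]    push the argument
;;   jmp <function-addr>                   jump to the function pointer
;;
;; function side (emit-lambda):
;;   sta <x-addr>        stack: [ret]      pop the argument, store it as x
;;   <body>              stack: [ret 9]    result left on top
;;   swp                 stack: [9 ret]    swap result and return address
;;   jms                 stack: [9]        pop the address and jump back
;;
;; ...and the caller carries on with the result on top of the stack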