News from egglab

9,000 players, 20,000 games played and 400,000 tested egg patterns later, we have over 30 generations complete on most of our artificial egg populations. The overall average egg difficulty (the time it takes players to spot an egg) has risen from about 0.4 seconds at the start to 2.5 seconds.

Thank you to everyone who contributed their time to playing the game! We spawned 4 brand new populations last week, and we’ll continue running the game for a while yet.

In the meantime, I’ve started working on ways to visualise the 500MB of pattern-generating code that we’ve evolved so far. Here are all the eggs for one of the 20 populations; each row is a generation of 127 eggs, starting at the top, ordered by fitness score from left to right:

viz-2-cf-0-small

This tree is perhaps more useful. The ancestor egg at the top is the first generation, and you can see how mutations happen and successful variants get selected.

viz-2-cf-1-6-tree-small

Egglab – pattern generation obsession

I’m putting the final pieces together for the release of the all-new Project Nightjar game (due in the run up to Easter, of course!), and automatic pattern generation has been a focus right up to this stage. The challenge I like most about citizen science is that along with all the ‘normal’ creative restrictions of game design (is it fun? will it work in the browser?) you also have to satisfy the fairly whopping constraints of the science itself: working out which design decisions impact on the observations you are making, and being sure that they will be robust to peer review in the context of publication. I never had to worry about that with PlayStation games 🙂

variation

pattern2gen

With this game, similar to the last two, we want to analyse people’s ability to recognise types of pattern in a background image. Crucially, this is a completely different perception process from recognition of a learned pattern (a ‘search image’), so we don’t want to generate the exact same egg each time from the same description – we don’t want people to ‘learn’ them. This also makes sense in the natural context, of course: an individual bird’s eggs will not be identical, because many additional non-deterministic processes are involved in creating the pattern.

The base images we are using are wrapped Perlin noise at different scales, with different thresholds applied. These are then rotated and combined with each other and with plain colours using the browser’s built-in composite operations. Ideally we would generate the noise each time we need it with a different random seed to make them all unique, but this is way too slow for HTML5 Canvas (pixel processing in Javascript is still painful at this scale). To get around this we pre-render a set of variations of the noise images: the genetic program picks one of four scales and one of two thresholds (or no threshold), and we randomly pick a new variation each time we render the egg. The image at the top shows the variation that happens across 6 example programs. Below are some of the noise images we’re using:

noise-patterns
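As a rough sketch of how the renderer can pick one of these pre-rendered images – this isn’t the actual game code (find-image and image-lib are the helpers used by the egg drawing code, while num-variations, random and the naming scheme are stand-ins) – the idea is:

;; the genetic program fixes the scale and threshold, but the variation
;; index is re-rolled on every render, so no two eggs drawn from the
;; same program are pixel-identical
(define (pick-noise scale threshold)
  (find-image (string-append "noise-" (number->string scale)
                             "-" (number->string threshold)
                             "-" (number->string (random num-variations)))
              image-lib))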

Raspberry Pi: Built for graphics livecoding

I’m working on a top secret project for Sam Aaron of Meta-eX fame involving the Raspberry Pi, while also thinking about my upcoming CodeClub lessons this term – we have a bunch of new Raspberry Pis to use, and the kids are at the point where they want to move on from Scratch.

This is a screenshot of the procedural landscape demo that previously ran on Android/OUYA, now running on the Raspberry Pi, with mangled texture colours and a cube added via a new livecoding repl:

IMG_20140108_232857

Based on my previous experiments, this program uses the Raspberry Pi’s GPU (the VideoCore IV part of the BCM2835). It’s fast, it allows compositing on top of whatever else you are running at the time, and you can run it without X windows to free up CPU and memory – sounds like a great graphics livecoding GPU to me!

Here’s a close up of the nice dithering on the texture – not sure yet why the colours are so different from the OUYA version, perhaps a dodgy blend mode or a PNG format reading difference:

IMG_20140108_232914

The code is here (a bit of a mess – I’m in the process of cleaning it all up). You can build it in the jni folder by calling “scons TARGET=RPI”. This is another attempt – it looks like my objects are inside out:

IMG_20140109_004111

Procedural landscape demo on OUYA/Android

A glitchy procedural, infinite-ish landscape demo running on Android and OUYA. Use the left joystick to move around on OUYA, or swipe on Android devices with touchscreens. Here’s the apk, and the source is here.

Screenshot_2014-01-06-07-18-45

It’s great to be able to have a single binary that works across all these devices – from the OUYA’s TV screen sizes down to phones – using the standard gesture interface at the same time as the OUYA controller.

The graphics are programmed in Jellyfish Lisp, using Perlin noise to create the landscape. The language is probably still a bit too close to the underlying bytecode in places, but the function calling is working and it’s getting easier to write and experiment with the code.

(define terrain
  '(let ((vertex positions-start)
         (flingdamp (vector 0 0 0))
         (world (vector 0 0 0)))

     ;; recycle a triangle which is off the screen
     (define recycle 
       (lambda (dir)         
         ;; shift along x and y coordinates:
         ;; set z to zero for each vertex
         (write! vertex       
                 (+ (*v (read vertex) 
                        (vector 1 1 0)) dir))
         (write! (+ vertex 1) 
                 (+ (*v (read (+ vertex 1)) 
                        (vector 1 1 0)) dir))
         (write! (+ vertex 2) 
                 (+ (*v (read (+ vertex 2)) 
                        (vector 1 1 0)) dir))

         ;; get the perlin noise values for each vertex
         (let ((a (noise (* (- (read vertex) world) 0.2)))
               (b (noise (* (- (read (+ vertex 1)) 
                               world) 0.2)))
               (c (noise (* (- (read (+ vertex 2))
                               world) 0.2))))

           ;; set the z coordinate for height
           (write! vertex 
                   (+ (read vertex) 
                      (+ (*v a (vector 0 0 8)) 
                         (vector 0 0 -4))))
           (write! (+ vertex 1) 
                   (+ (read (+ vertex 1)) 
                      (+ (*v b (vector 0 0 8)) 
                         (vector 0 0 -4))))
           (write! (+ vertex 2) 
                   (+ (read (+ vertex 2)) 
                      (+ (*v c (vector 0 0 8)) 
                         (vector 0 0 -4))))

           ;; recalculate normals
           (define n (normalise 
                      (cross (- (read vertex)
                                (read (+ vertex 2)))
                             (- (read vertex)
                                (read (+ vertex 1))))))

           ;; write to normal data
           (write! (+ vertex 512) n)
           (write! (+ vertex 513) n)
           (write! (+ vertex 514) n)

           ;; write the z height as texture coordinates
           (write! (+ vertex 1536) 
                   (*v (swizzle zzz a) (vector 0 5 0)))          
           (write! (+ vertex 1537) 
                   (*v (swizzle zzz b) (vector 0 5 0)))          
           (write! (+ vertex 1538) 
                   (*v (swizzle zzz c) (vector 0 5 0))))))

     ;; forever
     (loop 1
       ;; add inertia to the fling/gamepad joystick input
       (set! flingdamp (+ (* flingdamp 0.99)
                          (*v
                           (read reg-fling)
                           (vector 0.01 -0.01 0))))

       (define vel (* flingdamp 0.002))
       ;; update the world coordinates
       (set! world (+ world vel))

       ;; for each vertex
       (loop (< vertex positions-end)         

         ;; update the vertex position
         (write! vertex (+ (read vertex) vel))
         (write! (+ vertex 1) (+ (read (+ vertex 1)) vel))
         (write! (+ vertex 2) (+ (read (+ vertex 2)) vel))

         ;; check for out of area polygons to recycle 
         (cond
          ((> (read vertex) 5.0)
           (recycle (vector -10 0 0)))         
          ((< (read vertex) -5.0)
           (recycle (vector 10 0 0))))
         
         (cond
          ((> (swizzle yzz (read vertex)) 4.0)
           (recycle (vector 0 -8 0)))
          ((< (swizzle yzz (read vertex)) -4.0)
           (recycle (vector 0 8 0))))

         (set! vertex (+ vertex 3)))
       (set! vertex positions-start))))

This lisp program compiles to 362 vectors of bytecode at startup, and runs well even on my cheap Android tablet. The speed seems close enough to native C++ to be worth the effort, and it’s much more flexible (i.e. future livecoding/JIT compilation possibilities). The memory layout is shown below: it packs executable instructions and model data into the same address space, and doesn’t use any memory allocation while it’s running (no garbage collection, and not even any C mallocs). The memory size is configurable, but the nature of the system is such that it would be possible to put executable data into unused graphics sections (e.g. normals or vertex colours), if appropriate.

jvm

Jellyfish: A daft new language is born

After trying, and failing, to write a flocking system in jellyfish bytecode, I wrote a compiler based on the prototype betablocker one. It reads a scheme-ish imperative language and generates bytecode (which is also invented, and implemented in C++). It only took a couple of evenings and a train journey to write, and it even seems to work.

jellyfish

The basic idea is to walk through the code tree described by the scheme lists, generating bits of bytecode that fit together. Let’s take logical “not” as an example. As on GPU processors, the only datatype is vectors of 3 floats, and we define false as 0 in the x position and anything else in x to be true (ignoring what’s in y or z). There is no single instruction for “not”, so we have to build it from the other instructions. For example, this bit of code:

(not (vector 0 0 0))

should return (vector 1 0 0). When we are walking the tree of lists we check the first element and dispatch to a set of functions, one for each type of (higher level) instruction, each of which ‘emit’s a list containing the required bytecode. The one for ‘not’ looks like this, where x is the expression, e.g. ‘(not (vector 0 0 0))’:

(define (emit-not x)
  (append
   (emit-expr (cadr x))      ;; push the result of the argument expression
   (emit (vector jmz 3 0))   ;; if it's 0 (false), jump to the ldl 1 below
   (emit (vector ldl 0 0))   ;; it was true, so push 0 (false)
   (emit (vector jmr 2 0))   ;; and skip over the final instruction
   (emit (vector ldl 1 0)))) ;; it was false, so push 1 (true)

The first thing it does is emit all the instructions required for the expression passed as the second element of ‘x’, via ‘emit-expr’. In our simple example this will just push (vector 0 0 0) onto the stack, but it could be a whole load of complicated nested expressions, and it would work the same.

After that we have some bytecode:

jmz 3 0 ;; if top of stack is 0, jump forward 3 instructions (ldl 1 0)
ldl 0 0 ;; load 0 onto the stack
jmr 2 0 ;; jump forward 2 instructions (skip to next code section)
ldl 1 0 ;; load 1 onto the stack

So this just checks (and removes) the top element on the stack and pushes the opposite logical value. Pushing a single float, as the ‘ldl’ (load literal) instructions above do, expands to a vector value internally; it’s just a convenience. Some instructions (such as those involving vector maths) map to a single bytecode instruction, while others like conditionals or loops are a bit trickier, as they need to count instructions in order to skip over variable length sections of program.
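To illustrate that instruction counting, here’s a sketch of how a two-branch conditional could be emitted in the same style – this isn’t code from the actual compiler, and it assumes emit-expr returns a flat list of instruction vectors (with the debugging comments stripped):

(define (emit-if x)
  ;; x is e.g. '(if pred then-expr else-expr)
  (let ((then-code (emit-expr (caddr x)))
        (else-code (emit-expr (cadddr x))))
    (append
     (emit-expr (cadr x)) ;; push the predicate result
     ;; if it's false, jump over the then branch and its trailing jmr
     (emit (vector jmz (+ (length then-code) 2) 0))
     then-code
     ;; then branch done - skip past the else branch
     (emit (vector jmr (+ (length else-code) 1) 0))
     else-code)))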

We add variables in the form of ‘let’, which map to addresses at the start of memory, and ‘read’ and ‘write’ for accessing model memory like array lookups. The full flocking system looks like this, and animates a points primitive in OpenGL:

(let ((vertex 512) 
      (accum-vertex 512)  
      (closest 9999)
      (closest-dist 9999)
      (diff 0)
      (vel 1024))
  (loop 1 ;; infinite loop
    (loop (< vertex 532) ;; for every vertex
      ;; find the closest vertex
      (loop (< accum-vertex 532) 
        (cond 
         ;; if they're not the same vert
         ((not (eq? accum-vertex vertex))
          ;; get vector between the points
          (set! diff (- (read vertex) (read accum-vertex)))
          (cond 
           ;; if it's closer so far
           ((< (mag diff) closest-dist)
            ;; record vector and distance
            (set! closest diff)
            (set! closest-dist (mag closest))))))
        (set! accum-vertex (+ accum-vertex 1)))
      ;; reset accum-vertex for next time
      (set! accum-vertex 512)

      ;; use closest to do the flocking, add new velocity 
      ;; to old (to add some inertia)
      (write! vel (+ (* (read vel) 0.99)
                     ;; attract to centre
                     (* (+ (* (- (read vertex) (vector 0 0 0)) 0.05)
                           ;; repel from closest vertex
                           (* (normalise closest) -0.15)) 0.01)))
      ;; add velocity to vertex position
      (write! vertex (+ (read vel) (read vertex))) 

      ;; reset and increment stuff
      (set! closest-dist 9999)
      (set! vel (+ vel 1))
      (set! vertex (+ vertex 1)))
    ;; reset for main loop
    (set! vertex 512)
    (set! vel 1024)))

This compiles to 112 vectors of bytecode (I should really call it vectorcode), with extra debugging information added so we can see the start and end of each higher level instruction. It all looks like this – and most importantly, I didn’t need to write it by hand!

10 30000 0 ;; top memory positions are for registers controlling 
512 2 1    ;; program and graphics state (primitive type and number of verts)
nop 0 0    ;; space
nop 0 0    ;; for all
nop 0 0    ;; the variables
nop 0 0    ;; we use 
nop 0 0    ;; in the program
nop 0 0
nop 0 0
nop 0 0
;; starting let  <- program starts here
ldl 512 0        ;; load all the 'let' variable data up
sta 4 0
ldl 512 0
sta 5 0
ldl 9999 0
sta 6 0
ldl 9999 0
sta 7 0
ldl 0 0
sta 8 0
ldl 1024 0
sta 9 0
;; starting loop  <- start the main loop
;; starting loop
;; starting loop
;; starting cond
;; starting not
;; starting eq?
lda 5 0
lda 4 0
sub 0 0
jmz 3 0
ldl 0 0
jmr 2 0
ldl 1 0
;; ending eq?
jmz 3 0
ldl 0 0
jmr 2 0
ldl 1 0
;; ending not
jmz 38 0
;; starting set!
;; starting -
;; starting read
ldi 4 0
;; ending read
;; starting read
ldi 5 0
;; ending read
sub 0 0
;; ending -
sta 8 0
;; ending set!
;; starting cond
;; starting <
;; starting mag
lda 8 0
len 0 0
;; ending mag
lda 7 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 12 0
;; starting set!
lda 8 0
sta 6 0
;; ending set!
;; starting set!
;; starting mag
lda 6 0
len 0 0
;; ending mag
sta 7 0
;; ending set!
;; ending cond
;; ending cond
;; starting set!
;; starting +
lda 5 0
ldl 1 0
add 0 0
;; ending +
sta 5 0
;; ending set!
;; starting <
lda 5 0
ldl 532 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 2 0
jmr -72 0
;; ending loop
;; starting set!
ldl 512 0
sta 5 0
;; ending set!
;; starting write!
;; starting +
;; starting *
;; starting read
ldi 9 0
;; ending read
ldl 0.9900000095 0
mul 0 0
;; ending *
;; starting *
;; starting +
;; starting *
;; starting -
;; starting read
ldi 4 0
;; ending read
ldlv 0 0
nop 0 0
sub 0 0
;; ending -
ldl 0.05000000075 0
mul 0 0
;; ending *
;; starting *
;; starting normalise
lda 6 0
nrm 0 0
;; ending normalise
ldl -0.150000006 0
mul 0 0
;; ending *
add 0 0
;; ending +
ldl 0.009999999776 0
mul 0 0
;; ending *
add 0 0
;; ending +
sti 9 0
;; ending write!
;; starting write!
;; starting +
;; starting read
ldi 9 0
;; ending read
;; starting read
ldi 4 0
;; ending read
add 0 0
;; ending +
sti 4 0
;; ending write!
;; starting set!
ldl 9999 0
sta 7 0
;; ending set!
;; starting set!
;; starting +
lda 9 0
ldl 1 0
add 0 0
;; ending +
sta 9 0
;; ending set!
;; starting set!
;; starting +
lda 4 0
ldl 1 0
add 0 0
;; ending +
sta 4 0
;; ending set!
;; starting <
lda 4 0
ldl 532 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 2 0
jmr -160 0
;; ending loop
;; starting set!
ldl 512 0
sta 4 0
;; ending set!
;; starting set!
ldl 1024 0
sta 9 0
;; ending set!
ldl 1 0
jmz 2 0
jmr -173 0
;; ending loop
;; ending let

Ouya development experiments

The Ouya is a tiny games console designed for promoting indie games rather than traditional high budget productions. It’s cheap compared to standard games hardware, and all the games are free to play, at least in demo form. It’s very easy to start making games for, as it’s based on Android – you can just plug in a USB cable and treat it the same as any other Android device. You also don’t need to sign anything to start building stuff – it’s just a case of adding one line to your AndroidManifest.xml to tell the Ouya that the program is a game:

 <category android:name="tv.ouya.intent.category.GAME"/>

and adding a 732×412 icon in “res/drawable-xhdpi/ouya_icon.png” so it shows up on the menu.
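For context, the category line lives inside the main activity’s intent filter, alongside the usual launcher entries (the activity name here is just a placeholder):

<activity android:name=".MainActivity">
  <intent-filter>
    <action android:name="android.intent.action.MAIN"/>
    <category android:name="android.intent.category.LAUNCHER"/>
    <!-- the one Ouya-specific line -->
    <category android:name="tv.ouya.intent.category.GAME"/>
  </intent-filter>
</activity>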

130327_ouya_0021

There is a lot to like about the Ouya’s philosophy, so I tried out some graphics experiments with it to get better acquainted:

This program was made using Jellyfish, part of my increasingly odd rendering stack which started out as a port of Fluxus to the PS2. It’s a type of microcode written inside TinyScheme and running in a separate virtual machine, which makes it possible to process a lot of geometry without garbage collection overheads. At some point I might write a compiler for this, but writing the code longhand for the moment means I can tweak the instruction set and get a better understanding of how to use it. Here is the microcode for the ribbons above; they run at 20,000 cycles per frame each (so about 1.2MHz):

;; register section
   8 20000 0 ;; control (pc, cycles, stack)
   mdl-size prim-tristrip 1 ;; graphics (size, primtype, renderhints)
   0 0 0 ;; pos
   0 0 0 ;; sensor addr

   ;; program data section
   mdl-start 0 0     ;; 4 address of current vertex
   mdl-start 0 0     ;; 5 address of accum vertex (loop)
   0 0 0             ;; 6 influence
   0 0 0             ;; temp

   ;; code section 
   ;; add up differences with every other vertex
   ldi  4  0         ;; load current vertex
   ldi  5  0         ;; load accum vertex
   sub  0  0         ;; get the difference
   lda  6  0         ;; load the accumulation
   add  0  0         ;; accumulate
   nrm  0  0         ;; normalise
   sta  6  0         ;; store accumulation
   add.x 5 1         ;; inc accum address

   ;; accumulation iteration 
   lda  5  0         ;; load accum address
   ldl  mdl-end 0    ;; load end address
   jlt  2  0         ;; exit if greater than model end (relative address)
   jmp  8  0         ;; return to accum-loop

   ;; end accum loop
   ;; push away from other verts
   lda  6  0         ;; load accum
   ldl -0.1 0        ;; reverse & make smaller
   mul 0 0

   ;; attract to next vertex
   ldi 4 0           ;; load current
   ldi 4 1           ;; load next
   sub 0 0           ;; get the difference
   ldl 0.4 0
   mul 0 0           ;; make smaller
   add 0 0           ;; add to accum result

   ;; do the animation
   ldi 4 0           ;; load current vertex
   add 0 0           ;; add the accumulation
   sti 4 0           ;; write to model data

   ;; reset the outer loop
   ldl 0 0           ;; load zero
   sta 6 0           ;; reset accum
   ldl mdl-start 0   
   sta 5 0           ;; reset accum address

   add.x 4 1         ;; inc vertex address
   lda 4  0          ;; load vertex address
   ldl mdl-end 0     ;; load end address
   jlt 2  0          ;; if greater than model (relative address)
   jmp 8  0          ;; do next vertex

   ;; end: reset current vertex back to start
   ldl mdl-start 0
   sta 4 0

   ;; waggle around the last point a bit
   lda mdl-end 0     ;; load vertex pos
   rnd 0 0           ;; load random vert
   ldl 0.5 0         
   mul 0 0           ;; make it smaller
   add 0 0           ;; add to vertex pos
   sta mdl-end 0     ;; write to model

   jmp 8 0

Genetic programming egg patterns in HTML5 canvas

Part of the ‘Project Nightjar’ camouflage work I’m doing for the Sensory Ecology group at Exeter University is designing citizen science games we can use to do some of the research. One plan is to create lots of patterns in the browser that we can run perceptual models on for different predator animals, and use an online game to compare them with human perception.

The problem is that the per-pixel processing needed to generate the variety of patterns we want is hard to do quickly in Javascript, so I’m looking at using genetic programming to build image processing trees from the HTML5 canvas image composition modes. You can see it in action here, and the source is here. Here are some example eggs built from very simple test images blended together; these are just random programs, but a couple of them are pretty good:

random-eggs

A nice aspect of this is that it’s easy to artistically control the patterns by changing the starting images – much more naturalistic patterns would result from noisier, less geometric base images and more earthy colours. This way we can have different ‘themes’ for levels in a game, for example.

I’m using the scheme compiler I wrote for planet fluxus to do this, and building trees that look like this:


'("op"
  "lighter"
  ("op"
   "source-in"
   ("op"
    "source-out"
    ("terminal" (124 57 0 1) "stripe-6")
    ("terminal" (42 23 0 1) "dots-6"))
   ("op"
    "copy"
    ("terminal" (36 47 0 1) "stripe-7")
    ("terminal" (8 90 1.5705 1) "red")))
  ("terminal" (108 69 0 1) "green"))

Ops are the blend mode operations, and terminals are images together with a transform (translation, rotation and scale – scale is not currently used). The egg trees get drawn with the function below, which shows the curious hybrid mix of HTML5 canvas and Scheme I’m using these days (and which some people may find offensive 🙂). Next up is to actually do the genetic programming part: mutating and doing crossover on the trees.


(define (draw-egg ctx x y program)
  (if (eq? (program-type program) "terminal")
      (begin
        (set! ctx.fillStyle
              (ctx.createPattern
               (find-image (terminal-image program) 
                           image-lib) "repeat"))

        ;; centre the rotation
        (ctx.translate 64 64)
        (ctx.rotate 
            (transform-rotate (terminal-transform program)))
        (ctx.translate -64 -64)

        ;; make the pattern translate by moving, 
        ;; drawing then moving back
        (ctx.translate 
            (transform-x (terminal-transform program))
             (transform-y (terminal-transform program)))

        (ctx.fillRect 
            (- 0 (transform-x (terminal-transform program)))
            (- 0 (transform-y (terminal-transform program)))
            (* 127 2) (* 127 2))

        (ctx.translate 
            (- 0 (transform-x (terminal-transform program)))
            (- 0 (transform-y (terminal-transform program)))))
      (begin
        ;; slightly overzealous context saving/restoring
        (ctx.save)
        ;; set the composite operation
        (set! ctx.globalCompositeOperation (operator-type program))
        (ctx.save)
        (draw-egg ctx x y (operator-operand-a program))
        (ctx.restore)
        (ctx.save)
        (draw-egg ctx x y (operator-operand-b program))
        (ctx.restore)
        (ctx.restore))))
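As a sketch of what that mutation step might look like – not the final code (random-float, mutation-rate and random-subtree are hypothetical helpers, while the accessors are the same ones used in draw-egg above) – the idea is to walk the tree and occasionally swap a node for a freshly generated subtree:

(define (mutate-egg program)
  (cond
    ;; with a small probability, replace this node with a new random
    ;; subtree (random-float, mutation-rate and random-subtree are
    ;; hypothetical helpers)
    ((< (random-float) mutation-rate) (random-subtree))
    ;; terminals have no children to recurse into
    ((eq? (program-type program) "terminal") program)
    ;; otherwise keep the op and mutate both operands
    (else (list "op" (operator-type program)
                (mutate-egg (operator-operand-a program))
                (mutate-egg (operator-operand-b program))))))

Crossover can work along similar lines: pick a random subtree in each of two parent programs and swap them over.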

Planet Fluxus

Fluxus now runs in a browser using WebGL. Not much is working yet – (draw-cube), basic transforms, colours and textures. I’ve also built a small site in django so people can share (or perhaps more likely, corrupt) each other’s scripts. I was also much inspired by seeing a load of great live coding at the algoraves by Davide Della Casa and Guy John using livecodelab.

fluxuswebgl2

fluxuswebgl

This is a spin-off from the work I did a few weeks ago on a silly Scheme to Javascript compiler. It’s still pretty silly, but in order to explain it better, first we take a scheme program like this:

;; a tree
(define (render n)
    (when (not (zero? n))
        (translate (vector 0 1 0))
        (with-state
            (scale (vector 0.1 1 0.1))
            (draw-cube))
        (scale (vector 0.8 0.8 0.8))
        (with-state
            (rotate (vector 0 0 25))
            (render (- n 1)))
        (with-state
            (rotate (vector 0 0 -25))
            (render (- n 1)))))

(every-frame 
    (with-state
        (translate (vector 0 -3 0))
        (render 8)))

Then we parse it straight into JSON, so lists become Javascript arrays and everything else is a string, also doing minor things like switching “-” to “_”:

[["define",["render","n"],
    ["when",["not",["zero_q","n"]],
        ["translate",["vector","0","1","0"]],
        ["with_state",
            ["scale",["vector","0.1","1","0.1"]],
            ["draw_cube"]],
        ["scale",["vector","0.8","0.8","0.8"]],
        ["with_state",
            ["rotate",["vector","0","0","25"]],
            ["render",["-","n","1"]]],
        ["with_state",
            ["rotate",["vector","0","0","-25"]],
            ["render",["-","n","1"]]]]],

["every_frame",
    ["with_state",
    ["translate",["vector","0","-3","0"]],
    ["render","8"]]]]

Next we do some syntax expansion: functions become full lambda definitions, and custom fluxus syntax forms like (with-state) get turned into lets and begins wrapped with state (push) and (pop) calls. These transformations are actually written in Scheme (not quite as define-macros yet), and are compiled at an earlier stage. The program now starts to increase in size:

[["define","render",
    ["lambda",["n"],
        ["when",["not",["zero_q","n"]],
            ["translate",["vector","0","1","0"]],
            ["begin",
                ["push"],
                ["let",[["r",["begin",
                        ["scale",["vector","0.1","1","0.1"]],
                        ["draw_cube"]]]],
                    ["pop"],"r"]],
            ["scale",["vector","0.8","0.8","0.8"]],
            ["begin",
                ["push"],
                ["let",[["r",["begin",
                        ["rotate",["vector","0","0","25"]],
                        ["render",["-","n","1"]]]]],
                    ["pop"],"r"]],
            ["begin",
                ["push"],
                ["let",[["r",["begin",
                        ["rotate",["vector","0","0","-25"]],
                        ["render",["-","n","1"]]]]],
                ["pop"],"r"]]]]],

["every_frame_impl",
    ["lambda",[],
        [["begin",
            ["push"],
            ["let",[["r",["begin",
                    ["translate",["vector","0","-3","0"]],
                    ["render","8"]]]],
            ["pop"],"r"]]]]]

Then, finally, we convert this into a bunch of Javascript closures. It’s pretty hard to unpick what’s going on at this point, and I’m sure there is quite a bit of optimisation possible, though it does seem to work quite well:

var render = function (n) {
    if (!(zero_q(n))) {
        return (function () {
            translate(vector(0,1,0));
            (function () {
                push()
                return (function (r) {
                    pop()
                    return r
                }((function () {
                    scale(vector(0.1,1,0.1))
                    return draw_cube()
                })()))})();
        scale(vector(0.8,0.8,0.8));
        (function () {
            push()
            return (function (r) {
                pop()
                return r
            }((function () {
                rotate(vector(0,0,25))
                return render((n - 1))
            })()))})()
        return (function () {
            push()
            return (function (r) {
                pop()
                return r
            }((function () {
                rotate(vector(0,0,-25))
                return render((n - 1))
            })()))})()})()}};

every_frame_impl(function () {
    return (function () {
        push()
        return (function (r) {
            pop()
            return r
        }((function () {
            translate(vector(0,-3,0))
            return render(8)
        })()))})()})

Then all that’s needed are definitions for all the fluxus 3D graphics calls. The great thing is that these are also written in Scheme, right down to the low level WebGL stuff, so the only Javascript code needed is half of the compiler (eventually this too can be replaced). I was quite surprised at how easy this was, although it is greatly helped by the similarity of the two languages.
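To give a flavour of what those definitions look like, here’s a made-up minimal sketch (not the real fluxus source – gl, cube-buffer and vertex-position are assumed to have been set up during initialisation) of a low level draw call in the same Scheme/Javascript hybrid style:

(define (draw-cube)
  ;; bind the prebuilt cube vertex buffer and point the shader's
  ;; position attribute at it (36 vertices = 12 triangles)
  (gl.bindBuffer gl.ARRAY_BUFFER cube-buffer)
  (gl.vertexAttribPointer vertex-position 3 gl.FLOAT false 0 0)
  (gl.enableVertexAttribArray vertex-position)
  (gl.drawArrays gl.TRIANGLES 0 36))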

Al Jazari – ambient occlusion

A big part of the look of Minecraft comes from its smooth lighting, which includes an illumination model called ambient occlusion. Ambient occlusion darkens areas of an object based on how obscured they are from a wide area light source (an entire sky area, for example) as opposed to a point light source. This is often complicated to calculate in realtime, but with simplified geometric voxels it’s fairly fast to do, building on the code from the hollowing out optimisation I mentioned previously. For each of a cube’s corners you can count up the surrounding empty cubes – the more empty space in the 8 voxels around a corner, the brighter that corner needs to be, so a corner out in the open is fully lit while one tucked in amongst solid blocks goes dark. In this way we are assuming light is coming from all directions (hence ambient) and don’t need to take into account the positions of lights etc.

occ

Here is the raw ambient occlusion lighting in Al Jazari 2, rendered as unlit vertex colours on each visible cube:

aj2-2

Then we add standard direct lighting to the ambient occlusion:

aj2-3

And here is the final image with textures added:

aj2-4

There are a few artefacts in the lighting due to the cubes not all being the same size (in order to minimise the complexity of what is drawn). Here is the world coloured based on the size of the octree leaf node – the brighter blue areas are bigger cubes:

aj2-1

Once the ambient occlusion is calculated we only need to recalculate it when surrounding voxels are changed. In practice the splitting/joining of the octree levels when blocks are created and destroyed seems to take care of this in most cases.

;; returns a list of samples to test
(define (occ-samples pos) 
    (list 
     (vadd pos (vector  -1 -1 -1))
     (vadd pos (vector  -1 -1  0))
     (vadd pos (vector  -1  0 -1))
     (vadd pos (vector  -1  0  0))
     (vadd pos (vector   0 -1 -1))
     (vadd pos (vector   0 -1  0))
     (vadd pos (vector   0  0 -1))
     (vadd pos (vector   0  0  0))))

;; count up the empty voxels and calculate occlusion value
(define (calc-occlusion o pos)
  (let* ((samples (occ-samples pos))
         (coverage (/ (foldl
                       (lambda (p r)
                         (+ (if (octree-leaf-empty? (octree-ref o p)) 1 0) r))
                       0
                       samples)
                     (length samples))))
    (min 1 (* 2 coverage))))

;; helper to set a bunch of verts in one go
(define (pdata-list-set! k l v)
  (for-each (lambda (i) (pdata-set! k i v)) l))

...

;; uses the cube's position and size to set the vertex colour on each corner
(with-primitive my-cube
  (translate pos)
  (pdata-list-set! "c" '(3 7 19) (calc-occlusion o (vadd pos (vector 0 0 0))))
  (pdata-list-set! "c" '(10 14 21) (calc-occlusion o (vadd pos (vector size size size))))
  (pdata-list-set! "c" '(2 4 23) (calc-occlusion o (vadd pos (vector size 0 0))))
  (pdata-list-set! "c" '(9 15 17) (calc-occlusion o (vadd pos (vector 0 size size))))
  (pdata-list-set! "c" '(0 8 18) (calc-occlusion o (vadd pos (vector 0 size 0))))
  (pdata-list-set! "c" '(5 13 22) (calc-occlusion o (vadd pos (vector size 0 size))))
  (pdata-list-set! "c" '(6 12 16) (calc-occlusion o (vadd pos (vector 0 0 size))))
  (pdata-list-set! "c" '(1 11 20) (calc-occlusion o (vadd pos (vector size size 0)))))