More procedural Minecraft architecture

More work on the Python teaching environment we’ll be using next week for the Minecraft workshop at dBsMusic. I’m working on various ideas for procedural architecture, using this as a way to demonstrate what programming is about – thinking procedurally/functionally. The code is online here – I’ll be adding some exercises and course materials over the next few days.

Edit: Documentation on how to use it is now up, and install.sh should work on a Raspberry Pi. Exercises to follow.
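To give a flavour of the procedural thinking we're aiming at, here's a minimal sketch of the kind of exercise this builds towards, using the standard mcpi API (illustrative only – not the actual course code):

from mcpi.minecraft import Minecraft
import math

mc = Minecraft.create()

def tower(x, y, z, height, radius=5):
    # place a ring of stone (block id 1) at each level, one loop
    # inside another -- procedural thinking at its simplest
    for level in range(height):
        for step in range(32):
            angle = 2 * math.pi * step / 32
            mc.setBlock(x + int(radius * math.cos(angle)),
                        y + level,
                        z + int(radius * math.sin(angle)), 1)

pos = mc.player.getTilePos()
tower(pos.x + 10, pos.y, pos.z, 20)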

[image: dbsmcpi-prims]

[image: dbsmcpi-houses2]

Strange terraforming

Working on the upcoming Raspberry Pi programming workshop for dBsCode, I’m wrapping the Minecraft Python API with a functional-style one to reduce the amount of syntax we’ll have to teach. The idea is to build complex 3D shapes, via abstraction, out of simple primitives.
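The wrapper itself lives in the dbsmcpi repo; the flavour of the idea, sketched with made-up names rather than the real API, is something like this – shapes are plain data, and functions combine them before anything touches the world:

from mcpi.minecraft import Minecraft

mc = Minecraft.create()

def cuboid(size, block_id):
    # a primitive: a list of (offset, block) pairs
    w, h, d = size
    return [((x, y, z), block_id)
            for x in range(w) for y in range(h) for z in range(d)]

def shift(offset, shape):
    # move a shape without mutating anything
    ox, oy, oz = offset
    return [((x + ox, y + oy, z + oz), b) for ((x, y, z), b) in shape]

def combine(*shapes):
    return [v for s in shapes for v in s]

def house(w, h, d):
    # a complex shape built out of simple primitives
    return combine(cuboid((w, h, d), 4),                    # cobblestone walls
                   shift((0, h, 0), cuboid((w, 2, d), 5)))  # wooden roof

def render(shape, origin):
    ox, oy, oz = origin
    for ((x, y, z), b) in shape:
        mc.setBlock(ox + x, oy + y, oz + z, b)

pos = mc.player.getTilePos()
render(house(7, 4, 5), (pos.x + 3, pos.y, pos.z))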

[image: dbsmcpi]

The IDE we’re using is Geany, which seems to run well alongside Minecraft on the Raspberry Pi so far. It’s great how Minecraft stays on top of the display at all times – more an unintentional feature of the GPU driver than a design decision, but very useful for teaching.

Scratch -> Lego Mindstorms

A bit of hardware hacking for Troon Primary CodeClub, who have tons of old style Lego Mindstorms they don’t use any more, and who, after a year of Scratch programming on their PCs, are just getting started with Raspberry Pi. We’re using this Scratch modification together with the hardware I’m making, which is based on this circuit. The main component is an L293D motor controller IC, which can drive two DC motors in both directions. You can write the hardware code in Scratch like this to control the Lego motors:

[image: blink11]
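For reference, the same motor logic written in Python with the RPi.GPIO library looks something like this (pin numbers and timings are illustrative – the L293D takes two direction inputs plus an enable pin per motor):

import RPi.GPIO as GPIO
import time

IN1, IN2, EN = 17, 27, 22   # one L293D channel (hypothetical pin choices)

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, EN], GPIO.OUT)

def forward():
    GPIO.output(IN1, GPIO.HIGH)
    GPIO.output(IN2, GPIO.LOW)

def backward():
    GPIO.output(IN1, GPIO.LOW)
    GPIO.output(IN2, GPIO.HIGH)

def stop():
    GPIO.output(EN, GPIO.LOW)

GPIO.output(EN, GPIO.HIGH)  # enable the channel
forward()                   # run the motor one way for two seconds
time.sleep(2)
backward()                  # then the other way
time.sleep(2)
stop()
GPIO.cleanup()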

The trickiest part of this whole endeavour has been physically connecting to the Mindstorms motors. At the moment I’m using crocodile clips, which won’t last long in normal classroom conditions – but I’m wary of destroying or modifying the connectors as they’re not made any more…

[image: IMG_20140317_111808]

Solar powered computation

This is a Raspberry Pi running a Racket servlet (with the Mongoose 2000 data sync server), serving a webpage to the tablet. The neat thing is that it’s running on solar power – a day of gloomy winter Cornish light (not even outside) charging the on-board battery results in over an hour of running time. That includes the ad-hoc wifi transmitter, which is presumably the main power drain, as the CPU usage is negligible.

This system, which I’ll be developing more in the coming months, will be heading to rural India as part of an exciting anthropological research project with Shakti Lamba involving the Aakash tablet and some interesting ubiquitous livecoding experiments.

[image: IMG_20140223_111740]

Raspberry Pi: Built for graphics livecoding

I’m working on a top secret project for Sam Aaron of Meta-eX fame involving the Raspberry Pi, and at the same time thinking about my upcoming CodeClub lessons this term – we have a bunch of new Raspberry Pis to use, and the kids are at the point where they want to move on from Scratch.

This is a screenshot of the same procedural landscape demo previously running on Android/OUYA, now running on the Raspberry Pi, with mangled texture colours and a cube added via a new livecoding repl:

[image: IMG_20140108_232857]

Based on my previous experiments, this program uses the Raspberry Pi’s GPU (the VideoCore IV part of the BCM2835). It’s fast, it allows compositing on top of whatever else you are running at the time, and you can run it without X windows for more CPU and memory – sounds like a great graphics livecoding GPU to me!

Here’s a close up of the nice dithering on the texture – not sure yet why the colours are so different from the OUYA version, perhaps a dodgy blend mode or a PNG format reading difference:

[image: IMG_20140108_232914]

The code is here (bit of a mess, I’m in the process of cleaning it all up). You can build it in the jni folder by calling “scons TARGET=RPI”. This is another attempt – looks like my objects are inside out:

[image: IMG_20140109_004111]

Mongoose 2000

A screenshot from the Mongoose 2000 project – we now have most of the ‘pup focal’ interfaces working and syncing their data via the Raspberry Pi. This is the interface for recording a pup aggression event, including the identity of the aggressive mongoose and some information on the cause and severity. Each mongoose has a code, and we’re using sliding toggle button interfaces for quickly picking them – these can be filtered to restrict them to adults, pups, males or females where required.

[image: pupaggr]

The interface was written using “starwisp” – my system for building Android applications in Scheme. The Mongoose 2000 app has lots of reusable interfaces, so it’s mostly constructed from fragments. There are no specialised database tables, so I can simply add or modify the widgets here and the data automagically appears in the Raspberry Pi export, which makes it very fast to build. I’ve abstracted the mongoose button grid selectors and tristate buttons (yes/no/maybe), as they are used in a lot of places. Here is the entire definition of the fragment for the interface above – the code includes everything for creating and recording the database entity for this event, and all the Android callbacks it needs to respond to external events.

(fragment
   "ev-pupaggr"
   
   ;; define the interface layout first
   (linear-layout
    (make-id "") 'vertical fillwrap pf-col
    (list
     (mtitle "title" "Event: Pup aggression")
     (build-grid-selector "pf-pupaggr-partner" 
                          "single" "Aggressive mongoose")
     (linear-layout
      (make-id "") 'horizontal 
      (layout 'fill-parent 100 '1 'left 0) trans-col
      (list
       (vert
        (mtext "" "Fighting over")
        (spinner (make-id "pf-pupaggr-over") 
                 (list "Food" "Escort" "Nothing" "Other") fillwrap
                 (lambda (v)
                   (entity-add-value! "over" "varchar" v) '())))
       (vert
        (mtext "" "Level")
        (spinner (make-id "pf-pupaggr-level") 
                 (list "Block" "Snap" "Chase" "Push" "Fight") fillwrap
                 (lambda (v)
                   (entity-add-value! "level" "varchar" v) '())))
       (tri-state "pf-pupaggr-in" "Initiate?" "initiate")
       (tri-state "pf-pupaggr-win" "Win?" "win")))
     (spacer 20)
     (horiz
      (mbutton "pf-pupaggr-done" "Done"
        (lambda ()
          (entity-add-value! "parent" "varchar" 
            (get-current 'pup-focal-id ""))
          (entity-record-values db "stream" "pup-focal-pupaggr")
          (list (replace-fragment (get-id "event-holder") 
                                  "events"))))
      (mbutton "pf-pupaggr-cancel" "Cancel"
        (lambda ()
          (list (replace-fragment (get-id "event-holder") 
                                  "events")))))))

   ;; define the fragment's event callbacks 
   (lambda (fragment arg) ;; on create, return layout for building
     (activity-layout fragment))

   ;; on start - update contents from the db
   (lambda (fragment arg)  
     (entity-reset!)
     (list
      (populate-grid-selector
       "pf-pupaggr-partner" "single" ;; select single mongoose
       (db-mongooses-by-pack) #t     ;; from the whole pack 
       (lambda (individual)          ;; <- called when selected
         (entity-add-value! "id-with" "varchar" 
           (ktv-get individual "unique_id")) 
         (list)))
      ))

   (lambda (fragment) '()) ;; on stop
   (lambda (fragment) '()) ;; on resume
   (lambda (fragment) '()) ;; on pause
   (lambda (fragment) '())) ;; on destroy

Jellyfish: A daft new language is born

After trying, and failing, to write a flocking system in jellyfish bytecode by hand, I wrote a compiler based on the prototype betablocker one. It reads a scheme-ish imperative language and generates the bytecode (which is also invented, and implemented in C++). It only took a couple of evenings and a train journey to write, and it even seems to work.

[image: jellyfish]

The basic idea is to walk through the code tree described by the scheme lists, generating bits of bytecode that fit together. Let’s take logical “not” as an example. Like GPU processors, the only datatype is a vector of 3 floats, and we define false as 0 in the x position and anything else in x as true (ignoring whatever is in y or z). There is no single instruction for “not”, so we have to build it from the other instructions. For example, this bit of code:

(not (vector 0 0 0))

should return (vector 1 0 0). As we walk the tree of lists we check the first element and dispatch to a set of functions, one for each type of (higher level) instruction, each of which ‘emits’ a list containing the bytecode required. The one for ‘not’ looks like this, where x is the whole expression, e.g. ‘(not (vector 0 0 0))’:

(define (emit-not x)
  (append
   (emit-expr (cadr x))
   (emit (vector jmz 3 0))
   (emit (vector ldl 0 0))
   (emit (vector jmr 2 0))
   (emit (vector ldl 1 0))))

The first thing it does is emit all the instructions required for the expression passed as the second element of ‘x’, via ‘emit-expr’. With our simple example this just pushes (vector 0 0 0) onto the stack, but it could be a whole load of complicated nested expressions, and it would work the same.
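The ‘emit-expr’ dispatch itself isn’t shown above; transliterated into a Python sketch (the opcode names are from this post, everything else is made up), the shape of it is roughly:

def emit_vector(x):
    # push a literal vector onto the stack
    return [("ldlv", x[1:], 0)]

def emit_not(x):
    # compile the argument first, then append the not-building bytecode
    return (emit_expr(x[1]) +
            [("jmz", 3, 0),    # if top of stack is false, skip to the 1
             ("ldl", 0, 0),    # ...otherwise push false
             ("jmr", 2, 0),    # and jump over the true case
             ("ldl", 1, 0)])   # push true

def emit_expr(x):
    # dispatch on the head of the expression list
    emitters = {"not": emit_not, "vector": emit_vector}
    return emitters[x[0]](x)

print(emit_expr(["not", ["vector", 0, 0, 0]]))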

After the expression’s code, ‘emit-not’ appends this bytecode:

jmz 3 0 ;; if top of stack is 0, jump forward 3 instructions (ldl 1 0)
ldl 0 0 ;; load 0 onto the stack
jmr 2 0 ;; jump forward 2 instructions (skip to next code section)
ldl 1 0 ;; load 1 onto the stack

So this just checks (and removes) the top element on the stack and pushes the opposite logical value. Pushing a single float, as the ‘ldl’ (load literal) instructions above do, expands to a vector value internally – it’s just a convenience. Some instructions (such as those involving vector maths) map to a single bytecode instruction; others, like conditionals or loops, are a bit trickier, as they need to count instructions to skip over variable-length sections of program.
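Continuing the Python sketch above, a conditional emitter shows why the counting matters – the jump offset over the body isn’t known until the body has been emitted:

def emit_if(test, body):
    # hypothetical emitter: run the test, skip the body if it's false
    test_code = emit_expr(test)
    body_code = emit_expr(body)
    # jmz has to hop over the whole body, so the offset is the
    # emitted body length plus one (landing on whatever follows)
    return test_code + [("jmz", len(body_code) + 1, 0)] + body_code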

We add variables in the form of ‘let’, which map to addresses at the start of memory, and ‘read’ and ‘write’ for accessing model memory, like array lookups. The full flocking system looks like this, and animates a points primitive in OpenGL:

(let ((vertex 512) 
      (accum-vertex 512)
      (closest 9999)
      (closest-dist 9999)
      (diff 0)
      (vel 1024))
  (loop 1 ;; infinite loop
    (loop (< vertex 532) ;; for every vertex
      ;; find the closest vertex
      (loop (< accum-vertex 532) 
        (cond 
          ;; if they're not the same vert
          ((not (eq? accum-vertex vertex))
           ;; get vector between the points
           (set! diff (- (read vertex) (read accum-vertex)))
           (cond 
             ;; if it's closer so far
             ((< (mag diff) closest-dist)
              ;; record vector and distance
              (set! closest diff)
              (set! closest-dist (mag closest))))))
        (set! accum-vertex (+ accum-vertex 1)))
      ;; reset accum-vertex for next time
      (set! accum-vertex 512)

      ;; use closest to do the flocking, add new velocity 
      ;; to old (to add some inertia)
      (write! vel (+ (* (read vel) 0.99)
                     ;; attract to centre
                     (* (+ (* (- (read vertex) (vector 0 0 0)) 0.05)
                           ;; repel from closest vertex
                           (* (normalise closest) -0.15)) 0.01)))
      ;; add velocity to vertex position
      (write! vertex (+ (read vel) (read vertex)))

      ;; reset and increment stuff
      (set! closest-dist 9999)
      (set! vel (+ vel 1))
      (set! vertex (+ vertex 1)))
    ;; reset for main loop
    (set! vertex 512)
    (set! vel 1024)))

This compiles to 112 vectors of bytecode (I should call it vectorcode really), with extra debugging information added so we can see the start and end of each higher-level instruction. It all looks like this – which, most importantly, I didn’t need to write by hand!

10 30000 0 ;; top memory positions are for registers controlling 
512 2 1    ;; program and graphics state (primitive type and number of verts)
nop 0 0    ;; space
nop 0 0    ;; for all
nop 0 0    ;; the variables
nop 0 0    ;; we use 
nop 0 0    ;; in the program
nop 0 0
nop 0 0
nop 0 0
;; starting let  <- program starts here
ldl 512 0        ;; load all the 'let' variable data up
sta 4 0
ldl 512 0
sta 5 0
ldl 9999 0
sta 6 0
ldl 9999 0
sta 7 0
ldl 0 0
sta 8 0
ldl 1024 0
sta 9 0
;; starting loop  <- start the main loop
;; starting loop
;; starting loop
;; starting cond
;; starting not
;; starting eq?
lda 5 0
lda 4 0
sub 0 0
jmz 3 0
ldl 0 0
jmr 2 0
ldl 1 0
;; ending eq?
jmz 3 0
ldl 0 0
jmr 2 0
ldl 1 0
;; ending not
jmz 38 0
;; starting set!
;; starting -
;; starting read
ldi 4 0
;; ending read
;; starting read
ldi 5 0
;; ending read
sub 0 0
;; ending -
sta 8 0
;; ending set!
;; starting cond
;; starting <
;; starting mag
lda 8 0
len 0 0
;; ending mag
lda 7 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 12 0
;; starting set!
lda 8 0
sta 6 0
;; ending set!
;; starting set!
;; starting mag
lda 6 0
len 0 0
;; ending mag
sta 7 0
;; ending set!
;; ending cond
;; ending cond
;; starting set!
;; starting +
lda 5 0
ldl 1 0
add 0 0
;; ending +
sta 5 0
;; ending set!
;; starting <
lda 5 0
ldl 532 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 2 0
jmr -72 0
;; ending loop
;; starting set!
ldl 512 0
sta 5 0
;; ending set!
;; starting write!
;; starting +
;; starting *
;; starting read
ldi 9 0
;; ending read
ldl 0.9900000095 0
mul 0 0
;; ending *
;; starting *
;; starting +
;; starting *
;; starting -
;; starting read
ldi 4 0
;; ending read
ldlv 0 0
nop 0 0
sub 0 0
;; ending -
ldl 0.05000000075 0
mul 0 0
;; ending *
;; starting *
;; starting normalise
lda 6 0
nrm 0 0
;; ending normalise
ldl -0.150000006 0
mul 0 0
;; ending *
add 0 0
;; ending +
ldl 0.009999999776 0
mul 0 0
;; ending *
add 0 0
;; ending +
sti 9 0
;; ending write!
;; starting write!
;; starting +
;; starting read
ldi 9 0
;; ending read
;; starting read
ldi 4 0
;; ending read
add 0 0
;; ending +
sti 4 0
;; ending write!
;; starting set!
ldl 9999 0
sta 7 0
;; ending set!
;; starting set!
;; starting +
lda 9 0
ldl 1 0
add 0 0
;; ending +
sta 9 0
;; ending set!
;; starting set!
;; starting +
lda 4 0
ldl 1 0
add 0 0
;; ending +
sta 4 0
;; ending set!
;; starting <
lda 4 0
ldl 532 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 2 0
jmr -160 0
;; ending loop
;; starting set!
ldl 512 0
sta 4 0
;; ending set!
;; starting set!
ldl 1024 0
sta 9 0
;; ending set!
ldl 1 0
jmz 2 0
jmr -173 0
;; ending loop
;; ending let
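The VM that runs this is written in C++; purely to make the jump instructions above easier to follow, here’s a toy Python sketch of its dispatch loop (vector maths and most opcodes elided, values treated as plain numbers):

def run(program, memory):
    pc, stack = 0, []
    while pc < len(program):
        op, a, _ = program[pc]
        if op == "ldl": stack.append(a)              # load literal
        elif op == "lda": stack.append(memory[a])    # load from address
        elif op == "sta": memory[a] = stack.pop()    # store to address
        elif op == "add": stack.append(stack.pop() + stack.pop())
        elif op == "jmr": pc += a; continue          # jump relative
        elif op == "jmz":                            # jump if zero
            if stack.pop() == 0:
                pc += a
                continue
        pc += 1
    return stack

Feeding it the five-instruction ‘not’ sequence from earlier – ldl 0, jmz 3, ldl 0, jmr 2, ldl 1 – leaves a 1 on the stack, as it should.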

Mongoose 2000

Mongoose 2000 is a system I’m developing for the Banded Mongoose Research Project. It’s a behavioural recording system for use in remote areas with sporadic internet or power. The field site is in the Ugandan countryside and the system needs to run over long time frames, so there are big challenges in setting it up and debugging it remotely.

In order to make this work we’re using a Raspberry Pi as a low power central wifi node, allowing Android tablets to communicate with each other and synchronise data. There are a few types of observations we need to record:

  1. Pack composition: including presence in the pack, individual weights and pregnancy state.
  2. Pup focal: studies of individual pups – who’s feeding them, when they feed themselves, and when they’re playing.
  3. Group events: warning calls, moving locations, fights with other packs.

We also need to store and manage the pack information – the names, collar and chip IDs of individual animals. The data is passed around a bit like this:

[image: web]

The interface design on the tablets is very important – things may happen quickly, often at the same time (for instance a group event happening in the middle of a pup focal observation), so we need multiple simultaneous things on screen, and the priority has to be on responsiveness and speed rather than initial ease of use. For these reasons it has similarities to live music performance interfaces. We can also take advantage of the storage on the tablets to duplicate the data held on the Raspberry Pi, adding redundancy. Data is transferred from the field site by downloading the entire database onto the Android tablets, which can then be emailed over the normal internet – either when the local connection is working, or by taking the tablets into the nearby town where bandwidth is better.

[image: IMG_20131105_185918]

The project is a mix of cheap, replaceable hardware and mature, well-used software. Raspberry Pis mean we can afford a backup or two on site, along with plenty of replacement SD cards with the OS cloned. The observation software can also be updated over the Android Play store (for bug fixes, or to change the data gathered) without any changes required on the Raspberry Pi. The platform is based on the one I built for the ‘Crap App’, along with experimental work on bike-mounted wifi nodes with Kaffe Matthews, and includes SQLite for the underlying database on both platforms (providing atomic writes and journalling), with TinyScheme on Android and Racket on the Raspberry Pi, which lets me share a lot of the code between the two.
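None of the Mongoose 2000 code is in Python, but the property we’re leaning on is easy to show with Python’s sqlite3 module – each write happens inside a journalled transaction, so a power cut mid-write can’t half-record an observation (the table and values here are made up):

import sqlite3

db = sqlite3.connect("mongoose.db")
db.execute("CREATE TABLE IF NOT EXISTS stream (key TEXT, value TEXT)")
with db:  # one atomic, journalled transaction: all or nothing
    db.execute("INSERT INTO stream VALUES (?, ?)", ("pup-focal", "data"))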

Sonic Bike Hacklab Part 3: The anti-cloud – towards bike to bike mesh networking

[image: IMG_20130726_122857]

[Continued from part 2] One of the philosophies that pre-dates my involvement with the sonic bikes project is a refusal of cloud technologies – avoiding the use of a central server and providing everything required (map, sounds and computation) on board the bikes. As the problems with cloud technology become better known, art projects like this are a good way to creatively prototype alternatives.

The need to “get the bikes to talk to one another”, beyond our earlier FM transmission experiments, implies some kind of networking, and mesh networking provides non-hierarchical peer-to-peer communication – appropriate if you want to form networks between bikes on the street over wifi, which may cluster at times, then break up or reform as people go in different directions. After discussing this a bit with hacklab participant and fellow Beagleboard enthusiast Adam Parkinson, I thought this would be a good thing to spend some time researching.

The most basic networking information we can detect with wifi is the presence of a particular bike, so we decided to prototype this first. I’d previously got hold of a low power USB wifi module compatible with the Raspberry Pi (which, it turns out, I could power from the bike’s Beagleboard USB!), and we could use an Android phone on another bike, running fluxa, to plug the signal strength into a synth parameter:

[image: mesh]

It’s fairly simple to make an ad-hoc network on the Raspberry Pi via command line:

ifconfig wlan0 down
iwconfig wlan0 channel 4
iwconfig wlan0 mode ad-hoc
iwconfig wlan0 essid 'bikemesh'
iwconfig wlan0 key password
ifconfig wlan0 192.168.2.1

On the Android side, the proximity synth software continuously measures the strength of the wifi network from the other bike, using a WiFiScanReceiver we set up like so:

wifi = (WifiManager) getSystemService(Context.WIFI_SERVICE);
wifi.startScan();
registerReceiver(new WiFiScanReceiver(), 
                 new IntentFilter(
                 WifiManager.SCAN_RESULTS_AVAILABLE_ACTION));

The WiFiScanReceiver is a subclass of BroadcastReceiver that re-triggers the scan process each time results arrive. This gives reasonably high frequency scanning – a couple of scans a second or so – and we check the SSIDs of the networks around the bike for the “bikemesh” node:

import java.util.List;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.net.wifi.ScanResult;
import android.net.wifi.WifiManager;
import android.util.Log;

public class WiFiScanReceiver extends BroadcastReceiver {
    private static final String TAG = "WiFiScanReceiver";

    public WiFiScanReceiver() {
        super();
        Level=0;
    }

    static public int Level;

    @Override
    public void onReceive(Context c, Intent intent) {
        List<ScanResult> results = ((Earlobes)c).wifi.getScanResults();

        for (ScanResult result : results) {
            if (result.SSID.equals("bikemesh")) {
                // record the signal strength for the synth to read
                Level = result.level;
                Log.d(TAG, String.format("bikemesh located: strength %d",
                                         Level));
            }
        }
        ((Earlobes)c).wifi.startScan();
    }

}

The synth also used the accelerometers, but it ramped up the cutoff frequency of a low pass filter on some white noise, and increased the modulation on the accelerometer-driven sine waves, when you were close to the other bike. The result was quite surprising for such a simple setup, as it immediately turned into a game playing situation – bike “hide and seek”. As rider of the proximity synth bike you wanted to hunt out where the wifi bike was, whose rider would be trying to escape. The range was surprisingly long, about halfway across London Fields park. Here is an initial test of the setup (we had the sounds a bit more obvious than this in later tests):
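The fluxa patch itself isn’t shown here, but the mapping is simple enough to sketch in Python – Android reports wifi levels in dBm, roughly -90 (weak) to -30 (strong), and we want the filter cutoff to rise as the other bike gets closer (the ranges and names are illustrative):

def rssi_to_cutoff(level_dbm, lo=100.0, hi=4000.0):
    # normalise -90..-30 dBm to 0..1, clamped
    t = (level_dbm + 90) / 60.0
    t = max(0.0, min(1.0, t))
    # closer bike -> stronger signal -> higher cutoff in Hz
    return lo + t * (hi - lo)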

With the hardware and some simple software tested, the next stage is to run multiple wifi nodes and get them to connect and form a mesh network. I got some way into using Babel for this, which is very self-contained and compiles and runs on both the Beagleboard and the Raspberry Pi. The other side of this is what kind of things we want to do with such an “on the road” system – how do we notate and artistically control what happens over a sonic bike mesh network?

Some ideas we had included recording sounds and passing them between bikes, or each bike forming a synth node, so you create and change audio depending on who is around you and what the configuration is. I made a few more notes on the technical stuff here.