Clip of a small and extremely fun livecoding jam at foam kernow alongside Federico Reuben and Francesca Sargent:
Tag: fluxus
Evolving butterflies game released!
The Heliconius Butterfly Wing Pattern Evolver game is finished and ready for its debut as part of the Butterfly Evolution Exhibit at the Royal Society Summer Exhibition 2014. Read more about the scientific context on the researcher’s website, and click the image above to play the game.
The source code is here. It’s the first time I’ve used WebGL for a game, and it’s using the browser version of fluxus. It worked out pretty well – even to the extent that the researchers could edit the code themselves to add new explanation screens for the genetics. Like any production code it has its niggles; here’s the function to render a butterfly:
(define (render-butterfly s)
  (with-state
   ;; set tex based on index
   (texture (list-ref test-tex (butterfly-texture s)))
   ;; move to location
   (translate (butterfly-pos s))
   ;; point towards direction
   (maim (vnormalise (butterfly-dir s)) (vector 0 0 1))
   (rotate (vector 0 90 90)) ;; angle correctly
   (scale (vector 0.5 0.5 0.5)) ;; make smaller
   (draw-obj 4) ;; draw the body
   (with-state ;; draw the wings in a new state
    (rotate (vector 180 0 0))
    (translate (vector 0 0 -0.5)) ;; position and angle right
    ;; calculate the wing angle based on speed
    (let ((a (- 90 (* (butterfly-flap-amount s)
                      (+ 1 (sin (* (butterfly-speed s)
                                   (+ (butterfly-fuzz s) (time)))))))))
      (with-state
       (rotate (vector 0 0 a))
       (draw-obj 3)) ;; draw left wing
      (with-state
       (scale (vector 1 -1 1)) ;; flip
       (rotate (vector 0 0 a))
       (draw-obj 3)))))) ;; draw right wing
There is only immediate mode rendering at the moment, so the transforms are not optimised, and little things need fixing – for example, draw-obj takes the id of a preloaded chunk of geometry rather than specifying it by name. However it works well, and the most successful part was welding together the Nightjar Game Engine (HTML5 canvas) with fluxus (WebGL) and using them together. This works by having two canvas elements drawn over each other – all the 2D (text, effects and graphs) is drawn using canvas, and the butterflies are drawn in 3D with WebGL. The two render loops run simultaneously, with some extra commands to get the canvas pixel coordinates of objects drawn in 3D space.
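The “extra commands” boil down to projecting a 3D point through the same transform WebGL uses and converting the result to pixel coordinates on the 2D canvas. Here is a minimal sketch of that projection – the function name and exact pipeline are illustrative assumptions, not fluxus’s actual code, but the column-major matrix layout matches WebGL convention:

```javascript
// Project a 3D world-space point to 2D canvas pixel coordinates, so
// 2D overlays can line up with objects drawn in WebGL. "mvp" is the
// combined model-view-projection matrix, 16 numbers, column major.
function projectToCanvas(point, mvp, canvasWidth, canvasHeight) {
  var x = point[0], y = point[1], z = point[2];
  // multiply by the model-view-projection matrix (clip space)
  var cx = mvp[0] * x + mvp[4] * y + mvp[8]  * z + mvp[12];
  var cy = mvp[1] * x + mvp[5] * y + mvp[9]  * z + mvp[13];
  var cw = mvp[3] * x + mvp[7] * y + mvp[11] * z + mvp[15];
  // perspective divide to normalised device coordinates (-1..1)
  var ndcX = cx / cw;
  var ndcY = cy / cw;
  // viewport transform: NDC to pixels, y flipped for the 2D canvas
  return [(ndcX * 0.5 + 0.5) * canvasWidth,
          (1 - (ndcY * 0.5 + 0.5)) * canvasHeight];
}
```

With an identity matrix the world origin lands in the middle of the canvas, which is a handy sanity check when lining the two canvases up.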
A fluxus workshop plan
I’ve been getting some emails asking for course notes for fluxus workshops. I don’t really have anything as structured as that, but I thought it would be good to document something here. I usually follow the first part of the fluxus manual quite closely, trying to flip between visually playful parts and programming concepts. I’ve taught this to teenagers, unemployed people, masters students, professors and artists – it’s very much aimed at first time programmers. I’m also less interested in churning out fluxus users, and more motivated by using it as an introduction to algorithms and programming in general. It’s good to start with an introduction to livecoding: where fluxus comes from, who uses it and what for. I’ve also started discussing the political implications of software and algorithmic literacy too.
So first things first, an introduction to a few key bindings (ctrl-f fullscreen/ctrl-w windowed), then in the console:
- Scheme as calculator – parentheses and nesting simple expressions.
- Naming values with define.
- Naming processes with define to make procedures.
Time to make some graphics, so switch to a workspace with ctrl-1:
- A new procedure to draw a cube.
- Calling this every frame.
- Mouse camera controls, move around the cube.
- Different built in shapes, drawing a sphere, cylinder, torus.
Then dive into changing the graphics state, so:
- Colours.
- Transforms.
- Textures.
- Multiple objects – the graphics state is persistent, like changing a “pen colour”.
- Transform state is applicative (scale multiplies etc).
Then tackle recursion, in order to reduce the size of the code, and make much more complex objects possible.
- A row of cubes.
- Make it bend with small rotation.
- Animation with (time).
At this point they know enough to be able to play with what they’ve learnt for a while, making procedural patterns and animated shapes.
After this it’s quite easy to explain how adding another call creates tree recursion, how (with-state) scopes the state, and then it all goes fractal crazy.
This is generally enough for a 2 hour taster workshop. If there is more time, I go into the scene graph, explain how primitives are built from points and faces, and show how texture coords work. The physics system is also great to demonstrate, as it’s simple to get very different kinds of results from it.
Planet Fluxus
Fluxus now runs in a browser using WebGL. Not much is working yet – (draw-cube), basic transforms, colours and textures. I’ve also built a small site in django so people can share (or perhaps more likely, corrupt) each other’s scripts. Also much inspired by seeing a load of great live coding at the algoraves by Davide Della Casa and Guy John using livecodelab.
This is a spin off from the work I did a few weeks ago on a silly Scheme to Javascript compiler. It’s still pretty silly, but in order to explain better, first we take a scheme program like this:
;; a tree
(define (render n)
  (when (not (zero? n))
    (translate (vector 0 1 0))
    (with-state
     (scale (vector 0.1 1 0.1))
     (draw-cube))
    (scale (vector 0.8 0.8 0.8))
    (with-state
     (rotate (vector 0 0 25))
     (render (- n 1)))
    (with-state
     (rotate (vector 0 0 -25))
     (render (- n 1)))))

(every-frame
 (with-state
  (translate (vector 0 -3 0))
  (render 8)))
Then parse it straight into JSON, so lists become Javascript arrays and everything else is a string, also doing minor things like switching “-” to “_”:
[["define",["render","n"],
  ["when",["not",["zero_q","n"]],
   ["translate",["vector","0","1","0"]],
   ["with_state",
    ["scale",["vector","0.1","1","0.1"]],
    ["draw_cube"]],
   ["scale",["vector","0.8","0.8","0.8"]],
   ["with_state",
    ["rotate",["vector","0","0","25"]],
    ["render",["-","n","1"]]],
   ["with_state",
    ["rotate",["vector","0","0","-25"]],
    ["render",["-","n","1"]]]]],
 ["every_frame",
  ["with_state",
   ["translate",["vector","0","-3","0"]],
   ["render","8"]]]]
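The parse step can be sketched in JavaScript along these lines. This is a toy version for illustration – the real compiler’s tokeniser and name-mangling rules will differ in detail – but it shows the shape of the idea: split on parentheses, recurse on lists, keep everything else as strings:

```javascript
// Toy s-expression reader: text in, nested arrays of strings out.
// Identifier characters fluxus uses but JavaScript can't ("-", "?")
// are mangled to "_" and "_q"; negative numbers are left alone.
function parse(src) {
  var tokens = src.replace(/\(/g, " ( ")
                  .replace(/\)/g, " ) ")
                  .trim().split(/\s+/);
  var pos = 0;
  function read() {
    var tok = tokens[pos++];
    if (tok === "(") {
      var list = [];
      while (tokens[pos] !== ")") list.push(read());
      pos++; // consume the ")"
      return list;
    }
    // atoms stay as strings, with names mangled for JavaScript
    return tok.replace(/-(?=[a-z])/g, "_").replace(/\?/g, "_q");
  }
  return read();
}
```

Keeping numbers as strings too, as the JSON above does, postpones any decisions about literals until code generation.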
Next we do some syntax expansion, so functions become full lambda definitions, and custom fluxus syntax forms like (with-state) get turned into lets and begins wrapped with state (push) and (pop). These transformations are actually written in Scheme (not quite as define-macros yet), and are compiled at an earlier stage. It now starts to increase in size:
[["define","render",
  ["lambda",["n"],
   ["when",["not",["zero_q","n"]],
    ["translate",["vector","0","1","0"]],
    ["begin",
     ["push"],
     ["let",[["r",["begin",
                   ["scale",["vector","0.1","1","0.1"]],
                   ["draw_cube"]]]],
      ["pop"],"r"]],
    ["scale",["vector","0.8","0.8","0.8"]],
    ["begin",
     ["push"],
     ["let",[["r",["begin",
                   ["rotate",["vector","0","0","25"]],
                   ["render",["-","n","1"]]]]],
      ["pop"],"r"]],
    ["begin",
     ["push"],
     ["let",[["r",["begin",
                   ["rotate",["vector","0","0","-25"]],
                   ["render",["-","n","1"]]]]],
      ["pop"],"r"]]]]],
 ["every_frame_impl",
  ["lambda",[],
   [["begin",
     ["push"],
     ["let",[["r",["begin",
                   ["translate",["vector","0","-3","0"]],
                   ["render","8"]]]],
      ["pop"],"r"]]]]]
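One of these expansions – the (with_state) rewrite into push/let/pop – can be illustrated as a recursive transform over the JSON arrays. The real transformations are written in Scheme, as mentioned above; this JavaScript sketch just shows the tree rewrite itself:

```javascript
// Rewrite every ["with_state", body...] node into the push/let/pop
// form: state is pushed, the body's result is bound to "r" so it can
// be returned after the state is popped again.
function expandWithState(form) {
  if (!Array.isArray(form)) return form;
  // expand children first so nested with_state forms are handled
  var expanded = form.map(expandWithState);
  if (expanded[0] === "with_state") {
    return ["begin",
            ["push"],
            ["let", [["r", ["begin"].concat(expanded.slice(1))]],
             ["pop"], "r"]];
  }
  return expanded;
}
```

Binding the body’s value before popping is what lets (with-state) behave as an expression rather than just a statement block.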
Then, finally, we convert this into a bunch of Javascript closures. It’s pretty hard to unpick what’s going on at this point, I’m sure there is quite a bit of optimisation possible, though it does seem to work quite well:
var render = function (n) {
  if (!(zero_q(n))) {
    return (function () {
      translate(vector(0,1,0));
      (function () {
        push()
        return (function (r) {
          pop()
          return r
        }((function () {
          scale(vector(0.1,1,0.1))
          return draw_cube()
        })()))
      })();
      scale(vector(0.8,0.8,0.8));
      (function () {
        push()
        return (function (r) {
          pop()
          return r
        }((function () {
          rotate(vector(0,0,25))
          return render((n - 1))
        })()))
      })()
      return (function () {
        push()
        return (function (r) {
          pop()
          return r
        }((function () {
          rotate(vector(0,0,-25))
          return render((n - 1))
        })()))
      })()
    })()
  }
};

every_frame_impl(function () {
  return (function () {
    push()
    return (function (r) {
      pop()
      return r
    }((function () {
      translate(vector(0,-3,0))
      return render(8)
    })()))
  })()
})
Then all that’s needed are definitions for all the fluxus 3D graphics calls – the great thing is that these are also written in Scheme, right down to the low level WebGL stuff, so the only Javascript code needed is half of the compiler (eventually this also can be replaced). I was quite surprised at how easy this is, although it is greatly helped by the similarity of the two languages.
Fluxus at Falmouth University
On Friday Cornwall Locative Arts Network, Cornwall Creative Skills and I took over The Academy for Innovation & Research at Falmouth University with a Fluxus workshop, teaching creative coding via recursive procedural 3D modelling to people new to programming. The thing I like most about Scheme as a programming language is that you can very quickly cover the fundamentals of programming (naming values and processes, recursion, and scope) in a 2 hour course with very little time spent on learning syntax or other fiddly things. We had a great spread of attendees, from local artists and designers to lecturers and PhD students, everyone ending up with their own animated procedural shapes.
This was followed by the Cornwall Locative Arts Network meeting at which I presented with Tom Goskar, a digital archaeologist, who uses 3D graphics in fascinating ways to read inscriptions and patterns in ancient monuments. I talked about borrowed scenery, doris and sonic bikes and discussed using ushahidi and beagleboards in artistic and scientific projects.
Later on we were invited by Jowan Sebastian Parker to experience Falmouth University’s MakerNow Lab which is opening soon, providing laser cutters, 3D printers and an electronics workshop. I’m hoping to find time to make extensive use of the lab in the not too distant future! The photo above is of their ‘synths on postcards’ example project.
Fluxus workshop in Falmouth
After rushing around Europe doing a lot of livecoding over the last week (more on that soon) I’m really pleased to announce this workshop closer to home in Falmouth:
CLAN and CREATIVE SKILLS present: CREATIVE CODING taster session for ABSOLUTE BEGINNERS with Dave Griffiths
Friday May 31st 10.30am – 12.30pm
AIR building, Falmouth University, Tremough Campus TR10 9EZ
Places: For 10 people only
Deposit: There are limited places so you will need to pay a £15 deposit to secure your place when you book, which will be refunded before the workshop. If you don’t turn up you lose your deposit. To book: Please email: admin@creativeskills.org.uk
In this workshop for beginners with award winning game designer, creative coder and live coding artist Dave Griffiths you will find out about the emerging art form of live coding and learn how to write simple programmes to create animations in 3D space. You will be introduced to fluxus, an open source game engine for live coding worlds into existence, used by artists, performers and digital practitioners for installations, VJing, games and education.
What will you do? What will you achieve?
You will create 3D animated forms, and have an introduction to fundamental programming concepts, naming of values and processes, recursion and digital representation of colour, 3D shape and texture.
There will be coffees and teas.
At 1pm Dave will talk at a free event:
CLAN – Cornwall Locative/ Interactive Arts Network
1-2pm AIR Sandpit
Speakers: creative coder Dave Griffiths is joined by digital archaeologist Tom Goskar.
Al Jazari – scooping out the insides of planets with scheme
Optimisation is a game where you write more code in order to do less. In Al Jazari 2, doing less means drawing fewer blocks. Contiguous blocks of the same type are already automatically collapsed into single larger ones by the octree – but if we can figure out which blocks are completely surrounded by other blocks, we can save more time by not building or drawing them either.
Here is a large sphere – clipped by the world volume, showing a slice through the internal block structure:
The next version has all internal blocks removed, in this case shaving 10% off the total primitives built and drawn:
The gaps in the sphere from the clipping allow us to look inside at how the octree has optimised the structure. The gain is higher in a more typical Minecraft-style setup, with a reasonably flat floor covering a large number of blocks. Here is the code involved, built on top of a functional library I’m building up to manipulate this kind of data. It maps over each octree leaf, checking all the blocks it touches on each of its six sides, taking into account that the leaf block may be bigger than one unit.
(define (octree-check-edge f o pos size)
  (define (do-x x y)
    (cond
      ((eq? x -1) #f)
      ((octree-leaf-empty? (octree-ref o (vadd pos (f x y)))) #t)
      (else (do-x (- x 1) y))))
  (define (do-y y)
    (cond
      ((eq? y -1) #f)
      ((do-x size y) #t)
      (else (do-y (- y 1)))))
  (do-y size))

(define (octree-is-visible? o pos size)
  (or
   (octree-check-edge (lambda (x y) (vector size x y)) o pos size)
   (octree-check-edge (lambda (x y) (vector -1 x y)) o pos size)
   (octree-check-edge (lambda (x y) (vector x size y)) o pos size)
   (octree-check-edge (lambda (x y) (vector x -1 y)) o pos size)
   (octree-check-edge (lambda (x y) (vector x y size)) o pos size)
   (octree-check-edge (lambda (x y) (vector x y -1)) o pos size)))

(define (octree-calc-viz o)
  (octree-map
   (lambda (v pos size depth)
     (octree-leaf
      (octree-is-visible? o pos size)
      (octree-leaf-value v)))
   o))
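The underlying test is easier to see without the octree machinery. On a plain dense voxel grid it reduces to: a block can be skipped if all six face-neighbours are solid. Here is a sketch of that simplified version (the Set-of-keys grid representation is an assumption for illustration, not how Al Jazari stores the world):

```javascript
// "solid" is a Set of "x,y,z" keys for occupied blocks. A block is
// visible (worth building and drawing) if at least one of its six
// face-neighbours is empty space or outside the world.
function isVisible(solid, x, y, z) {
  var neighbours = [
    [x + 1, y, z], [x - 1, y, z],
    [x, y + 1, z], [x, y - 1, z],
    [x, y, z + 1], [x, y, z - 1]
  ];
  return neighbours.some(function (n) {
    return !solid.has(n.join(","));
  });
}
```

The octree version above does the same thing per leaf, except that a leaf covering an NxNxN region has to check N*N positions on each of its six faces rather than a single neighbour.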
Skate/BMX ramp projection
Jaye Louis Douce, Ruth Ross-Macdonald and I took to the ramps of Mount Hawke skate park in deepest darkest Cornwall to test the prototype tracker/projection mapper (now known as ‘The Cyber-Dog system‘) in its intended environment for the first time. Mount Hawke consists of 20,000 square feet of ramps of all shapes and sizes, an inspiring place for thinking about projections and tracing the flowing movements of skaters and BMX riders.
Finding a good place to mount the projector was the first problem: it was difficult to get it far enough away to cover more than a partial area of our chosen test ramp, even with some creative duct tape application. Meanwhile the Kinect camera was happily tracking the entire ramp, so we’ll be able to fix this by replacing my old battered projector with a better model in a more suitable location.
The next challenge is calibrating the projection mapping to align it with what the camera is looking at. As they are in different places this is quite fiddly and time consuming to get right, some improvements to the fluxus script will make it faster. Here is Jaye testing it once we had it lined up:
Next it was time to recruit some BMX test pilots to give it a go:
At higher speeds it needs a bit of linear interpolation to ‘connect the dots’, as the visualisation runs at 60fps while the tracking is more like 20fps:
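The interpolation itself is simple: each 60fps frame blends between the last two 20fps tracker samples according to how much time has passed. A minimal sketch (the function names and sample-buffering scheme are illustrative assumptions):

```javascript
// Blend linearly between two 2D positions; t runs from 0 to 1.
function lerp(a, b, t) {
  return [a[0] + (b[0] - a[0]) * t,
          a[1] + (b[1] - a[1]) * t];
}

// Each render frame, estimate where the rider is between the previous
// and latest tracked positions, clamping so we never extrapolate ahead.
function smoothedPosition(prev, latest, timeSinceSample, sampleInterval) {
  var t = Math.min(timeSinceSample / sampleInterval, 1);
  return lerp(prev, latest, t);
}
```

This always lags the tracker by up to one sample interval (50ms at 20fps), which is the usual trade-off for interpolating rather than predicting.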
This test proved the fundamental idea, and opens up lots of possibilities, different types of visualisations, recording/replaying paths over time as well as the possibility of identifying individual skaters or BMX riders with computer vision. One great advantage this setup has is once it’s running it will work all the time, with no need for continuous calibration (as with RGB cameras) or the use of any additional tracking devices.
Al Jazari 2 – minecraft meets fluxus
Some screenshots of the in-progress next generation Al Jazari livecoding world. This is a voxel rendered world, inspired in part by Minecraft but with an emphasis on coding robots, in scheme bricks, which construct artefacts from the materials around them. The robot language is still to be designed, but will probably resemble Scratch.
You can ‘jump on board’ the different robots (cycling through them with ‘space’) and program them with commands which include picking up or dropping individual blocks. The program above allows you to control the robot with the ‘w’, ‘a’, ‘s’, ‘d’ keys with ‘z’ to tunnel downwards, and ‘x’ to remove the block in front of the robot.
The world is built quite simply from an octree, which provides an optimised structure for rendering the 64x64x64 level cube in realtime. The view below shows the compression – large areas containing the same material (or empty space) can be represented by leaf nodes terminating the tree early, without needing to store each of the 262,144 1x1x1 cubes. After each edit, the octree may fragment or collapse its tree (via setting new values in 3D space or a ‘compress’ operation). The scheme code can be found here.
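The ‘compress’ idea is the heart of the octree’s space saving: if all eight children of a node are leaves holding the same material, the node can be replaced by a single leaf. A sketch of that collapse (the {value}/{children} node representation here is an assumption for illustration, not the actual Scheme data structure):

```javascript
// Recursively collapse octree nodes: a node whose eight children are
// all leaves with the same value becomes a single leaf covering the
// whole region; anything else keeps its (compressed) children.
function compress(node) {
  if (!node.children) return node; // already a leaf
  var children = node.children.map(compress);
  var first = children[0];
  var collapsible = children.every(function (c) {
    return !c.children && c.value === first.value;
  });
  return collapsible ? { value: first.value } : { children: children };
}
```

Fragmentation is the inverse: setting one 1x1x1 value inside a large leaf splits it back into eight children so the single changed block can be represented.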
Fluxus on the Raspberry Pi
After getting acquainted with the BeagleBoard while working on the Swamp bike opera, I decided to have a look at the similar Raspberry Pi, and particularly its graphics system. The Android/PS2 version of fluxus, called nomadic, is ported after a bit of fiddling, though there’s no mouse or keyboard input yet (build it with ‘scons TARGET=RPI’). The graphics driver for the Pi’s VideoCore GPU doesn’t work quite as you’d normally expect with X11; you get access to it via a custom display manager called dispmanx, which allows crazy things like alpha compositing on top of the X display like this:
Everything you need to develop for the Pi’s GPU can be found inside /opt/vc/ (before finding that I installed a bunch of generic OpenGL ES stuff that wouldn’t work) – you need to use the headers and link to the driver libraries there. There are some useful examples inside the hello_pi directory. One important thing is to call bcm_host_init(); and link with libbcm_host.so to initialise Broadcom’s driver before making any GPU related calls. I started off by trying to port GlutES to the Raspberry Pi, but I got further using the example code – I might come back to that as a way of getting more functionality working along with X11.
I’m also experimenting with a new fluxus editor for nomadic, based on Kassen Oud’s work – a text editor written in fluxus for fluxus that you can see in the screenshot. This will eventually be useful for the standard version too, as it will give much more control over the livecoding environment while livecoding 🙂