Red King progress, and a sonification voting system

We have now launched the Red King simulation website. The fundamental idea of the project is to use music and online participation to help people understand a complex natural process. Dealing with a mathematical model is more challenging than a lot of our citizen science work, where the connection to organisms and their environments is clearer. The basic problem here is how to explore a vast parameter space in order to find patterns in co-evolution.

After some initial experiments we developed a simple prototype native application (OSX/Ubuntu builds) in order to check that we understood the model properly by running and tweaking it.


The next step was to convert this into a library we could bind to Python. With this done we can run the model on a server and have it autonomously update its own website via Django. This way we can continuously run the simulation, storing randomly chosen parameters to build up a database and display the results. I also set up a simple filter that runs the simulation for 100 timesteps and discards parameter sets that don’t look so interesting (ones where the populations go extinct, or that don’t result in multiple host or virus strains).
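As a rough sketch of how that server-side loop fits together (the model interface, parameter names and the “interesting” test here are illustrative, not the actual Red King code):

import random

# Hypothetical sketch: draw random parameters, run the Python-bound model for
# 100 timesteps, and only keep runs that still look interesting afterwards.
def random_parameters():
    return {name: random.uniform(0.0, 1.0)
            for name in ("host_mutation", "virus_mutation", "virulence", "cost")}

def interesting(result):
    # discard runs that died out or collapsed to a single strain
    return (not result["extinct"]
            and result["host_strains"] > 1
            and result["virus_strains"] > 1)

def generate_candidates(run_model, wanted=10):
    # run_model stands in for the simulation library bound from C++
    kept = []
    while len(kept) < wanted:
        params = random_parameters()
        result = run_model(params, timesteps=100)
        if interesting(result):
            kept.append((params, result))   # in practice, saved to the Django database
    return kept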

There is also now a Twitter bot that posts new simulations/sonifications as they appear. One nice thing I’ve found with this is that I can use the bot’s timeline to make notes on changes by tweeting them. It also gives interested people an easy way into the project, and people are already starting discussions with the researchers on Twitter.
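For flavour, the posting side of such a bot might look something like this – a minimal sketch assuming tweepy, with the keys, URL and simulation fields all being placeholders:

import tweepy

CONSUMER_KEY = "..."
CONSUMER_SECRET = "..."
ACCESS_TOKEN = "..."
ACCESS_TOKEN_SECRET = "..."

auth = tweepy.OAuth1UserHandler(CONSUMER_KEY, CONSUMER_SECRET,
                                ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

def announce(sim_id, params):
    # tweet a link to the new simulation/sonification page
    summary = ", ".join("%s=%.2f" % kv for kv in sorted(params.items()))
    status = "New sonification %s (%s) https://example.org/sim/%s" % (sim_id, summary, sim_id)
    api.update_status(status[:280])   # stay inside the tweet length limit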


Up to now this has simply been a presentation of a simulation – how can we involve people so they can help? This is a small project, so we have to be realistic about what is possible, but eventually we need a simple way to test how the perception of a sonification compares with a visual display. Amber’s been doing research into the sonification side of the project here. More on that soon.

For now I’ve added a voting system, where anyone can up- or down-vote the simulation music. This is used as a way to tag patterns for further exploration. Parameter sets are ranked using the votes – the higher the votes, the higher the likelihood of a set being picked as the basis for new simulations. When we pick one, we randomise one of its parameters to generate new audio. Simulations store their parents, so you can explore the hierarchy and see which changes cause different patterns. An obvious addition is to hook up Twitter retweets and favorites for the same purpose.
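A sketch of that selection and mutation step might look like this (the vote weighting and field names are illustrative, not the site’s actual schema):

import random

def pick_parent(simulations):
    # rank by votes – more up-votes means a higher chance of being chosen
    weights = [max(1, sim["votes"] + 1) for sim in simulations]
    return random.choices(simulations, weights=weights, k=1)[0]

def spawn_child(parent):
    params = dict(parent["params"])
    name = random.choice(list(params))       # randomise a single parameter
    params[name] = random.uniform(0.0, 1.0)
    # store the parent so the hierarchy of changes can be explored later
    return {"params": params, "votes": 0, "parent": parent["id"]}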


Building pyramids with code composition

The Al Jazari 2 bots currently have six basic actions – move forwards or backwards, turn 90 degrees left or right, pick up the block underneath them, or drop a block onto the space they are currently sitting on. Given these instructions, how do we procedurally build pyramids (of any given size) like this in their minecraft-esque world?

The Pyramids of Guimar, Canary Islands, Spain
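Before getting into the construction, it helps to have a mental model of those six actions. Here is an illustrative Python version – not the actual Al Jazari 2 implementation (the bots are driven by a Scheme-based language), and it ignores details like the single-block climbing limit:

DIRECTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # north, east, south, west

class Bot:
    def __init__(self, world):
        self.world = world        # {(x, y): number of blocks stacked on that square}
        self.x, self.y = 0, 0
        self.facing = 0           # index into DIRECTIONS

    def step(self, instruction):
        if instruction in ("forward", "backward"):
            dx, dy = DIRECTIONS[self.facing]
            sign = 1 if instruction == "forward" else -1
            self.x, self.y = self.x + sign * dx, self.y + sign * dy
        elif instruction == "left":
            self.facing = (self.facing - 1) % 4
        elif instruction == "right":
            self.facing = (self.facing + 1) % 4
        elif instruction == "pickup":            # take the block underneath the bot
            self.world[(self.x, self.y)] = self.world.get((self.x, self.y), 0) - 1
        elif instruction == "drop":              # drop a block onto the current square
            self.world[(self.x, self.y)] = self.world.get((self.x, self.y), 0) + 1

    def run(self, instructions):
        for instruction in instructions:
            self.step(instruction)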

1. A pyramid can be built as a series of plateaus layered on top of each other; the plateaus can be built from material mined from nearby:

[image: ajplat – a pyramid built as layered plateaus]

2. A single plateau can be built as a series of single-block-wide ridges next to each other, mined from a series of trenches. This is an example ridge/trench of size 3:

[image: ridgetrench – an example ridge/trench of size 3]

3. We need a gap between the ridge and the trench in order to place the plateau in the correct position in the pyramid (also because the bots can only climb a single block at a time, otherwise they get stuck).

So in order to build a complete pyramid, we write the code to build a ridge/trench of any size and figure out the steps in between to get the robot into the right position for the next one. The simplest ridge/trench is a single block long, so let’s try writing some code to do that:

[image: aj2single – code for a single block ridge/trench]

The lambda, bot-sequence etc. are Scheme code required to get the bot language working – we’re just interested in the contents of the “seq”. After running these instructions we’re in the right place for the next block. Note that the majority of the actions are involved with positioning the bot after doing its work. To place the next cube we copy the code and add some more ‘forward’s (as we have to travel a bit further going back and forth):

[image: aj2double – code for a two block ridge/trench]

This is already getting pretty long – we could do with a way to express repetition, so I’ve added a ‘repeat’ form to the language which takes a count, a name bound to the current iteration number (like a ‘for’ loop) and a list of instructions. This is the complete ridge/trench definition for any size, including gaps of any size:

[image: aj2ridgetrench – the complete ridge/trench definition]

The majority of the code is the maths to get the bot picking up and placing blocks further and further apart, including the gap parameter. When collapsed into a function and run, we get this:

[image: aj2ridgetrench-fn – the ridge/trench code collapsed into a function]

The bot ends up in the same place as it started. In order to create a square plateau we call this function, move sideways and repeat, then move back to where we started when we’re done:

[image: aj2plateau – the plateau function]

Going from a plateau function to a pyramid is even shorter, and involves moving inwards diagonally and building smaller plateaus each time. Of course it also mines out a negative pyramid at the same time:

[image: aj2pyramid – the pyramid function]

Here is a time lapse of a massive 8×8 pyramid being built – the code ‘compiles’ to 3224 low-level instructions:

So this is a kind of programming that encourages solving problems through composition of abstractions – from the low-level instructions, through simple loops and primitive building constructs, up to complete structures. I’m not sure why this is somehow sidelined in educational languages such as Scratch (interestingly it isn’t in Logo, its predecessor). Perhaps it’s due to the ubiquity of imperative programming, which leads to a focus on manipulation of state, or to this kind of programming being considered too advanced – but for me it’s fundamental, and I’m pretty sure it wouldn’t be that challenging for the kids in the CodeClub I’m running either.
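To make that layering concrete, here is a rough Python model of the same composition – ridge/trench built from the raw instructions, plateau from ridge/trenches, pyramid from plateaus, all flattening down to one long instruction list. The real code is written in the bots’ Scheme-based language, and the step counts and gap handling here are only approximations of what’s in the screenshots above:

def ridge_trench(length, gap=1):
    # mine a trench `length` blocks long and pile the blocks up `gap` squares away
    steps = []
    for i in range(length):
        travel = gap + 1 + i                           # each block has to travel a bit further
        steps += ["pickup"] + ["forward"] * travel + ["drop"]
        steps += ["backward"] * travel + ["forward"]   # line up over the next block
    return steps + ["backward"] * length               # walk back to the starting square

def plateau(size, gap=1):
    # a plateau is a row of ridge/trenches built side by side
    steps = []
    for _ in range(size):
        steps += ridge_trench(size, gap)
        steps += ["right", "forward", "left"]          # shift sideways to the next ridge
    return steps + ["right"] + ["backward"] * size + ["left"]   # and back to the start

def pyramid(size):
    # stack successively smaller plateaus, stepping inwards diagonally each layer
    steps = []
    while size > 0:
        steps += plateau(size)
        steps += ["forward", "right", "forward", "left"]
        size -= 2
    return steps

print(len(pyramid(8)))   # total number of low-level instructions generated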