Red King progress, and a sonification voting system

We have now launched the Red King simulation website. The fundamental idea of the project is to use music and online participation to help understand a complex natural process. Dealing with a mathematical model is more challenging than a lot of our citizen science work, where the connection to organisms and their environments is clearer. The basic problem here is how to explore a vast parameter space in order to find patterns in co-evolution.

After some initial experiments we developed a simple prototype native application (OSX/Ubuntu builds) to check that we understood the model properly by running and tweaking it.


The next step was to convert this into a library we could bind to Python. With this done we can run the model on a server and have it autonomously update its own website via Django. This way we can continuously run the simulation, storing randomly chosen parameters to build a database and display the results. I also set up a simple filter to run the simulation for 100 timesteps and discard parameters that didn't look so interesting (the ones that went extinct or didn't result in multiple host or virus strains).
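The filtering step can be sketched roughly like this. The function and field names here are illustrative, not the actual API of our library binding; `run_model` stands in for the bound C library:

```python
import random

def run_model(params, timesteps=100):
    """Stand-in for the model library bound to Python: returns the
    final host/virus strain counts for a parameter set (hypothetical API)."""
    rng = random.Random(params["seed"])
    return {"hosts": rng.randint(0, 4), "viruses": rng.randint(0, 4)}

def looks_interesting(result):
    # Discard runs that went extinct...
    if result["hosts"] == 0 or result["viruses"] == 0:
        return False
    # ...or that never diversified into multiple host or virus strains.
    return result["hosts"] > 1 or result["viruses"] > 1

def sample_parameters(n):
    """Randomly sample n parameter sets, run each for 100 timesteps,
    and keep only the interesting ones for the database."""
    kept = []
    for seed in range(n):
        params = {"seed": seed, "virulence": random.random()}
        if looks_interesting(run_model(params, timesteps=100)):
            kept.append(params)
    return kept
```

The surviving parameter sets are what get stored and sonified; everything else is thrown away before it reaches the site.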

There is also now a twitter bot that posts new simulations/sonifications as they appear. One nice thing I've found with this is that I can use the bot timeline to make notes on changes by tweeting them. It also gives interested people an easy way into the project, and people are already starting discussions with the researchers on twitter.


Up to now, this has simply been a presentation of a simulation – how can we involve people so they can help? This is a small project so we have to be realistic about what is possible, but eventually we need a simple way to test how the perception of a sonification compares with a visual display. Amber's been doing research into the sonification side of the project here. More on that soon.

For now I've added a voting system, where anyone can up- or down-vote the simulation music. This is used as a way to tag patterns for further exploration. Parameter sets are ranked by their votes, so higher-voted sets are more likely to be picked as the basis for new simulations. When we pick one, we randomise one of its parameters to generate new audio. Simulations store their parents, so you can explore the hierarchy and see which changes cause different patterns. An obvious addition to this is to hook up the twitter retweets and favourites for the same purpose.
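The selection and mutation step amounts to a vote-weighted random choice followed by randomising a single parameter. A minimal sketch, assuming each stored simulation carries an id, its parameter dict, and a vote tally (the field names are made up for illustration):

```python
import random

def pick_parent(simulations):
    """Vote-weighted choice: higher-voted parameter sets are more
    likely to seed new simulations."""
    # Shift scores so down-voted entries still have a small chance.
    floor = min(s["votes"] for s in simulations)
    weights = [s["votes"] - floor + 1 for s in simulations]
    return random.choices(simulations, weights=weights, k=1)[0]

def mutate(parent, new_id):
    """Randomise one parameter of the parent to make a child,
    recording the parent id so the hierarchy can be explored later."""
    child_params = dict(parent["params"])
    key = random.choice(list(child_params))
    child_params[key] = random.random()
    return {"id": new_id, "params": child_params,
            "votes": 0, "parent": parent["id"]}
```

Because each child stores `parent`, walking the chain of parent ids recovers the whole family tree of a pattern.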


Yeastogram workshop

Last week we had our first biohacking workshop at foam kernow with the London Biohackspace. We had a great turnout and an interesting mix of people. Amber has written a more complete overview of the event.


For more information on yeastograms and how to make your own, see here. One of the things we helped with was the construction of a high-power UV LED array needed to expose the cultures to radiation. We used 19 LEDs from here, powered in 3 chains of 5 and one chain of 4, at 20V drawing 1.2A. After battling Ohm's law for a while we found this site to be immensely useful. We also spent a lot of time on the LED heat sinks and cooling with an old PC fan – but the only problem we had was with the limiting resistors, which overheated, resulting in a last minute shopping trip to get some 10W rated bigguns. After that everything ran warm rather than hot and could be left overnight. One very important thing to mention: get proper eye protection when working with UV light, not just dodgy sunglasses.
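The resistor sums behind this are a straightforward Ohm's law exercise. The sketch below assumes a forward voltage of about 3.4V per UV LED and an even split of the 1.2A across the four chains (0.3A each) – both assumptions, not measured figures from our build:

```python
def chain_resistor(supply_v, led_vf, n_leds, chain_current):
    """Limiting resistor value and heat dissipation for one LED chain.
    led_vf (3.4V here) is an assumed forward voltage, not a measurement."""
    drop = supply_v - n_leds * led_vf    # voltage the resistor must absorb
    resistance = drop / chain_current    # Ohm's law: R = V / I
    power = drop * chain_current         # heat dissipated: P = V * I
    return resistance, power

# 20V supply, 1.2A total split evenly across 4 chains -> 0.3A per chain
for n in (5, 4):
    r, p = chain_resistor(20.0, 3.4, n, 0.3)
    print(f"{n}-LED chain: {r:.1f} ohms, {p:.2f} W")
```

Under these assumptions the dissipation per resistor comes out well under 10W, which fits the experience above: a resistor rated far beyond what it actually dissipates runs warm rather than hot.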