Viruscraft: building a ‘reasonably accurate’ genetic game world simulation

The concept for the Viruscraft game is to have a realtime genetic simulation of host evolution that adapts to the properties of a virus you are building (either on screen or via a tangible interface as part of an exhibit). This model needs to be realistic, but only up to a point – it can be more of a caricature of biology than a research model would need to be, as our intentions are educational rather than biological research.

Using our previous species prototype as a starting point, we have a network of connected locations that can be inhabited by organisms. These organisms can jump to neighbouring locations and be infected by others in the same place at the same time. Now we need to figure out how different species of these organisms could emerge over time and evolve immunity to a virus – so we can build up a family tree (phylogeny) similar to the ones we created for the egglab game, but one that responds in realtime to the viruses you create as you play. The evolution itself also has to happen fast enough that you can see the effects of your actions ‘quickly enough’, but we’ll worry more about that later.

For a job like this we need to step back from fancy visualisations and graphics and get the fundamental aspects working first, using standard tools like graphviz to understand what is going on and save time.

The first thing to do is to add a fixed length genetic string to each individual organism. This is currently 40 elements long and is made from the biologically based A, T, C and G nucleotide symbols. We chose these so we can use biological analysis tools to test the system as we go along, just like any other genetic process (more on that below). The organisms can also reproduce by spawning copies of themselves, introducing random errors into the genetic code of their offspring, which represents mutation.
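As a rough illustration of how this works (a minimal Python sketch rather than the actual game code – the mutation rate here is an arbitrary assumption):

import random

NUCLEOTIDES = "ATCG"
GENOME_LENGTH = 40
MUTATION_RATE = 0.01  # assumed per-nucleotide copying error probability

def random_genome():
    # a fresh 40 element genetic string for the starting population
    return [random.choice(NUCLEOTIDES) for _ in range(GENOME_LENGTH)]

def spawn(parent_genome):
    # copy the parent's genetic code, introducing occasional random errors
    return [random.choice(NUCLEOTIDES) if random.random() < MUTATION_RATE else n
            for n in parent_genome]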

Previously we were using a ‘SIRS’ model for virus infection (susceptible -> infected -> recovered -> susceptible), based on 4 global parameters that determined the probability of jumping from one state to the next. Using the genetics, the probability of infection is now different for every individual based on:

1. Is a virus infected individual in the current location?

2. If so, use the organism’s genetic code to determine the probability of catching it. Currently we use the ratio of A’s to T’s in the genetic string as a totally arbitrary place-holder ‘fitness function’ – the lower the number the better. AAAAAAT is bad (fitness: 6) while TTTTTTA is good (fitness: 0.1666) – so we would expect the A’s to disappear over time and the T’s to increase in the genetic strings. This number also determines the probability of dying from the disease and (inversely) the probability of gaining immunity to it (see the sketch after this list).

3. A very small ‘background infection’ probability which overrides this, so the virus is always present at a low level and can’t die out.
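To make the place-holder fitness function in point 2 concrete, here is a minimal Python sketch – the way fitness is scaled into an actual infection probability, and the background rate, are arbitrary numbers for illustration rather than the game’s real ones:

def fitness(genome):
    # arbitrary place-holder: the ratio of A's to T's, lower is better
    # e.g. "AAAAAAT" -> 6.0, "TTTTTTA" -> 0.1666...
    a = list(genome).count("A")
    t = list(genome).count("T")
    return a / max(t, 1)  # avoid dividing by zero when there are no T's

def infection_probability(genome, scale=0.1, background=0.001):
    # worse (higher) fitness means more chance of catching the virus,
    # with a small background infection that never disappears
    return max(background, min(1.0, scale * fitness(genome)))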

The next thing we need is a life cycle for the organism – this needs to include the possibility of death, and the disease model is now a ‘SIR’ one, as once recovered, individuals cannot go back to being susceptible again.

[image: organism life cycle state diagram]

All the other non virus related probabilities in the simulation (spawning offspring, moving location, natural death) are currently globally set – to make sure we are seeing evolution based only on disease related behaviour for now.
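Putting the life cycle together, a single step for one organism looks roughly like this (continuing the hedged sketch above – the death and recovery scaling factors are made-up numbers, and the organism record with its state, alive and genome fields is assumed for illustration):

import random

SUSCEPTIBLE, INFECTED, RECOVERED = range(3)

def death_probability(genome, scale=0.05):
    # worse fitness also means the disease is more lethal
    return min(1.0, scale * fitness(genome))

def recovery_probability(genome, scale=0.05):
    # and (inversely) less chance of gaining immunity
    return min(1.0, scale / max(fitness(genome), 0.01))

def update_disease(organism, infected_present):
    # one step of the per-individual SIR model
    if organism.state == SUSCEPTIBLE:
        if infected_present and random.random() < infection_probability(organism.genome):
            organism.state = INFECTED
    elif organism.state == INFECTED:
        if random.random() < death_probability(organism.genome):
            organism.alive = False
        elif random.random() < recovery_probability(organism.genome):
            organism.state = RECOVERED  # once recovered, never susceptible again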

This model as it is could form the foundation of a world level visualisation – seeing organisms running around from place to place catching and spreading your virus and evolving resistance to it. However this is only half the story we want to tell in the game, as it doesn’t include our time based ‘phylogenetic’ family tree view. For this, we still need to figure out how to group individuals into species so we can fully visualise the effects of your virus on the evolution of all the populations as a whole.

First we need to decide exactly what a species is – which turns out to be quite an arbitrary concept. The rather coarse approach that seems to work here is to say that two organisms represent two distinct species if more than a quarter of their genes differ.

We can now check each organism as it is born, comparing its genome against a ‘blueprint’ genome that represents the species its parent belongs to. If it’s similar enough we add it to its parent’s species; if it’s too different we create a new species for it, with a copy of its genome as the ‘blueprint’ to compare all its descendants against. This should mean we can build up a set of related species over time.
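As a sketch of the species test (again Python pseudocode rather than the real thing – the Species record holds just what is needed for the comparison and the tree):

SPECIES_THRESHOLD = 0.25  # more than a quarter of genes different = new species

class Species:
    def __init__(self, blueprint, parent=None):
        self.blueprint = blueprint  # genome of the species' first individual
        self.parent = parent        # branch point for the phylogenetic tree
        self.members = [blueprint]

def genetic_difference(a, b):
    # proportion of positions where two fixed length genomes differ
    return sum(1 for x, y in zip(a, b) if x != y) / float(len(a))

def assign_species(child_genome, parent_species, all_species):
    # compare the newborn's genome against its parent species' blueprint
    if genetic_difference(child_genome, parent_species.blueprint) <= SPECIES_THRESHOLD:
        parent_species.members.append(child_genome)
        return parent_species
    # too different: found a new species, with this genome as its blueprint
    new_species = Species(list(child_genome), parent=parent_species)
    all_species.append(new_species)
    return new_species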

If we run the simulation for 5000 time steps we can generate a phylogenetic family tree at the end, using the branch points between species to connect them. We hide species with only 1 member to keep it simple, and the population is started off with 12 unique individuals, only one of which (species 10) is successful – all the later species are descendants of that one:

[image: phylogenetic tree of the species, rendered with graphviz]

The numbers here are the ID, fitness and population size for each species. The colours are an indication of population size. The fitness seems to increase towards the right (as the number drops) – which is what we’d expect if new species are emerging that cope better with the virus. You can imagine that changing the virus will cause all this to shift dramatically. The “game mechanic” for Viruscraft will be all about tinkering with the virus in ways that change the underlying fitness function of the hosts, and thus the evolution of the populations.

As we used standard biological symbols for our genetic code, we can also convert each species into an entry in a FASTA format text file. These are used by researchers to determine population structure from limited information contained in genetic samples:


> 1 0.75 6
TGCTCTTGCGTACTAGACTGTTGACATCTCCACCGGATAA
> 3 0.46153846153846156 5
TGGTTTTCTGCTGTGGGGATAACCTGCCACTCAGTGGTGA
> 5 0.6153846153846154 171
CACTATCGCTCATTGCACTGTCGTGGTTTTAGTAACGAGC
...

In the FASTA file above, the numbers after the ‘>’ are just used as identifiers and are the same as in the tree above. The second line of each entry is the blueprint genome for the species (its first individual). We can now visualise these with one of the many online tools for biological analysis:

[image: phylogenetic tree reconstructed by an online analysis tool]

This analysis is attempting to rebuild the first tree in a way, but it doesn’t have as much information to go on, as it’s only looking at the similarity of the genetic code. Also, 40 bases is not really enough to do this accurately with such a high mutation rate – but I think it’s good practice to keep information in such a way that it can be analysed like this.
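Writing the FASTA file above is only a few lines – a sketch reusing the Species records and fitness function from earlier, and assuming each species carries a numeric id:

def write_fasta(species_list, path="species.fasta"):
    # one entry per species: id, fitness and population size in the header,
    # then the blueprint genome (its first individual) on the next line
    with open(path, "w") as f:
        for s in species_list:
            f.write("> %s %s %s\n" % (s.id, fitness(s.blueprint), len(s.members)))
            f.write("".join(s.blueprint) + "\n")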

Red King progress, and a sonification voting system

We have now launched the Red King simulation website. The fundamental idea of the project is to use music and online participation to help understand a complex natural process. Dealing with a mathematical model is more challenging than a lot of our citizen science work, where the connection to organisms and their environments is clearer. The basic problem here is how to explore a vast parameter space in order to find patterns in co-evolution.

After some initial experiments we developed a simple prototype native application (OSX/Ubuntu builds) in order to check we understand the model properly by running and tweaking it.

[image: the Red King prototype application]

The next step was to convert this into a library we could bind to Python. With this done we can run the model on a server and have it autonomously update its own website via Django. This way we can run the simulation continuously, storing randomly chosen parameters to build up a database and display the results. I also set up a simple filter that runs the simulation for 100 timesteps and discards parameter sets that don’t look interesting (the ones that went extinct or didn’t result in multiple host or virus strains).
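The filter itself is nothing sophisticated – something along these lines (a hedged Python sketch; run_simulation, PARAMETER_RANGES and the result fields are hypothetical names, not the real model bindings):

import random

def interesting(params, steps=100):
    # run a short simulation and reject parameter sets that look dull
    result = run_simulation(params, steps)  # hypothetical call into the model library
    if result.hosts_extinct or result.viruses_extinct:
        return False  # everything died out
    return result.host_strains > 1 or result.virus_strains > 1

def sample_parameters():
    # keep drawing random parameter sets until one passes the filter
    while True:
        params = {name: random.uniform(lo, hi)
                  for name, (lo, hi) in PARAMETER_RANGES.items()}
        if interesting(params):
            return params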

There is also now a twitter bot that posts new simulation/sonifications as they appear. One nice thing I’ve found with this is that I can use the bot timeline to make notes on changes by tweeting them. It also allows interested people an easy way to approach the project, and people are already starting discussions with the researchers on twitter.


Up to now, this has simply been a presentation of a simulation – how can we involve people so they can help? This is a small project, so we have to be realistic about what is possible, but eventually we need a simple way to test how the perception of a sonification compares with a visual display. Amber’s been doing research into the sonification side of the project here. More on that soon.

For now I’ve added a voting system, where anyone can up or down-vote the simulation music. This is used as a way to tag patterns for further exploration. Parameter sets are ranked using the votes – the higher the votes, the higher the likelihood of a set being picked as the basis for new simulations. When we pick one, we randomise one of its parameters to generate new audio. Simulations store their parents, so you can explore the hierarchy and see which changes cause different patterns. An obvious addition would be to hook up Twitter retweets and favourites for the same purpose.
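The ranking and breeding step is roughly this (a sketch, reusing the hypothetical PARAMETER_RANGES from above – the vote weighting is a simple proportional pick, not a tuned formula):

import random

def pick_parent(simulations):
    # higher voted parameter sets are more likely to be chosen as a basis
    weights = [max(1, sim.votes) for sim in simulations]  # keep everything pickable
    return random.choices(simulations, weights=weights, k=1)[0]

def breed(simulations):
    parent = pick_parent(simulations)
    params = dict(parent.params)
    name = random.choice(list(params))  # randomise one of its parameters
    params[name] = random.uniform(*PARAMETER_RANGES[name])
    return params, parent  # store the parent so the hierarchy can be explored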


A cryptoweaving experiment

Archaeologists can read a woven artifact created thousands of years ago, and from its structure determine the actions performed in the right order by the weaver who created it. They can then recreate the weaving, following in their ancestor’s ‘footsteps’ exactly.

This is possible because a woven artifact encodes time digitally, weft by weft. In most other forms of human endeavour, reverse engineering is still possible (e.g. in a car or a cake) but the instructions are not encoded in the object’s fundamental structure – they need to be inferred by experiment or indirect means. Similarly, a text does not represent its writing process in a time encoded manner, only the end result. Interestingly, one possible self describing artifact could be a musical performance.

Looked at this way, any woven pattern can be seen as a digital record of movement performed by the weaver. We can create the pattern from a notation that describes this series of actions (a handweaver following a lift plan), or move in the other direction like the archaeologist, recovering the notation from an existing weave.

[image: A weaving and its executable code equivalent.]

One of the potentials of weaving I’m most interested in is being able to demonstrate fundamentals of software in threads – partly to make the physical nature of computation self evident, but also as a way of designing new ways of learning and understanding what computers are.

If we take the code required to make the pattern in the weaving above:

(twist 3 4 5 14 15 16)
(weave-forward 3)
(twist 4 15)
(weave-forward 1)
(twist 4 8 11 15)

(repeat 2
 (weave-back 4)
 (twist 8 11)
 (weave-forward 2)
 (twist 9 10)
 (weave-forward 2)
 (twist 9 10)
 (weave-back 2)
 (twist 9 10)
 (weave-back 2)
 (twist 8 11)
 (weave-forward 4))

We can “compile” it into a binary form which describes each instruction – the exact process for this is irrelevant, but here it is anyway – an 8 bit encoding, packing instructions and data together:

8 bit instruction encoding:

bits 0 1        action
bit  2          direction
bits 3 4 5 6 7  count / tablet ID (5 bit number)

Action types
weave:    01 (1)
rotate:   10 (2)
twist:    11 (3)

Direction
forward: 0
backward: 1
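Packing a single instruction then looks something like this (a Python sketch of the idea, assuming bit 0 in the table above is the most significant bit):

WEAVE, ROTATE, TWIST = 0b01, 0b10, 0b11
FORWARD, BACKWARD = 0, 1

def pack(action, direction, value):
    # two bits of action, one bit of direction, five bits of count/tablet id
    assert 0 <= value < 32
    return (action << 6) | (direction << 5) | (value & 0b11111)

print(format(pack(WEAVE, FORWARD, 3), "08b"))  # (weave-forward 3) -> 01000011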

If we compile the code notation above with this binary system, we can then read the binary as a series of tablet weaving card flip rotations (I’m using 20 tablets, so we can fit in two instructions per weft):

0 1 6 7 10 11 15
0 1 5 7 10 11 14 15 16
0 1 4 5 6 7 10 11 13
1 6 7 10 11 15
0 1 5 7 11 17
0 1 5 10 11 14
0 1 4 6 7 10 11 14 15 16 17
0 1 2 3 4 5 6 7 11 12 15
0 1 4 10 11 14 16
1 6 10 11 14 17
0 1 4 6 11 16
0 1 4 7 10 11 14 16
1 2 6 10 11 14 17
0 1 4 6 11 12 16
0 1 4 7 10 11 14 16
1 5

If we actually try weaving this (by advancing two turns forward/backward at a time) we get this mess:

[image: close up of the resulting tablet woven band]

The point is that (assuming we’ve made no mistakes) this weave represents *exactly* the same information as the pattern does – you could extract the program from the messy encoded weave, follow it and recreate the original pattern exactly.

The messy pattern is both an executable and a compressed form of the weave – taking up less space than the original pattern, but looking a lot worse. Possibly this is a clue too, as it contains a higher density of information – higher entropy, and therefore closer to randomness than the pattern.

Pixel Quipu

The graphviz visualisations we’ve been using for quipu have quite a few limitations: they tend to make very large images, and there is limited control over how they are drawn. It would be better to have more of an overview of the data, with the knots rendered in the right positions and the pendants drawn at the right length.

Meet the pixelquipu!

[image: pixelquipu rendering of quipu UR018]

These are drawn using a python script which reads the Harvard Quipu Database and renders the quipu structure using the correct colours. The knots are shown as a single pixel attached to the pendant, colour coded: red for a single knot, green for a long knot and blue for a figure-of-eight knot (yellow for unknown or missing). The value of the knot sets the brightness of the pixel. The colour variations for the pendants are working, but there is no distinction yet between twisted and alternating colours, and twist direction is not visualised.
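The knot colour coding boils down to something like this (a sketch of the mapping – the brightness scaling and the maximum knot value of 9 are assumptions for illustration):

# knot type -> pixel colour, as described above
KNOT_COLOURS = {
    "single": (255, 0, 0),            # red
    "long": (0, 255, 0),              # green
    "figure-of-eight": (0, 0, 255),   # blue
    "unknown": (255, 255, 0),         # yellow for unknown or missing knots
}

def knot_pixel(knot_type, value, max_value=9):
    # the value of the knot sets the brightness of the pixel
    r, g, b = KNOT_COLOURS.get(knot_type, KNOT_COLOURS["unknown"])
    brightness = min(1.0, value / float(max_value))
    return (int(r * brightness), int(g * brightness), int(b * brightness))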

[image: pixelquipu rendering of quipu HP017]

Another advantage of this form of rendering is that we can draw the data entropy within the quipu to provide a different view of how the data is structured, as an attempt to uncover hidden complexity. This is done hierarchically, so a pendant’s entropy is that of its own data plus all its sub-pendants, which seemed most appropriate given the non-linear form the data takes.
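The entropy calculation itself is the standard Shannon one, applied to each pendant’s knot values plus those of everything hanging off it – a sketch, where the pendant attribute names are made up for illustration:

import math
from collections import Counter

def shannon_entropy(values):
    # Shannon entropy (in bits) of a list of symbols
    if not values:
        return 0.0
    counts = Counter(values)
    total = float(len(values))
    return -sum((c / total) * math.log(c / total, 2) for c in counts.values())

def collect_values(pendant):
    # gather knot values from a pendant and, recursively, all its sub-pendants
    values = list(pendant.knot_values)
    for sub in pendant.sub_pendants:
        values.extend(collect_values(sub))
    return values

def pendant_entropy(pendant):
    # a pendant's entropy is that of its own data plus all its sub-pendants
    return shannon_entropy(collect_values(pendant))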

[image: pixelquipu rendering of quipu UR037]

[image: entropy rendering of quipu UR037]

We can now look at some quipus in more detail – what was the purpose of the red and grey striped pendants in the quipu below? They contain no knots – are they markers of some kind? This also seems to be a quipu where the knots do not follow the decimal coding pattern that we understand; they are mostly long knots of various values.

[image: pixelquipu rendering of quipu UR051]

There also seems to be data stored in different kinds of structure within the same quipu – the collection of sub-pendants on the left presumably groups data in a more hierarchical manner than the right side, which seems much more linear – and a colour change also emphasises this.

[image: pixelquipu rendering of quipu UR015]

Read left to right, this long quipu below seems very much like you’d expect binary data to look – some kind of header information or preamble, followed by a repeating structure with local variation. The twelve groups of eight grey pendants seem redundant – were these meant to be filled in later? Did they represent something important without containing any knots? We will probably never know.

[image: pixelquipu rendering of quipu UR1176]

The original thinking behind the pixelquipu was to attempt to fit all the quipus on a single page for viewing, as it represents them with the absolute minimum of pixels required. Here are both pendant colour and entropy shown for all 247 quipus we have data for:

[image: pendant colour rendering of all 247 quipus]

[image: entropy rendering of all 247 quipus]

The UAV toolkit & appropriate technology

The UAV toolkit’s second project phase is now complete. The first development sprint at the start of the year was a bit of research into what we could use an average phone’s sensors for, resulting in a proof of concept remote sensing Android app that let you visually program different scripts, which we then tested on some drones, a radio controlled plane and a kite.


This time we had a specific focus on environmental agencies. Working with Katie Threadgill at the Westcountry Rivers Trust has meant we’ve had to think about how this could be used by real people in an actual setting (farm advisors working with local farmers). Making something cheap, open source and easy to use, yet open ended, has been the focus – and we are now looking at providing WRT with a complete toolkit comprising a drone (for good weather), a kite (for bad weather/no flight licences required) and an Android phone, so they don’t need to worry about destroying their own if something goes wrong. Katie has produced this excellent guide on how the app works.

The idea of appropriate technology has become an important philosophy for projects we are developing at Foam Kernow, in conjunction with unlikely connections in livecoding and our wider arts practice. For example, in the Sonic Bike project we restricted the technology from the start so that no ‘cloud’ network connections are required and all the data and hardware has to fit on the bike – with no data “leaking” out.


With the UAV toolkit, the open endedness of providing a visual programming system that works on a touchscreen results in an application flexible enough to be used in ways and places we can’t predict – for example in crisis situations, where power, networking or hardware may not be available to set up remote sensing devices when you need them most. We are working towards a self contained system, and what I’ve found interesting is how many interface and programming ‘guidelines’ I have to bend to make this possible – open endedness is very much against the grain of contemporary software design philosophy.

The “app ecosystem” is ultimately concerned with elevator pitches – doing one thing, boiled down to the fewest actions needed to achieve it. This is not a problem in itself, but the assumption that this is the only philosophy worth considering is wrong. One recent experience that comes to mind is having to make and upload banner images of an exact size to the Play Store before it would allow me to release an important fix for Mongoose 2000, an app only ever intended to have 5 or 6 users.


For the UAV toolkit, our future plans include stitching together photos captured on the phone to produce a single large map, without the need to use any other software on a laptop. There are also interesting possibilities regarding distributed networking with bluetooth and similar radio systems – for example, sending code between phones, as currently there is no way to distribute scripts amongst users. This could also be a way of creating distributed processing – controlling one phone in a remote location from another via code sent over ad hoc wifi or SMS, for example.

AI as puppetry, and rediscovering a long forgotten game.

AI in games is a hot topic at the moment, but most examples of this are attempts to create human-like behaviour in characters – a kind of advanced puppetry. These characters are also generally designed beforehand rather than reacting and learning from player behaviour, let alone being allowed to adapt in an open ended manner.

[image: Rocketing around the gravitational wells.]

Geo was a free software game I wrote around 10 years ago, which I’ve recently rediscovered. I made it a couple of years after working for William Latham’s Computer Artworks, and it was obviously influenced by that experience. At the time it was a little demanding for graphics hardware, but in the intervening years processing power has caught up with it.

This is a game set in a large section of space inhabited by lifeforms composed of triangles, squares and pentagons. Each lifeform exerts a gravitational pull and has the ability to reproduce. Its structure is defined by a simple genetic code which is copied to its descendants with small errors, giving rise to evolution. Your role is to collect keys which orbit around gravitational wells in order to progress to the next level, which is repopulated by copies of the most successful individuals from the previous level.

[image: A simple first generation lifeform.]

Each game starts with a random population, so the first couple of levels are generally quite simple, mostly populated by dormant or self destructive species – but after 4 or 5 generations the lifeforms start to reproduce, and by level 10 a phenotype (or species) will generally have emerged to become a highly invasive conqueror of space. It becomes a race against the clock to find all the keys before the gravitational effects are too much for your ship’s engines to escape, or for your weapons to ‘prune’ the structures’ growth.

I’ve used similar evolutionary strategies in much more recent games, but they’ve required much more effort to get the evolution working (49,000 players have now contributed to egglab’s camouflage evolution for example).

[image: A well defended 'globular' colony – a common phenotype to evolve.]

What I like about this form of more humble AI (or artificial life) is that instead of a program trying to imitate another lifeform, it simply represents itself – challenging you to exist in its consistent but utterly alien world. I’ve always wondered why the dominant post-human theme of sentient AI is a supercomputer deliberately designed, usually by a millionaire or huge company. It seems far more likely to me that some form of life will arise – perhaps it already exists – in the wild variety of online spambots and malware mostly talking to themselves, and will go unnoticed by us, probably forever. We had a little indication of this when the facebook bots in the Naked on Pluto game started having autonomous conversations with other online spambots on their blog.

[image: A densely packed 'crystalline' colony structure.]

Less speculatively, what I’ve enjoyed most about playing this simple game is exploring and attempting to shape the possibilities of the artificial life, while observing and categorising the common solutions that emerge during separate games – cases of parallel evolution. I’ve tweaked the between-levels fitness function a little, but most of the evolution tends to occur ‘darwinistically’ while you are playing – simply, the lifeforms that reproduce most effectively survive.


[image: An efficient and highly structured 'solar array' phenotype which I’ve seen emerge twice with different genotypes.]

You can get the updated source here; it only requires GLUT and ALUT (a cross platform audio API). At one time it compiled on Windows, and it should build on OSX quite easily – I may distribute binaries at some point if I get time.

[image: A 'block grid' phenotype which is also common.]

Easter Python/Minecraft programming day at dbsCode

Thursday saw our second dbsCode Easter programming taster, and like last year we focused on Minecraft programming with our procedural architecture API.

The main change this time was that for the 20 participants (aged 11-16) we doubled our teachers to four (Glen Pike, Francesca Sargent, Matthew Dodkins and me), plus a couple of interested parents helped us out too. This meant the day was much more relaxed, and we noticed they were engaged with the programming for a much higher proportion of the time. Another factor was that we went straight into coding, as none of them needed an introduction to Minecraft this year. I think one of the biggest strengths of this kind of learning is that they can easily switch between playful interaction (jumping into each other’s worlds, building stuff the normal way) and programming. This keeps the pressure low, which I think makes it more of a self driven activity, as well as making a long (4 hour) workshop possible.

Here are some screenshots of their creations – this was a melon palace created in a world that had somehow become sliced apart:

[image: the melon palace]

Inside the melon palace, the waterfall pulsed with a while loop and sleeps that altered the water source blocks.

[image: inside the melon palace, with the pulsing waterfall]
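The waterfall trick is roughly this (a sketch using the standard mcpi Python API rather than our procedural architecture one, with made-up coordinates for the water source block):

import time
from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create()
x, y, z = 10, 20, 10  # made-up position of the water source block

while True:
    mc.setBlock(x, y, z, block.WATER)  # switch the source on...
    time.sleep(2)
    mc.setBlock(x, y, z, block.AIR)    # ...and off again, so the fall pulses
    time.sleep(2)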

As last year there was a lot of mixing of activities, using code to create big shapes and then editing them manually for the finer details:

[image: large shapes created with code, edited by hand]

[image: another example of mixed coding and hand building]

Here, a huge block of water inexplicably cuts through the scenery:

[image: a huge block of water cutting through the scenery]

And two houses, that became merged together and then filled with bookshelves and other homely items:

[image: the two merged houses]

Neural Network livecoding and retrofitting ZX Spectrum hardware

An experimental, and quite angry, neural network livecoding synth (with an audio ‘weave’ visualisation) for the ZX Spectrum: source code and TZX file (for emulators). It’s a bit hard to make out in the video, but you can move around the 48 neurons and modify their synapses and trigger levels. There are two clock inputs, and the audio output is the purple neuron at the bottom right. It allows recurrent loops as a form of memory, and some quite strange things are possible (see the sketch after the key list below). The keys are:

  • w,d: move diagonally north west <-> south east
  • s,e: move diagonally south west <-> north east
  • t,y,g,h: toggle incoming synapse connections for the current neuron
  • space: change the ‘threshold’ of the current neuron (bit shifts left)
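The behaviour of each neuron is roughly this (a Python re-sketch of the idea rather than the Z80 code – the threshold values, clock rates and update order are all simplifications):

class Neuron:
    def __init__(self, threshold=4):
        self.inputs = []        # incoming synapse connections (other neurons)
        self.threshold = threshold
        self.accumulator = 0
        self.firing = False

def step(neurons, clocks, tick):
    # the two clock inputs fire at fixed rates; every other neuron integrates
    # its incoming spikes and fires when it passes its threshold - recurrent
    # loops between neurons then act as a simple form of memory
    for i, clock in enumerate(clocks):
        clock.firing = (tick % (i + 2) == 0)
    incoming = [sum(1 for src in n.inputs if src.firing) for n in neurons]
    for n, spikes in zip(neurons, incoming):
        n.accumulator += spikes
        n.firing = n.accumulator >= n.threshold
        if n.firing:
            n.accumulator = 0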

This audio should load up on a real ZX Spectrum:

One of the nice things about tech like this is that it’s easily hackable – this is a modification to the video output better explained here, but you can get a standard analogue video signal by connecting the internal feed directly to the plug and detaching the TV signal de-modulator with a tiny bit of soldering. Look at all those discrete components!

[image: inside the ZX Spectrum after the video modification]

Raspberry Pi: Built for graphics livecoding

I’m working on a top secret project for Sam Aaron of Meta-eX fame involving the Raspberry Pi, and at the same time thinking of my upcoming CodeClub lessons this term – we have a bunch of new Raspberry Pis to use and the kids are at the point where they want to move on from Scratch.

This is a screenshot of the procedural landscape demo that previously ran on Android/OUYA, now running on the Raspberry Pi, with mangled texture colours and a cube added via a new livecoding REPL:

[image: the procedural landscape demo running on the Raspberry Pi]

Based on my previous experiments, this program uses the Raspberry Pi’s GPU (the VideoCore IV bit of the BCM2835). It’s fast, allows compositing on top of whatever else you are running at the time, and you can run it without X windows for more CPU and memory – sounds like a great graphics livecoding GPU to me!

Here’s a close up of the nice dithering on the texture – not sure yet why the colours are so different from the OUYA version, perhaps a dodgy blend mode or a PNG format reading difference:

[image: close up of the dithering on the texture]

The code is here (bit of a mess, I’m in the process of cleaning it all up). You can build in the jni folder by calling “scons TARGET=RPI”. This is another attempt – looks like my objects are inside out:

[image: another attempt, with inside-out objects]