Spork Factory: Amorphous Computing

Departing in a new direction after the evolved light follower robots: take 500 processor cores spread out in space. Give them a simple instruction set which includes an instruction to copy (DMA) 8 bytes of their code/data to nearby cores (with an error rate of 0.5%). Fill the cores with random junk and set them running. If we graph the bandwidth used (the amount of data transmitted per cycle by the whole system) we get a plot like this:

[plot: bandwidth used per cycle by the whole system over time]

This explosion in bandwidth use is due to the implicit emergence of programs which replicate themselves. There is no fitness function, it’s not a genetic algorithm and there is no guided convergence to a single solution – no ‘telling it what to do’. Programs may arise that cooperate with each other, or may exhibit parasitic behaviour as in alife experiments like Tierra, and this could be seen as a kind of self-modifying, emergent amorphous computing – and eventually, perhaps, a way of evolving programs in parallel on cheap attiny processor hardware.
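
For a rough picture of what this means in practice, here is a minimal sketch of the kind of system described above. To be clear, this is an illustration rather than the actual Spork Factory source – the core count, random fill, copy size and error rate come from the description, but the opcode values, the choice of ‘nearby’ cores and where a DMA lands in the neighbour’s memory are all guesses of mine:

import random

NUM_CORES = 500     # cores spread out in space
MEM_SIZE = 256      # each filled with random junk
DMA_LEN = 8         # bytes copied per dma
ERROR_RATE = 0.005  # 0.5% chance of corrupting each copied byte

NOP, PSHL, DMA = 0, 1, 2   # illustrative opcode values only

class Core:
    def __init__(self):
        self.mem = bytearray(random.randrange(256) for _ in range(MEM_SIZE))
        self.ip = 0
        self.stack = []

    def step(self, neighbours, stats):
        op = self.mem[self.ip]
        self.ip = (self.ip + 1) % MEM_SIZE
        if op == PSHL:                      # push the next byte as a literal
            self.stack.append(self.mem[self.ip])
            self.ip = (self.ip + 1) % MEM_SIZE
        elif op == DMA and self.stack:      # broadcast DMA_LEN bytes to a neighbour
            src = self.stack.pop()
            dst = random.choice(neighbours)
            for i in range(DMA_LEN):
                byte = self.mem[(src + i) % MEM_SIZE]
                if random.random() < ERROR_RATE:
                    byte = random.randrange(256)     # copy error
                # landing at the same offset in the neighbour is a guess
                dst.mem[(src + i) % MEM_SIZE] = byte
            stats["bandwidth"] += DMA_LEN
        # everything else is treated as a nop in this sketch

cores = [Core() for _ in range(NUM_CORES)]
for cycle in range(10000):
    stats = {"bandwidth": 0}
    for i, core in enumerate(cores):
        # 'nearby' here is just the two ring neighbours, for simplicity
        core.step([cores[i - 1], cores[(i + 1) % NUM_CORES]], stats)
    print(cycle, stats["bandwidth"])   # this is the number being plotted

Plotting that bandwidth column per cycle gives the kind of take-off shown above, once something self-copying appears.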

In order to replicate, the code needs to push a value onto the stack as the address to start the copy from, and then call the dma instruction to broadcast it. This is one of the 500 cores visualised using the Betablocker DS emulator display; the new “up” arrow icon is the dma instruction.
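
In the illustrative encoding from the sketch above, the smallest thing that can do this is just a literal push followed by the dma – three bytes in total:

PSHL, DMA = 1, 2   # same made-up opcode values as in the sketch above

# push address 0, then broadcast 8 bytes starting there, which
# includes these 3 bytes themselves
replicator = bytes([PSHL, 0, DMA])

Anything that stumbles into that pattern (or an equivalent of it) keeps rebroadcasting itself, copy errors and all.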

Thinking more physically, the arrangement of the processors in space is a key factor – here are some visualisations. The size of each core increases when it transmits, and its colour is set by a hash of the last data transmission – so we can see the rise of communication and the propagation of different strains of successful code.
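
The hashing is what makes different strains visually distinct – in spirit it is something like this (the real visualisation is written in fluxus; this is just the idea in Python):

import hashlib

def core_colour(last_dma_bytes):
    # stable colour per 'strain': hash the last transmitted bytes into an RGB triple
    h = hashlib.md5(bytes(last_dma_bytes)).digest()
    return (h[0] / 255, h[1] / 255, h[2] / 255)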

The source is here and includes the visualisation software which requires fluxus.

Spork Factory: evolving a light follower robot

Continuing with the structured procrastination R&D project on evolvable hardware, I’m proud to report a pretty decent light following robot – this is a video of the first real-world test, with a program grown from primordial soup chasing me around:

After creating a software model simulation of the robot in the last post, I added some new bytecode instructions for the virtual machine: LEYE and REYE push 1 onto the stack if we are detecting light from the left or right photoresistor, zero if it’s dark. LMOT and RMOT pop the top byte of the stack to turn the motors on and off. The strategy for the genetic algorithm’s fitness function is to run each 16-byte generated program on the robot for 1000 cycles, moving the robot to a new random location and facing direction 10 times without stopping the program. At the end of each run the position of the robot is compared to the light position, and the distances are averaged as the fitness. Note that we’re not assigning fitness based on how fast we get to the light.
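
In outline, the fitness evaluation looks something like the sketch below. The run counts and the ‘average distance at the end of each run’ scoring follow the description above; the robot physics, the eye model and the opcode values are crude stand-ins of my own, not the real simulation:

import random, math

CYCLES_PER_RUN = 1000
NUM_RUNS = 10
PROGRAM_SIZE = 16

# illustrative opcode values, not the real bytecode encoding
NOP, LEYE, REYE, LMOT, RMOT, JMPZ = 0, 1, 2, 3, 4, 5

class SimRobot:
    """A deliberately crude differential drive robot with one light at the origin."""
    def __init__(self):
        self.x = self.y = self.heading = 0.0
        self.lmot = self.rmot = 1            # both motors on at startup, as in the post

    def reposition(self):
        self.x, self.y = random.uniform(-10, 10), random.uniform(-10, 10)
        self.heading = random.uniform(0, 2 * math.pi)

    def eye(self, side):
        # 1 if the light is roughly on that side of the robot, 0 otherwise
        angle = math.atan2(-self.y, -self.x) - self.heading
        angle = math.atan2(math.sin(angle), math.cos(angle))   # wrap to [-pi, pi]
        on_side = angle > 0 if side == "left" else angle < 0
        return 1 if on_side and abs(angle) < math.pi / 2 else 0

    def update(self):
        self.heading += 0.1 * (self.rmot - self.lmot)
        speed = 0.05 * (self.lmot + self.rmot)
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)

    def distance_to_light(self):
        return math.hypot(self.x, self.y)

class VM:
    """Just enough of an interpreter to show where the sensors and motors plug in."""
    def __init__(self, program):
        # pad beyond the 16 byte program with zeroed memory
        self.mem = bytearray(program) + bytes(16)
        self.ip = 0
        self.stack = []

    def pop(self):
        return self.stack.pop() if self.stack else 0

    def step(self, robot):
        op = self.mem[self.ip]
        self.ip = (self.ip + 1) % len(self.mem)
        if op == LEYE:   self.stack.append(robot.eye("left"))
        elif op == REYE: self.stack.append(robot.eye("right"))
        elif op == LMOT: robot.lmot = self.pop()
        elif op == RMOT: robot.rmot = self.pop()
        elif op == JMPZ:                 # jump to the address in the next byte if top of stack is 0
            addr = self.mem[self.ip]
            self.ip = addr % len(self.mem) if self.pop() == 0 else (self.ip + 1) % len(self.mem)
        # anything else is a nop in this sketch

def evaluate(program):
    """Fitness of one 16-byte program: average final distance to the light (lower is fitter)."""
    robot, vm = SimRobot(), VM(program)
    distances = []
    for _ in range(NUM_RUNS):
        robot.reposition()                   # new random position and direction...
        for _ in range(CYCLES_PER_RUN):      # ...but the program itself is never restarted
            vm.step(robot)
            robot.update()
        # only where it ends up counts, not how fast it got there
        distances.append(robot.distance_to_light())
    return sum(distances) / len(distances)

# score a random 16 byte individual
print(evaluate(bytes(random.randrange(6) for _ in range(PROGRAM_SIZE))))

The genetic algorithm part then just breeds and mutates the 16-byte programs that score the lowest average distance.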

This is pretty simple stuff, but it’s still interesting to look at what happens over time in the genetic algorithm. Both motors are running at startup by default, so the first successful programs learn how to turn one motor off – otherwise the robot just shoots off and scores really low. So the first generations tend to just go round in circles. Then they start to learn how to plug the eyes in, one by one edging them closer to the goal – then it’s a case of improving the sample rate to improve accuracy, usually by using jmps and optimising the loops.

This is an example of a fairly simple and effective solution, the final generation shown in the animation above:

loop:
  leye 
  rmot 
  nop nop nop nop nop 
  reye 
  lmot 
  or 
  nop nop nop nop
  rmot 
  jmpz loop

Some explanation: the right and left eyes are plugged into the left and right motors, which is the essential bit making it work, and the ‘nop’s are all values that are not executable. The ‘rmot’ before the ‘jmpz’ makes the robot scan around in circles if there is no light detected (strangely, a case which doesn’t happen in the simulation). The argument to ‘jmpz’ is 0 (loop), which is actually the 17th byte – so it’s cheekily using memory which has been initialised to zero as part of its program.

This is a more complicated and stranger program which evolved after 70 generations with a high fitness; I haven’t worked out what it’s up to yet:

  pshl 171 
loop:
  lmot 
  leye 
  pip 111 
  pip 30 
  rmot 
  reye 
  pshl 214 
  nop 
  lmot 
  jmp loop

Touchscreen programming

As more and more people use touchscreens, it still irks me that we lack good ways of programming “on” devices reliant on them (i.e. something native-feeling, rather than modified text editors). As a result they seem designed entirely around consumption of software (see also “The coming war on general-purpose computing”).

So let’s make them programmable. Recent steps in this direction are based on Jellyfish – an idea to create a kind of locative livecoding virus game (more on that as it unfolds) – starting with fluxus on android (now called nomadic) and a good dose of Betablocker DS, mixed with some procedural 3D rendering inspired by the Playstation2’s mad hardware, and icons previously seen on the Supercollider 2012 flier!

This is a screenshot of its current early state, with the Linux/Android versions side by side (spot the inconsistency in wireframe colour due to differences in colour material handling in OpenGL ES). The main additions to the previous android fluxus are support for texturing and a text rendering primitive. I’m glad to say that pinch-to-zoom and panning are already working on the code interface, but it’s not making too much sense to look at yet.