Spork Factory: Amorphous Computing

Departing in a new direction after the evolved light follower robots, take 500 processor cores spread out in space. Give them a simple instruction set which includes an instruction to copy (DMA) 8 bytes of their code/data to nearby cores (with an error rate of 0.5%). Fill the cores with random junk and set them running. If we graph the bandwidth used (the amount of data transmitted per cycle by the whole system) we get a plot like this:


This explosion in bandwidth use is due to the implicit emergence of programs which replicate themselves. There is no fitness function, it’s not a genetic algorithm and there is no guided convergence to a single solution – no ‘telling it what to do’. Programs may arise that cooperate with each other or exhibit parasitic behaviour, as in alife experiments like Tierra, and this could be seen as a kind of self-modifying, emergent amorphous computing – and eventually, perhaps, a way of evolving programs in parallel on cheap ATtiny processor hardware.
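The mechanism can be sketched in miniature. This is a hedged model, not the real Betablocker VM: the opcode values, memory size, core count and ring topology here are illustrative assumptions; only the push-address-then-dma mechanism and the 0.5% copy error match the description above.

```python
import random

# Illustrative constants – the real memory size, layout and opcode
# numbering differ; only the mechanism is the point here.
MEM_SIZE = 256
OP_PUSH, OP_DMA = 1, 2   # hypothetical opcode values
ERROR_RATE = 0.005       # 0.5% chance of corrupting each copied byte
BLOCK = 8                # bytes copied per dma broadcast

def make_core():
    """A core starts as random junk with an empty stack."""
    return {"mem": [random.randrange(256) for _ in range(MEM_SIZE)],
            "stack": [], "pc": 0}

def step(core, neighbours):
    """Run one instruction; return the number of bytes transmitted."""
    op = core["mem"][core["pc"]]
    core["pc"] = (core["pc"] + 1) % MEM_SIZE
    if op == OP_PUSH:
        core["stack"].append(core["mem"][core["pc"]])
        core["pc"] = (core["pc"] + 1) % MEM_SIZE
    elif op == OP_DMA and core["stack"]:   # an empty stack means no copy
        addr = core["stack"].pop() % MEM_SIZE
        for n in neighbours:
            for i in range(BLOCK):
                byte = core["mem"][(addr + i) % MEM_SIZE]
                if random.random() < ERROR_RATE:
                    byte = random.randrange(256)  # copy error: mutation
                n["mem"][(addr + i) % MEM_SIZE] = byte
        return BLOCK * len(neighbours)
    return 0

# Bandwidth per cycle over a ring of cores (the real layout is spatial/2D):
cores = [make_core() for _ in range(100)]
for cycle in range(1000):
    bandwidth = sum(step(c, [cores[i - 1], cores[(i + 1) % len(cores)]])
                    for i, c in enumerate(cores))
    # plotting `bandwidth` against `cycle` gives this kind of graph
```

The two-instruction replicator falls out directly: a `push 0` followed by a `dma` broadcasts the bytes containing the replicator itself, errors included.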

In order to replicate, the code needs to push a value onto the stack as the address to start the copy from, and then call the dma instruction to broadcast it. This is one of the 500 cores visualised using the Betablocker DS emulator display; the new “up” arrow icon is the dma instruction.

Thinking more physically, the arrangement of the processors in space is a key factor – here are some visualisations. The size of each core increases when it transmits, and its colour is set by a hash of the last data transmission – so we can see the rise of communication and the propagation of different strains of successful code.

The source is here and includes the visualisation software, which requires fluxus.

11 thoughts on “Spork Factory: Amorphous Computing”

  1. > It is becoming clear that emergence and evolution are natural features of dynamic systems.

    On the contrary, it is clear that when a system is designed with an instruction set which includes the ability to copy data to peers, and is then given the opportunity to execute garbage, over time the system will in fact copy data to peers a certain percentage of the time.

    I’m most interested in knowing the distribution of copy instructions at the beginning of the experiment vs the distribution after it reaches steady state (if it ends up doing that at all).

    Beyond that, none of the data presented provide any clear evidence of any changes in the system beyond random leveling. Was there any evidence of any emergent behavior?

    (cool experiment, fwiw)

  2. Hi David, the probability of a dma instruction at the beginning is 1 in 256, and it won’t work alone without an address on the stack to read the data from – the stacks start empty. As the programs run we see the instructions necessary to trigger copies flood the heap memory, at some points leaving nothing else, with a corresponding rise in the amount of copying going on (as in the graph). What I haven’t understood yet is why the density then falls lower and fluctuates radically after reaching some critical saturation point.

  3. Of course, it gets more interesting if there is more to copying than simply broadcasting to neighbours, but this is just proof of concept for further explorations…

  4. Are you sure that the stack is truly empty? Does a value truly have to get pushed into the stack in order for a value to be read out of it? How does it treat a null stack? Will it read location zero?

    Once the copy-to-peer rate hits its peak, what is the incidence of the copy-to-peer instruction? Is it still 1/256? (I expect it should be a bit higher) What is the incidence of the push-garbage-to-the-stack instruction initially and after the first peak?

  5. How many times did you run it? Does it always get very busy and then fall off? If you change the initial garbage, does it do the same thing?

    Since the instruction is to send to nearby cores, it seems like it might make some sense to see a sort of saturation and then after a while there’s a shortage of handy neighbors with space left in memory, so it falls off, but not really in a tidy, linear kind of way.

  6. A null stack results in no copy happening. In terms of the change in the make-up of the instructions in memory: if you look at the movie around 0:36, the memory consists entirely of pushes (icon with a vertical bar on the left), values to push (yellow numbers), dma instructions (arrows) and org instructions – which set address 0 for the running thread, so that many parts of the memory get copied with the same address pushed on the stack.

    I ran the thing quite a few times, and the same pattern happens (saturation then fall off); this does seem to be affected by the spatial arrangements though. The dma-ing of memory doesn’t require space as it just overwrites what is there.

  7. BTW the code for the VM is very simple:
    It checks the stack count before running an operation (this is a quite recent addition; it used to return 0 for an empty stack).
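    That guard might look something like the following – a sketch of the behaviour described, with assumed names, not the actual Betablocker source:

    ```python
    def exec_dma(stack, do_copy, legacy=False):
        """Execute dma: with the stack-count check (default) an empty
        stack means no copy; in the legacy behaviour an empty stack
        read as 0, so dma would copy from address 0."""
        if stack:
            do_copy(stack.pop())  # broadcast from the popped address
            return True
        if legacy:
            do_copy(0)            # old VM: empty stack returned 0
            return True
        return False              # new VM: no copy happens
    ```

    The legacy behaviour matters because it would have made bare dma instructions replicate from address 0 without needing a push first.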

    I’ll try and get some time to post some histograms of instruction counts – of course the presence of an opcode in memory doesn’t mean it’s executed, as it might be treated as an operand, but it’ll give some indication.
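    Collecting such a histogram is straightforward – a sketch, with the caveat above that a byte counted as an opcode may in fact be executed as an operand:

    ```python
    from collections import Counter

    def opcode_histogram(memories):
        """Histogram of byte values across all core memories.  Whether
        a byte acts as an opcode or an operand depends on execution, so
        this only indicates how the memory make-up shifts over time."""
        counts = Counter()
        for mem in memories:
            counts.update(mem)
        return counts

    # e.g. snapshot this every N cycles and compare the frequency of
    # the dma opcode against its initial expectation of 1/256
    ```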

  8. Oh, I see! I’m headed off to camp for a few days but will grab the code when I get back. This is interesting. Nicely done!

  9. “What I haven’t understood yet is why the density then falls lower and fluctuates radically after reaching some critical saturation point.”

    I would say once the system reaches uniform saturation it behaves chaotically, i.e. any system that uses its previous step as an input to its next output is a chaotic system. This isn’t anything new, but what would be new is if some sort of emergent behaviour came out of this. If you control the seed so as to keep each run the same, then vary one parameter and explore the space (perhaps a GA would be good for this?), you might find something interesting (but it could take a looooooooong time).
