Pattern matrix – putting it together

Here is a member of staff at Miners Court trying some tangible weave coding in the midst of our crafts area. At the moment it simply displays the weave structure on the simulated warp-weighted loom with a single colour each for warp and weft threads; the next step is to get ‘colour & weave’ patterns working.

[Photo: a member of staff trying the pattern matrix at Miners Court]

The pattern matrix is the second generation of tangible programming device from the weavecoding project. It’s been built as an open hardware project in collaboration with Falmouth University’s Makernow fablab, who have designed and built the chassis using many 3D printed parts and assembled the electronics using surface mount components (far beyond my stripboard skills).

Here you can see the aluminium framework supporting the AVR based row controller boards with the Raspberry Pi in the corner. The hall effect sensors detect magnetic fields – this picture was taken before any of the wiring was started.

[Photo: the aluminium framework and row controller boards before wiring]

The row controllers are designed to read the sensor data and dispatch it to the Raspberry Pi over i2c serial communication, running on their atmega328 processors. This design was arrived at after the experience of building flotsam, which centralised all of the logic in the Raspberry Pi and so needed a lot of wiring to collect the 128 bits of information and pass them to the GPIO port on the Pi. Using i2c has the advantage that you only need two wires to communicate everything, processing can be distributed, and the system is far more modular and extensible in future. In fact we plan to try different sensors and configurations – so this is a great platform for experimenting with tangible programming.
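To give a feel for the Pi side of this, reading a byte of sensor state from each row controller only takes a few lines using the Linux i2c-dev interface. Here's a rough sketch in C – the slave addresses and the one-byte-per-controller protocol are made up for illustration rather than being the actual pattern matrix code:

/* Poll a set of row controllers over i2c from the Raspberry Pi using the
   Linux i2c-dev interface. The addresses and the 'one byte of sensor bits
   per controller' protocol are assumptions for illustration only. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int bus = open("/dev/i2c-1", O_RDWR);              /* the Pi's i2c bus */
    if (bus < 0) { perror("open"); return 1; }

    const int row_addr[] = { 0x10, 0x11, 0x12, 0x13 }; /* hypothetical slaves */

    for (unsigned i = 0; i < sizeof(row_addr) / sizeof(row_addr[0]); i++) {
        if (ioctl(bus, I2C_SLAVE, row_addr[i]) < 0) {  /* select this slave */
            perror("ioctl");
            continue;
        }
        unsigned char bits = 0;
        if (read(bus, &bits, 1) == 1)                  /* one byte of sensor states */
            printf("row %u: 0x%02x\n", i, bits);
    }
    close(bus);
    return 0;
}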

This video shows the current operation of the sensors and row controllers. I’ve programmed the board with test code that displays the state of the magnetic field with the status LED, making sure that it can tell the orientation of the programming block:

The row controllers have a set of multiplexers that let you choose between 20 sensor inputs, all routed to a single analogue pin on the AVR. We’re only using digital readings here, but it means we can try totally different combinations of sensors without changing the rest of the hardware.
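Reading one sensor through the multiplexers boils down to driving the select lines and sampling the analogue pin. Roughly, it looks something like the sketch below – the pin assignments, clock speed and ADC channel are illustrative rather than the real board layout:

/* Rough sketch of reading a hall effect sensor via the multiplexers:
   drive the mux select lines, let the signal settle, then sample the
   analogue pin with the ADC. Pins and channel counts are illustrative. */
#define F_CPU 8000000UL                     /* assumed clock, for _delay_us */
#include <avr/io.h>
#include <util/delay.h>

static void sensors_init(void)
{
    DDRD  |= (0x1f << 2);                   /* mux select lines on PD2..PD6 */
    ADMUX  = (1 << REFS0);                  /* AVcc reference, channel ADC0 */
    ADCSRA = (1 << ADEN) |                  /* enable the ADC */
             (1 << ADPS2) | (1 << ADPS1);   /* clock prescaler /64 */
}

static uint16_t read_sensor(uint8_t channel)
{
    /* put the channel number on the select lines and wait for it to settle */
    PORTD = (PORTD & ~(0x1f << 2)) | ((channel & 0x1f) << 2);
    _delay_us(10);

    ADCSRA |= (1 << ADSC);                  /* start a conversion */
    while (ADCSRA & (1 << ADSC));           /* wait for it to finish */
    return ADC;                             /* 10 bit result */
}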

After getting the first couple of rows working and testing it with elderly people at our Miners Court residency, there were a couple of issues. Firstly the magnets were really strong, and I worried about leaving it unattended with the programming blocks snapping together so violently (as we plan to use it in museum settings as well as at Miners Court). The other problem was that even with strong magnets, the placement of the blocks needed to be very precise. This is probably due to the shape of the magnets, and the fact that the fields bend around them and reverse within quite short distances of their edges.

To fix these bugs it was a fairly simple matter to take the blocks apart, remove 2 of the 3 magnets and add some rings to guide placement over the sensors properly:

[Photo: the modified programming blocks with guide rings fitted]

Adventures with i2c

In order to design the next version of the flotsam hardware I need to make it cheaper and easier to build. The existing hardware was very cheap in terms of components but expensive in terms of the time it took to construct! With this lesson learned, and with a commission on the horizon, I need to find a simpler and more flexible approach to communication between custom hardware and the Raspberry Pi – mainly one that doesn’t require so many wires.

i2c is a serial communication standard used by all kinds of components from sensors to LCD displays. The Raspberry Pi comes with it built in, as the Linux kernel supports it along with various tools for debugging. I also have some Atmel processors from a previous project and there is quite a bit of example code for them. I thought I would post a little account of my troubleshooting on this for others that follow the same path, as I feel there is a lot of undocumented knowledge surrounding these slightly esoteric electronics.

The basic idea of i2c is that you can pass data between a large number of independent components using only two wires to connect them all. One is for data, the other is for a clock signal used for synchronisation.

Debugging LEDs

I started off with the attiny85 processor, mainly as it was the first one I found, along with this very nice and clear library. I immediately ran into a couple of problems. One was that while it has support for serial communication built in (USI), you have to implement i2c on top of this, so your code needs to do a lot more. The second was that with only 8 pins the attiny85 is not great for debugging. I enabled i2c on the Raspberry Pi, hooked it up and ran i2cdetect – no joy. After a lot of fiddling around with pull-up resistors and changing voltages between 3V and 5V, either no devices were detected or all of them were, with all reads returning 0 (presumably logic pulled high for everything) or noise – nothing seemed to make any difference.

After a while (and trying other i2c slave libraries to no avail) I switched to an atmega328 processor using this library, which includes a Makefile! One thing I’ve noticed that would make learning this stuff much easier is more complete toolchains in example code, including the right #defines and fuse settings for the processor. However this code didn’t work at first either, and my attempts at using debugging LEDs on PORTB failed until I figured out they were conflicting with the UART i/o used in the example code – once I realised this wasn’t part of the i2c code I removed it, and the Raspberry Pi could at last see the device with i2cdetect. With the addition of some LEDs I could check that the bytes being sent were correctly written to the internal buffer at the correct addresses.
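For anyone following the same path, the part that actually makes the atmega show up in i2cdetect is a tiny amount of TWI setup, roughly like this – the slave address is arbitrary here, and the real request handling lives in the library's interrupt routine:

/* Bare-bones atmega328 TWI slave setup - enough for the device to appear
   in i2cdetect on the Pi. The address is arbitrary for the example. */
#include <avr/io.h>
#include <avr/interrupt.h>

#define SLAVE_ADDRESS 0x04

void twi_slave_init(void)
{
    TWAR = SLAVE_ADDRESS << 1;              /* our own slave address        */
    TWCR = (1 << TWEN)  |                   /* enable the TWI module        */
           (1 << TWEA)  |                   /* ack when our address is seen */
           (1 << TWIE)  |                   /* fire the TWI interrupt       */
           (1 << TWINT);                    /* clear any pending flag       */
    sei();                                  /* interrupts on                */
}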

It finally works!

Reading was another matter, however. Most of the time i2cget on the Pi failed, and when it did work it only returned 0x65, no matter what the parameters were. I’d already read extensively about the Raspberry Pi’s i2c clock stretching bug and applied various fixes, which didn’t seem to make any difference. What did the trick was removing the clock divide from the atmega’s fuses – by default it runs at 8MHz but divides the instruction clock down to 1MHz – and without that it could keep up with the Pi’s implementation, so all reads were successful. I still had to solve the ‘0x65 problem’, and went into the i2c code to try and figure out what was going on (using 8 LEDs to display the i2c status register). It seems that reading single bytes one at a time is done by issuing a TW_ST_DATA_NACK rather than a TW_ST_DATA_ACK, as the master sends a not-acknowledge for the last byte it reads. This state is not supported by the library, so after fiddling around with it for a bit I switched over on the Pi’s side to the python smbus library and tried read_i2c_block_data, which reads 32 bytes at a time. The first byte is still 0x65 (101 in decimal, in the photo above) but the rest are correct. I’ll need to read a bit more of the i2c protocol to figure that one out (and get the attiny working eventually), but at least it’s enough for what I need now.
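In case it helps anyone else, the gist of the slave-transmitter states involved looks something like the sketch below, with TW_ST_DATA_NACK handled alongside TW_ST_DATA_ACK so a single byte read doesn't wedge the state machine. The status names come from avr-libc's util/twi.h; the buffer handling is simplified for illustration:

/* Sketch of the slave transmitter states in the TWI interrupt, showing why
   single byte reads need TW_ST_DATA_NACK handled: the master nacks the
   last (here, the only) byte it wants. Receive states are omitted. */
#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/twi.h>

volatile uint8_t reg[32];                   /* register buffer the Pi reads */
volatile uint8_t reg_pos;                   /* current read position        */

ISR(TWI_vect)
{
    switch (TW_STATUS) {
    case TW_ST_SLA_ACK:                     /* addressed for reading        */
    case TW_ST_DATA_ACK:                    /* last byte acked, send more   */
        TWDR = reg[reg_pos++ & 31];
        break;
    case TW_ST_DATA_NACK:                   /* master nacked: end of read   */
    case TW_ST_LAST_DATA:
        break;                              /* nothing more to send         */
    default:
        break;
    }
    /* re-arm the TWI module for the next bus event */
    TWCR = (1 << TWIE) | (1 << TWINT) | (1 << TWEA) | (1 << TWEN);
}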

Handy collection of pinouts

Spork Factory: evolving a light follower robot

Continuing with the structured procrastination R&D project on evolvable hardware, I’m proud to report a pretty decent light following robot – this is a video of the first real-world test, with a program grown from primordial soup chasing me around:

After creating a software model simulation of the robot in the last post, I added some new bytecode instructions for the virtual machine: LEYE and REYE push 1 onto the stack if we are detecting light from the left or right photoresistor, zero if it’s dark. LMOT and RMOT pop the top byte of the stack to turn the motors on and off. The genetic algorithm’s fitness function runs each generated 16 byte program on the robot for 1000 cycles, moving the robot to a new random location and facing direction 10 times without stopping the program. At the end of each run the robot’s position is compared to the light position, and the distances are averaged as the fitness. Note that we’re not assigning fitness to how fast the robot gets to the light.
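Inside the virtual machine's dispatch loop the new instructions are tiny – something like the following sketch, where the opcode numbers, stack helpers and sensor/motor functions are stand-ins for illustration rather than the actual Spork Factory source:

/* Sketch of the four new opcodes in the bytecode interpreter's dispatch.
   Opcode values and helper functions are stand-ins for illustration. */
enum { LEYE = 20, REYE, LMOT, RMOT };

extern void push(unsigned char v);           /* push a byte onto the stack */
extern unsigned char pop(void);              /* pop the top of the stack   */
extern int read_light(int left);             /* 1 if light on that side    */
extern void set_motor(int left, int on);     /* switch a motor on or off   */

void exec_instruction(unsigned char op)
{
    switch (op) {
    case LEYE: push(read_light(1) ? 1 : 0); break;  /* left photoresistor  */
    case REYE: push(read_light(0) ? 1 : 0); break;  /* right photoresistor */
    case LMOT: set_motor(1, pop() != 0); break;     /* left motor from stack  */
    case RMOT: set_motor(0, pop() != 0); break;     /* right motor from stack */
    default: break;                  /* everything else is handled by the VM */
    }
}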

This is pretty simple stuff, but it’s still interesting to look at what happens over time in the genetic algorithm. Both motors are running at startup by default, so the first successful programs learn how to turn one motor off – otherwise the robot just shoots off and scores really low. So the first generations tend to just go round in circles. Then they start to learn how to plug the eyes in, one by one edging them closer to the goal – then it’s a case of improving the sample rate to improve accuracy, usually by using jmps and optimising the loops.

This is an example of a fairly simple and effective solution, the final generation shown in the animation above:

loop:
  leye 
  rmot 
  nop nop nop nop nop 
  reye 
  lmot 
  or 
  nop nop nop nop
  rmot 
  jmpz loop

Some explanation: the right and left eyes are plugged into the left and right motors, which is the essential bit that makes it work, and the ‘nop’s are all values that are not executable. The ‘rmot’ before the ‘jmpz’ makes the robot scan around in circles if no light is detected (strangely, a case which doesn’t happen in the simulation). The argument to ‘jmpz’ is 0 (loop), which is actually the 17th byte – so it’s cheekily using memory which has been initialised to zero as part of its program.

This is a stranger, more complicated program which evolved after 70 generations with a high fitness; I haven’t worked out what it’s up to yet:

  pshl 171 
loop:
  lmot 
  leye 
  pip 111 
  pip 30 
  rmot 
  reye 
  pshl 214 
  nop 
  lmot 
  jmp loop

Mexican livecoding style

At only around 2 years old, the Mexican livecoding scene is pretty advanced. Here are images of (I think) all of the performances at /*vivo*/ (Simposio Internacional de Música y Código 2012) in Mexico City, which included lots of processing, fluxus, pure data and ATMEL processor bithop along with supercollider and plenty of non-digital techniques too. The from-scratch technique is considered important in Mexico, with most performances using this creative restriction to great effect. My comments below are firmly biased in favour of fluxus, not considering myself knowledgeable enough for thorough examinations of supercollider usage. Also there are probably mistakes and misappropriations – let me know!

Hernani Villaseñor, Julio Zaldívar (M0M0) – A performance of contrasts between Julio’s C-coded 8bit-shifting ATMEL sounds and Hernani’s from-scratch supercollider scripts, both building up in intensity through the performance, a great opener. A side effect of Julio using avrdude to upload code was the periodic sonification of bytecode as it spilled into the digital to analogue converter during uploads. He was also using an oscilloscope to visualise the sound output, with some of the code clearly designed for its visuals as well as its crunchy sounds.

Mitzi Olvera and Alejandro Franco – I’d been aware of Mitzi’s work for a while from her fluxus videos online, so it was great to see this performance. She made good use of the fluxus immediate mode primitives, and started off by restricting them to points mode only while building up a complex set of recursive patterns, switching render hints to break the performance down into distinct sections. She neatly transitioned from the initial hard lines and shapes all the way to softened transparent clouds. Meanwhile Alejandro built up the mix and blasted us with Karplus-Strong synthesis, eventually forcing scserver to its knees by flooding it with silent events.

Julian Rohrhuber, Alberto de Campo – A good chunk of powerbooks unplugged (plugged in) from Julian and Alberto, starting with a short improvisation before switching to a full composition explored within the republic framework, sharing code and blending their identities.

Martín Zumaya (Stereo Vision), José Carlos Hasbun (joseCaos) – It was good to see Processing in use for livecoding, and Martin improvised a broad range of material until concentrating on iconic minimal constructions that matched well with José’s sounds – a steady build up of dark poly-rhythmic beats with some crazy feedback filtering mapped to the mouse coordinates to keep things fluid and unpredictable.

IOhannes Zmölnig – pure data morse code livecoded in Braille. This was an experiment based on his talk earlier that day, a study in making the code as hard to read for the performer as the audience. In fact the resulting effect was beautiful, ending with the self modification of position and structure that IOhannes is famous for – leaving a very consistent audio/visual link to the driving monotonic morse bass, bleeps and white noise.

Radiad3or (Jaime Lobato, Alberto Cerro, Fernando Lomelí, Iván Esquinca and Mauro Herrera) – part 1 was human instruction, an analogue performance as well as a comment on the inadequacy of livecoding for a computer, with commands like “changeTimbre” for the performers to interpret using their voices, a drumkit, flutes and a didgeridoo. Following this, part 2 was about driving the computer with these sounds, inverting it into a position alongside or following the performers rather than acting as a mediator – being reprogrammed by the music. This performance pushed the concept of livecoding to new levels, leaving us in the dust, still coming to terms with what we were trying to do in the first place!

Benoît and the Mandelbrots (live from Karlsruhe) – a remote performance from Germany. The Mandelbrots dispatched layer upon layer of synthesised texture, along with their trademark in-performance text chat, a kind of code unto itself and a view into their collective mind. The time lag issues involved with remote streaming, not knowing what or when they could see of us, added an element to this performance all of its own. As did the surprise appearance of various troublemakers in the live video stream…

Jorge Ramírez – another remote performance, this time from Beijing, China. Part grimy glitch and part sonification of firewalls and the effects of imagined or real monitoring and censorship algorithms, this was powerful, and included more temporal disparity – this time caused by the sound arriving some time before the code that described it.

Si, si, si (Ernesto Romero Mariscal Guasp y Luciana Renner Maceralli) – a narrative combination of Luciana’s performance art, tiny webcam augmented theatre sets, and Ernesto’s supercollider soundtrack. Livecoding hasn’t ventured into storytelling much yet, and this performance indicated that it should. Luciana’s inventive use of projection with liquids and transparent fibres reminded me of the early days of film effects and was a counterpoint to Ernesto’s synthesised ambience and storytelling audio.

Luis Navarro, Emilio Ocelotl – ambitious stuff this – dark dubsteppy sounds from Emilio, driving parameters of a from-scratch fluxus sierpinski fractal exploration from Luis. Similar to Mitzi’s performance, Luis limited his scene to immediate mode primitives, a ternary tree recursion forming the basis for constantly morphing structures.

Alexandra Cárdenas, Eduardo Obieta – Something very exciting I noticed with sound/visual pairs such as Alexandra and Eduardo was a tendency for the sounds to be designed with the visuals in mind – e.g. the use of contrasting frequencies that could be picked out well by fft algorithms. This demonstrated a good mutual understanding, as well as a challenge to the normal DJ/VJ hierarchy. Eduardo fully exercised the NURBS primitive (I remember it would hardly render at 10fps when I first added it to fluxus!), exploding it to the sound input before unleashing the self-test script to end the performance in style!

Eduardo Meléndez – one of the original Mexican livecoders, programming audio and visuals at the same time! Not only that, but text (supercollider) and visual programming (vvvv) in one performance too. I would have liked to pay closer attention to this one, but I was a bit nervous.

Slub finished off the performances, but I’ll write more about that soon as material comes in (I didn’t have time to take any photos!).