Clip of a small and extremely fun livecoding jam at foam kernow alongside Federico Reuben and Francesca Sargent:
Category: Livecoding
Pattern Matrix hardware build
More photos from the Makernow fablab, where the pattern matrix weavecoding device is starting to take shape! All schematics, 3D printable shapes and plans will be released soon as open source hardware on the weavecoding repo.
Screenless music livecoding
Programming music with flotsam – for the first time, it’s truly screen-less livecoding. All the synthesis is done on the Raspberry Pi too (a Raspbian release is in the works). One of the surprising things I find with tangible programming is the enforced scarcity of tokens: having to move them around creates a constraint that is good to play around with, in contrast to being able to simply type more stuff into a text document.
The programming language is pretty simple and similar to the yarn sequence language from the weavecoding project. The board layout consists of 4 rows of 8 possible tokens. Each row represents a single L-system rule:
Rule A: o o o o o o o o
Rule B: o o o o o o o o
Rule C: o o o o o o o o
Rule D: o o o o o o o o
The tokens themselves consist of 13 possible values:
a,b,c,d : The 'note on' triggers for 4 synth patches
. : Rest one beat
+,- : Change current pitch
<,> : Change current synth patch set
A,B,C,D : 'Jump to' rule (can be self-referential)
No token : Ends the current rule
The setup currently runs to a maximum depth of 8 generations – so a rule referring to itself expands 8 times. A single rule ‘A’ such as ‘+ b A - c A’ expands to this sequence (the first 100 symbols of it anyway):
+b+b+b+b+b+b+b+b+b+b-c-c+b-c-c+b+b-c-c+b-c-c+b+b+b-c-c+b-c-c+b+b-c-c+b-c-c+b+b+b+b-c-c+b-c-c+b+b-c-c
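For anyone wanting to play along without the hardware, here's a minimal Python sketch of this style of rule expansion (the depth bookkeeping in the real flotsam code may differ – the depth value here is just chosen so that the output matches the prefix above):

```
# Minimal sketch of the rule expansion: rule names (A-D) expand recursively,
# other tokens are emitted as-is, and any rule reference still left when the
# depth runs out is simply dropped.

def expand(rules, symbol, depth):
    if symbol not in rules:        # terminal token: emit it
        return symbol
    if depth == 0:                 # out of depth: drop the reference
        return ""
    return "".join(expand(rules, s, depth - 1) for s in rules[symbol])

rules = {"A": "+bA-cA"}               # rule A: + b A - c A
print(expand(rules, "A", 10)[:100])   # the first 100 symbols, as printed above
```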
I’m still working on how best to encode musical structures this way, as it needs a bit more expression – one thing to try is running the rules in parallel so you can have different sequences playing at the same time. With a bit more tweaking (and with upcoming hardware improvements) the eventual plan is to use this on some kids’ programming teaching projects.
Livecoding UAVs for environmental research
Some screenshots of the UAV livecoding visual programming language. Weather being on our side, we’re planning some test flights later this week! The first program uses GPS to take photos with an overlap of 50% at 300 metres altitude, based on the vertical camera angle as reported by the device. It assumes that the flight orientation is level:
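The geometry behind that is roughly as in this sketch, where the 47.5 degree view angle is a made-up example value (the real program reads the vertical view angle reported by the phone’s camera):

```
# Rough version of the 50% overlap calculation (47.5 degrees is a made-up
# example - the real program reads the vertical view angle from the camera).
import math

def trigger_distance(altitude_m, vertical_view_deg, overlap=0.5):
    # ground distance covered along the flight direction by a single photo
    footprint = 2 * altitude_m * math.tan(math.radians(vertical_view_deg) / 2)
    # distance to travel between photos to get the requested overlap
    return footprint * (1 - overlap)

print(trigger_distance(300, 47.5))   # ~132m between photos at 300m altitude
```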
The blocks are all drag and drop and get converted into Scheme code which is run by a modified tinyscheme interpreter. The code can be saved and loaded, and I’m planning to make it possible for people to share code via email.
This is a simpler program which takes a photo every 3 seconds and records a handful of sensor data to the database:
At the bottom you can see a squashed camera preview – I’ve tried various approaches (hiding it, scaling it to 0 pixels etc.) but Android requires that a preview exists somewhere in order to take a photo properly. You can also view the recorded data on the device, for checking. There is also a ‘flight mode’ which locks and turns off the screen and ignores all button events. On some phones you need to take out the battery to stop the program running, but unfortunately on others you can still use the power button to close the program.
Visual programming for environmental research with UAVs
I’ve recently begun a new project with Karen Anderson, who runs the UAV research group at the University of Exeter’s Environment and Sustainability Institute. We’re looking at using commodity technology like Android phones for environmental research with drones. Ecology research groups and environmental agencies have started using drones as a replacement for expensive and risky light aircraft for gathering data on changes to landscapes due to climate change and erosion. How can we make tools that are simpler and cheaper for them to set up and use? Can our software also be relevant for children using kites in cities for making their own maps, or farmers wishing to record changes to their own fields themselves?

This is a more open-ended project than our previous environmental and behavioural projects, so we’re able to approach it with an R&D perspective on the technology. One of the patterns I’ve noticed with this kind of work is that after providing scientists with something that meets their immediate needs, it inspires a ton of new ideas and directions – and I become a bottleneck. Ideally I need to provide something that allows them to build things themselves once they have an understanding of all the possibilities. Adapting to needs ‘in the field’ is also an important aspect of the kind of work they do, which can take place in remote locations anywhere in the world.
Some time ago I had a go at porting my musical livecoding language Scheme Bricks to Android for the Open Sauces project. I’m now applying it as a way of configuring sensor data acquisition and recording by drag/dropping a visual programming language. It’s early days yet – I’m still debugging the (actually rather amazing) Android drag/drop API – here are some initial screenshots.



Tangible weavecoding at the Makernow Fablab
Demoing the flotsam tangible livecoding device at the Makernow Fablab in Penryn. Future revisions of the hardware will need to fix some of the problems identified in testing, as well as work in the gallery exhibitions we have planned for next year.
Flotsam: A prototype screenless livecoding language
Two languages are now working with Flotsam, the new name for the prototype screenless tangible programming system I’ve been building (the name comes from the fact that it’s largely made from driftwood). It’s somehow already been featured on the Adafruit blog!
The circuit seems to be fully debugged now, with short circuits fixed – which took a little while and more than a little frustration 🙂 The Raspberry Pi Python code is currently on the weavingcodes repository (more on this project on the kairotic site), and the first language is a declarative-style L-system for describing weave structure and pattern with yarn width and colour. The LEDs indicate that evaluation happens simultaneously, as this is a functional language. The blocks represent blue and pink yarn in two widths, with rules to produce the warp/weft sequence based on the rows the blocks are positioned on:
The weaving simulation is written in pygame (which I’ve been using lately for teaching), and is deliberately designed to allow alternative weave structures beyond those possible with Jacquard looms, by including yarn properties. The version in the video is plain weave, but more complex structures can be defined as below – in the same way as Alex’s gibber software:
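As a rough illustration of the underlying idea (a sketch of mine, not the actual pygame code on the repo): a weave structure is a small grid of over/under crossings that gets tiled across the warp and weft colour sequences to produce the visible pattern.

```
# Sketch of tiling a weave structure over warp/weft colour sequences
# (not the actual weavingcodes pygame code).

def weave(structure, warp_colours, weft_colours, width, height):
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            # 1 means the warp thread shows on top at this crossing
            warp_up = structure[y % len(structure)][x % len(structure[0])]
            row.append(warp_colours[x % len(warp_colours)] if warp_up
                       else weft_colours[y % len(weft_colours)])
        rows.append(row)
    return rows

plain = [[1, 0],    # plain weave: over, under, over, under...
         [0, 1]]
for row in weave(plain, "bp", "bp", 8, 8):   # b = blue yarn, p = pink yarn
    print(" ".join(row))
```

Swapping the structure for a 2/1 twill such as [[1,1,0],[0,1,1],[1,0,1]] changes the resulting pattern without touching the colour sequences.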
The second language is completely different: an imperative, stack-based language for building shapes in Minecraft by driving a turtle in 3D space. Eventually (when I’ve manufactured a few more programming blocks) it will be possible to change Minecraft block materials and react to player actions. Here the LEDs indicate the more sequential evaluation of this Forth-like language:
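To give a flavour of what a stack-based turtle language like this looks like in execution, here's a toy interpreter – the token names are invented for illustration and aren't the actual flotsam block set:

```
# Toy interpreter for a Forth-like turtle in 3D space - the token names are
# invented for illustration, not the actual flotsam block set.

def run(tokens):
    pos, heading = [0, 0, 0], [1, 0, 0]      # start at the origin, facing along x
    stack, blocks = [], []
    for t in tokens:
        if t.isdigit():                      # numbers go on the stack
            stack.append(int(t))
        elif t == "forward":                 # step and place blocks as we go
            for _ in range(stack.pop() if stack else 1):
                pos = [p + h for p, h in zip(pos, heading)]
                blocks.append(tuple(pos))
        elif t == "turn":                    # rotate 90 degrees around z
            heading = [-heading[1], heading[0], heading[2]]
        elif t == "up":                      # point straight up
            heading = [0, 0, 1]
    return blocks

print(run("3 forward turn 2 forward up 2 forward".split()))
```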
All that’s needed to switch languages is to redraw the symbols with chalk and run a different script. It won’t be truly screenless until I write a musical language for it, which is obviously coming soon…
Learning to read, notate and compute textiles in Aarhus
Setting off from Copenhagen, the weaving codes tour continued as Emma Cocker, Alex McLean, Ellen Harlizius-Klück and I sped across the Danish countryside on the train. We were heading for Aarhus to spend some time working with people at the other end of the technology spectrum – the Center for Participatory IT at Aarhus University.
We were invited by Geoff Cox to run a workshop over two days, and given the nature of the faculty and that time working together is at a premium on this project, we decided to run the workshop more as an invitation to join in our work (and therefore learn a bit about how artistic research is done), rather than having specific material to cover.
Alex and I needed to learn from Ellen how to ‘read’ a textile sample and notate its structure, a kind of reverse engineering process. Day one consisted of sharing out different types of weave and carefully figuring out the crossing points of warp and weft. The first challenge is to find the smallest repeat of the pattern, then record on graph paper a cross wherever a warp thread shows over the weft. The big surprise is that the weave does not obviously relate to the visible pattern. It can also be hard to determine from a small sample which direction is warp and which is weft, so you can choose this arbitrarily if need be.
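The ‘smallest repeat’ step is essentially a search for the smallest tile that regenerates the whole recorded grid – something like this sketch (my own formulation of the exercise, using 1 where the warp shows over the weft):

```
# Find the smallest repeating tile of a recorded weave grid,
# where 1 = warp shows over the weft and 0 = weft shows over the warp.

def smallest_repeat(grid):
    height, width = len(grid), len(grid[0])
    for rh in range(1, height + 1):
        for rw in range(1, width + 1):
            if all(grid[y][x] == grid[y % rh][x % rw]
                   for y in range(height) for x in range(width)):
                return [row[:rw] for row in grid[:rh]]

sample = [[1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1]]
print(smallest_repeat(sample))   # plain weave reduces to a 2x2 tile
```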
We could then compare different weaves, and also the different notation styles that people used – little shorthand ways of describing large areas of plain weave, for example. Much of the fabric came from Ellen’s Pepita Virus exhibition, and contained many different sizes of the “dog tooth” pattern. By comparing the notation we had all recorded for our different samples, we could tell that to scale up a pattern it’s not simply a case of scaling up the weave – the structure needs to change completely, across 4 or 5 different types.
Day two consisted of testing Alex’s Javascript loom/computer, by entering the weave structure and colour sequences and checking if the resulting patterns matched. By livecoding, we could also play with the patterns without the need to physically weave them, in order to gain a better insight into how changes affected the resulting pattern – and a better overall understanding of the weaving process.
To finish up, I talked about the patterns inherent in computation, based on my previous Z80 explorations, to show how the pattern-creating origins of computation are still present, and the role of programming languages as ways for people to come to a shared understanding via notation, rather than simply telling a machine what to do.
Unravelling technology in Copenhagen
Last week the weavingcodes/codingweaves project started with a trip to Denmark; our first stop was the Centre for Textile Research in Copenhagen, where we presented the project and gathered as much feedback as possible right at the beginning. The CTR was introduced to us by Eva Andersson Strand, and is an interdisciplinary centre which focuses on the relationships between textiles, environment and society.
This long-view perspective of technology is critical for us, as we are dealing with a combination of thinking in the moment via livecoding and a history of technology dating back to the neolithic. This is a warp weighted loom, the focus of much of Ellen Harlizius-Klück’s research and the technology we are going to be using for the project.
Weights like this are widespread in the archaeological record for many cultures around the world, with the earliest ones dating to around 5000 BC. Alongside them – a post-it note including a handy cuneiform translation:
Alex talked about livecoding as a backwards step, removing the interface – thinking about it as an unravelling of technology. His introduction to Algorave led to many connections later when Giovanni Fanfani described the abstract rhythmic patterns of Homeric rhapsodic poetry. These were performed by citizens, in a collaborative and somewhat improvised manner – the structures they form musically and in language are potentially of interest as they seem to echo the logic of weaving pattern.
Ellen described her research into the tacit knowledge of ancient Greek society – how weaving provided thinking styles and ordering concepts for the earliest forms of mathematics and science – which is the basis for much of the weavingcodes project. One additional theme that has come up fairly consistently is cryptography – Flavia Carraro’s description of ‘The Grid in the decipherment of the Linear B writing system: a “paper-loom”?’ was another addition to this area.
Emma Cocker talked about Penelopean time – constant weaving and unravelling as a subversive act – and the concept of kairos as timely action: the name given to the moment at which the weft is passed while the warp ‘shed’ is open, as well as to a part of the warp weighted loom. Her input was to provide a broader view to our explorations (as coders, weavers and archaeologists all tend to get caught up in technical minutiae from time to time). From our discussions it was apparent that one of the strongest connections between livecoding and ‘weaving as thought’ is the subversion of a form of work that the dominant culture considers entirely utilitarian.
Thinking outside of the screen (#1)
I’m starting a new exploratory project to build a screen-less programming language based on two needs:
- A difficulty with teaching kids programming in my CodeClub, where they become lost ‘in the screen’. It’s a challenge (for any of us really, but for children particularly) to disengage and think differently – e.g. to draw a diagram to work something out, or to work as part of a team.
- A problem with performing livecoding, where a screen represents a spectacle, or even worse – a ‘school blackboard’ that as an audience we feel we are expected to understand.
I’ve mentioned this recently to a few people and it seems to resonate, particularly in regard to a certain mismatch between children’s ability to manipulate physical objects and their fluid touchscreen usage. So, with my mind on the ‘pictures under glass’ rant and taking betablocker as a starting point (and weaving code as one additional project this might link with), I’m building some prototype hardware to provide the Raspberry Pi with a kind of external physical memory, comprising symbols made from carved wood or 3D printed shapes, while still describing the behaviour of real software. I also want to avoid computer vision in favour of a more understandable ‘pluggable’ approach, with less slightly faulty ‘magic’ going on.
Before getting too theoretical I wanted to build some stuff – a flexible prototype for figuring out what this sort of programming could be. The Raspberry Pi has 17 configurable I/O pins on its GPIO interface, so I can use 5 of them as an address lookup (for 32 memory locations to start with, expandable later) and 8 bits as input for code or data values at these locations.
The smart thing would be to use objects that identify themselves with a signal, using serial communication down a single wire with a standard protocol. The problem with this is that it would make potential ‘symbol objects’ themselves fairly complicated and costly – and I’d like to make it easy and cheap to make loads of them. For this reason I’m starting with a parallel approach where I can just solder across pins on a plug to form a simple 8 bit ID, and restrict the complexity to the reading hardware.

I’ve got hold of a bunch of 74HC4067 multiplexers, which allow you to select one signal from 16 inputs (or the other way around) using 4 select bits – stacking them up, one for each of the 8 data bits across 16 memory locations. This was the furthest I could go without surface mount ICs (which are well beyond my wonky soldering abilities).
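A minimal sketch of the reading side on the Pi using the standard RPi.GPIO library – the pin numbers are placeholders and the select/data wiring will depend on the finished board:

```
# Sketch of scanning the tangible 'memory': drive the 4 multiplexer select
# lines to choose a location, then read the 8 data bits for the token ID
# plugged in there. Pin numbers are placeholders for the final wiring.
import RPi.GPIO as GPIO

SELECT_PINS = [2, 3, 4, 17]                  # 4 bit address -> 16 locations
DATA_PINS = [27, 22, 10, 9, 11, 5, 6, 13]    # 8 bit token ID

GPIO.setmode(GPIO.BCM)
for pin in SELECT_PINS:
    GPIO.setup(pin, GPIO.OUT)
for pin in DATA_PINS:
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def read_location(addr):
    for bit, pin in enumerate(SELECT_PINS):   # set the mux address lines
        GPIO.output(pin, (addr >> bit) & 1)
    value = 0
    for bit, pin in enumerate(DATA_PINS):     # assemble the 8 bit value
        value |= GPIO.input(pin) << bit
    return value

print([read_location(a) for a in range(16)])
```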



Now that 4 bits are working it’s harder to test with an LED, so next up is getting the Raspberry Pi attached.