Debugging MIDI bytes with sonification

I’m currently working on some hardware for interfacing a Raspberry Pi 3 with MIDI. You can of course use a normal USB audio interface, and there is a ready-made MIDI HAT module for the Pi already – but I only need MIDI out, and it shouldn’t be a problem to come up with something simple using the GPIO pins.

(Debugging: Wrong clock rate – but we have data! I ended up debugging by sonifying the MIDI bytes with a piezo speaker)

I followed the instructions here, expanded here and updated for Raspberry Pi 3 here – and using bits of all of them I managed to get it working in the end, so I thought it would be a good idea to document it here.

The main idea is that the Raspberry Pi already contains a serial interface we can reconfigure to send MIDI bytes, rather than bolting on lots of new hardware. All you need to do is free it from its default purposes: providing an external terminal connection, and serving as the bluetooth interface on the Pi 3. We also need to slow it right down to the 1980s MIDI baud rate of 31,250 baud. You can then connect your MIDI cable to the Raspberry Pi’s ‘TX’ pin with a little bit of buffering hardware in between. This is needed to bring the signal up to 5V from the Pi’s 3.3V logic output (but as the buffers need to be Schmitt triggers, perhaps an additional function is to ‘hold’ the voltage for longer). I tried a normal TTL buffer and it didn’t work, but I didn’t look into this too closely. As an aside, I found a UK-made IC in my stockpile:
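Once the port is freed up, a MIDI note-on is just three bytes sent down the wire. Here’s a minimal Python sketch of building and sending those bytes; it assumes pyserial is installed and that the freed-up UART appears as /dev/serial0 – neither is confirmed above, so treat the port name as a placeholder:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI note-on message (channel 0-15, note/velocity 0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Build a matching note-off message (a note-on with velocity 0 also works)."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0x40])

def send_test_note(port_name="/dev/serial0"):
    """Hypothetical usage on the Pi itself: needs pyserial, the freed-up UART
    and the buffered MIDI-out circuit. The port name is an assumption."""
    import serial, time  # pyserial
    port = serial.Serial(port_name, baudrate=31250)  # the MIDI baud rate
    port.write(note_on(0, 60, 100))   # middle C on
    time.sleep(0.5)
    port.write(note_off(0, 60))       # and off again
    port.close()
```

The byte-building functions are plain MIDI, so they can be tested anywhere; only `send_test_note` needs the actual hardware.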


On the Raspbian end of the equation we need a few tweaks to set up the Raspberry Pi. Edit /boot/cmdline.txt and remove the part that says console=serial0,115200 – this stops the kernel from listening on the serial device. Next edit /boot/config.txt to add these lines to the bottom:

dtoverlay=pi3-miniuart-bt # disable bluetooth

This slows the serial (UART) clock down to the right rate relative to the Pi 3 clock and disables bluetooth. Since the switch to systemd, a lot of the documentation is out of date, but I also ran:

sudo systemctl disable serial-getty@ttyAMA0.service

This turns off the console serial interface and stops it from starting at boot – I’m not sure if it was required. Then it’s a case of rebooting and compiling ttymidi (which required a small Makefile change: adding -lpthread to the link line). This program sets up a MIDI device accessible via ALSA, so we can then install vkeybd (a virtual MIDI keyboard) and connect it to ttymidi in qjackctl. This is a pic of the testing setup with my trusty SU10 sampler.


Cerys Matthews and Sonic Kayaks

Catching up on old news, but a few months ago I went on Cerys Matthews’ show on BBC Radio 6 and talked about Sonic Kayaks – you can hear the interview here.

This was in the run up to the British Science Festival in Swansea where we put them to sea in a musical citizen science adventure. We’ve recently made some on-board kayak recordings, so watch this space to hear more…


Sonic Kayaks: musical instruments for marine exploration

Here is a bit of a writeup of the gubbins going into the sonic kayaks project. We only have a few weeks to go until the kayaks’ maiden voyages at the British Science Festival, so we are ramping things up, with a week of intense testing and production last week with Kirsty Kemp, Kaffe Matthews and Chris Yesson joining us at FoAM Kernow. You can read Amber’s report on the week here.



The heart of the system is the Raspberry Pi 2. This is connected to a USB GPS dongle, and runs the sonic bike software we have used in many cities over the last couple of years. We have some crucial additions such as two water temperature sensors and a hydrophone. We have also switched all audio processing over to pure data, so we can do a lot more sound-wise – such as sonify sensor data directly.

Getting this right has been tricky. There is a trade-off between constant irritating sound (in a wild environment this is more of a problem than in a city, as we found out in the first workshop) and ‘overcooking’ the sound so it’s too complex to tell what the sensors are actually reporting.


This is the current pd patch – I settled on cutting out the sound when there is no change in temperature, so you only hear anything when you are paddling through a temperature gradient. The pitch represents the current temperature, but it’s normalised to the running minimum and maximum the kayak has observed. This makes it much more sensitive, but it takes a few minutes to self calibrate at the start. Currently it ranges from 70 to 970 Hz, with a little frequency modulation at 90 Hz to make the lower end more audible.
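The patch itself is Pure Data, but the mapping logic can be sketched in plain Python. The class and method names here are mine, and the silence and normalisation details are a simplification of what the patch does:

```python
class TempSonifier:
    """Sketch of the pd patch's mapping: pitch tracks temperature,
    normalised to the running min/max the kayak has observed, and
    the sound is cut when the reading isn't changing."""
    LOW_HZ, HIGH_HZ = 70.0, 970.0  # the range described above

    def __init__(self):
        self.min_t = None
        self.max_t = None
        self.last_t = None

    def update(self, temp):
        """Feed in a temperature reading; returns a frequency in Hz,
        or None for silence (no gradient)."""
        self.min_t = temp if self.min_t is None else min(self.min_t, temp)
        self.max_t = temp if self.max_t is None else max(self.max_t, temp)
        changed = self.last_t is not None and temp != self.last_t
        self.last_t = temp
        if not changed or self.max_t == self.min_t:
            return None  # cut the sound when nothing is changing
        # normalise to the running min/max -- this is the self-calibration
        norm = (temp - self.min_t) / (self.max_t - self.min_t)
        return self.LOW_HZ + norm * (self.HIGH_HZ - self.LOW_HZ)
```

The self-calibration is visible here: the first few readings only widen the min/max window, so the pitch range is meaningless until the kayak has seen some spread of temperatures.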

Here it is on the water with our brand new multi-kayak compatible mounting system and 3D printed horn built in blender. The horrible sound right at the start is my rubbish phone.

In addition to this we have the hydrophone, which is really the star of the show. Even with a preamp we’re having to boost it in pure data by 12 times to hear things, but what we do hear is both mysterious and revealing. It seems that boat sounds are really loud – you can hear engines from quite a long way off, useful for expanding your kayak senses when they are behind you. We also heard snapping sounds from underwater creatures; further up the Penryn river you can hear chains clinking, and there seems to be a general background sound that changes as you move around.

We still want to add a layer of additional sounds to this experience for the Swansea festival for people to search for out on the water. We are planning different areas so you can choose to paddle into or away from “sonic areas” comprising multiple GPS zones. We spent the last day with Kaffe testing some quick ideas out:
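As a rough sketch of how GPS zone triggering like this could work – the sonic bike software’s actual zone format isn’t shown here, so the circular zones and helper names below are assumptions for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def active_zones(lat, lon, zones):
    """Which 'sonic areas' contain the kayak right now?
    zones is a list of (name, centre_lat, centre_lon, radius_m) tuples --
    a simplification; the real software may represent zones differently."""
    return [name for name, zlat, zlon, r in zones
            if haversine_m(lat, lon, zlat, zlon) <= r]
```

Each GPS fix from the dongle would be run through `active_zones` to decide which sounds to start or stop.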

Looking at sea temperature and sensing the hidden underwater world, climate change is the big subject we keep coming back to, so we are looking for ways to approach this topic with our strange new instrument.

Pixelquipu installed at the Open Data Institute

Pixelquipu Inca Harddrives installed at the Open Data Institute (from Weaving Codes/Coding with Knots, with Julian Rohrhuber at the Institut Fuer Musik und Medien), as part of their Thinking Out Loud exhibition. Julian also built a sonification installation to play the quipu at different times of the day.


Here’s a closeup:


Red King progress, and a sonification voting system

We have now launched the Red King simulation website. The fundamental idea of the project is to use music and online participation to help understand a complex natural process. Dealing with a mathematical model is more challenging than a lot of our citizen science work, where the connection to organisms and their environments is more clear. The basic problem here is how to explore a vast parameter space in order to find patterns in co-evolution.

After some initial experiments we developed a simple prototype native application (OSX/Ubuntu builds) in order to check we understand the model properly by running and tweaking it.


The next step was to convert this into a library we could bind to python. With this done we can run the model on a server and have it autonomously update its own website via django. This way we can continuously run the simulation, storing randomly chosen parameters to build a database and display the results. I also set up a simple filter to run the simulation for 100 timesteps and discard parameters that didn’t look so interesting (the ones that went extinct or didn’t result in multiple host or virus strains).
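The filter step might look something like this Python sketch. The function names, thresholds, and the shape of the simulation output are guesses – the real model library and django code aren’t shown here:

```python
import random

def looks_interesting(hosts, viruses, min_strains=2, eps=1e-6):
    """Filter like the one described above: discard runs that went
    extinct or never produced multiple host/virus strains.
    `hosts` and `viruses` are final-timestep population levels,
    one entry per strain. The thresholds are illustrative."""
    host_alive = [p for p in hosts if p > eps]
    virus_alive = [p for p in viruses if p > eps]
    if not host_alive or not virus_alive:
        return False  # one side went extinct
    return (len(host_alive) >= min_strains
            and len(virus_alive) >= min_strains)

def sample_params(ranges, rng=random):
    """Pick a random parameter set from per-parameter (lo, hi) ranges,
    ready to feed into the model."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
```

A server loop would then repeatedly call `sample_params`, run the model for 100 timesteps, and only store the runs that pass `looks_interesting`.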

There is also now a twitter bot that posts new simulation/sonifications as they appear. One nice thing I’ve found with this is that I can use the bot timeline to make notes on changes by tweeting them. It also gives interested people an easy way to approach the project, and people are already starting discussions with the researchers on twitter.


Up to now, this has simply been a presentation of a simulation – how can we involve people so they can help? This is a small project so we have to be realistic about what is possible, but eventually we need a simple way to test how the perception of a sonification compares with a visual display. Amber’s been doing research into the sonification side of the project here. More on that soon.

For now I’ve added a voting system, where anyone can up- or down-vote the simulation music. This is used as a way to tag patterns for further exploration. Parameter sets are ranked using the votes – the more votes a set has, the more likely it is to be picked as the basis for new simulations. When we pick one, we randomise one of its parameters to generate new audio. Simulations store their parents, so you can explore the hierarchy and see what changes cause different patterns. An obvious addition to this is to hook up the twitter retweets and favourites for the same purpose.
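A minimal sketch of that selection-and-mutation step, assuming simple vote-proportional weighting (the actual weighting scheme isn’t specified above):

```python
import random

def pick_parent(population, rng=random):
    """population: list of (params_dict, votes) pairs. Higher-voted
    parameter sets are proportionally more likely to be picked;
    the +1 keeps zero-vote sets in play. The weighting is an assumption."""
    weights = [max(votes, 0) + 1 for _, votes in population]
    params, _ = rng.choices(population, weights=weights, k=1)[0]
    return params

def mutate(params, ranges, rng=random):
    """Copy the parent and randomise exactly one parameter within its
    viable (lo, hi) range -- the child keeps a reference to its parent
    elsewhere, so the ancestry tree can be explored."""
    child = dict(params)
    name = rng.choice(sorted(ranges))
    lo, hi = ranges[name]
    child[name] = rng.uniform(lo, hi)
    return child
```

Each new simulation would be a `mutate(pick_parent(...), ...)` call, with the result stored alongside its parent’s id.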


Sonic Kayaks Hacklab

Part one of our two events for British Science Week was the Sonic Kayak open Hacklab with Kaffe Matthews and Dr. Kirsty Kemp. Amber has reported our findings here. This was the first time we successfully trialled the technology and ideas behind the Sonic Kayak; in future we will be refining them into instruments for experiencing the marine world. More on that soon!


Red King – listening to coevolution

Scientific models are used by researchers in order to understand interactions that are going on around us all the time. They are like microscopes – but rather than observing objects and structures, they focus on specific processes. Models are built from the ground up from mathematical rules that we infer from studying ecosystems, and they allow us to run and re-run experiments to gain understanding, in a way which is not possible using other methods.

I’ve managed to reproduce many of the patterns of co-evolution between the hosts and parasites in the red king model by tweaking the parameters, but the points at which certain patterns emerge are very difficult to pin down. I thought a good way to start building an understanding of this would be to pick random parameter settings (within viable limits) and ‘sweep’ paths between them – looking for any sudden points of change, for example:


This is a row of simulations which are each run for 600 timesteps, with time running downwards. The parasite is red and the host is blue, and both organism types are overlaid so you can see them reacting to each other through time. Each run has a slightly different parameter setting, gradually changing between two settings as endpoints. Halfway through there is a sudden state change – from being unstable it suddenly locks into a stable situation on the right hand side.

I’ve actually mainly been exploring this through sound so far – I’ve built a setup where the trait values are fed into additive synthesis (adding sine waves together). It seems appropriate to keep the audio technique as direct as possible at this stage so any underlying signals are not lost. Here is another parameter sweep image (100 simulations) and the sonified version, which comprises 2500 simulations, overlapped to increase sound density.
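A minimal Python sketch of the additive approach: one sine partial per trait value, summed together. The frequency range and sample rate here are illustrative, not the project’s settings, and traits are assumed to be normalised to 0–1:

```python
import math

def sonify(traits, duration_s=1.0, sample_rate=8000,
           base_hz=100.0, span_hz=900.0):
    """Additive synthesis sketch: each trait value (0-1) becomes the
    frequency of one sine wave; the partials are summed and scaled
    back to the -1..1 range. Returns a list of samples."""
    n = int(duration_s * sample_rate)
    freqs = [base_hz + t * span_hz for t in traits]
    samples = []
    for i in range(n):
        t = i / sample_rate
        s = sum(math.sin(2 * math.pi * f * t) for f in freqs)
        samples.append(s / max(len(freqs), 1))  # normalise the mix
    return samples
```

Keeping it this direct means any structure in the trait values passes straight through to the sound, which matches the intent described above.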


You can hear quite a few shifts and discontinuities between different branching patterns that emerge at different points – writing this I realise an animated version might be a good idea to try too.

Stereo is done by slightly changing one parameter (the host tradeoff curve) across the left and right channels – so it gives the changes a sense of direction, and you are actually hearing 5000 simulations being run in total, in both ears. All the code so far (very experimental at present) is here. The next thing to do is to take a step back and think about the best way to invite people in to experience this strange world.

Here are some more tests: