Adventures with FM sound and the YM2612

So I’ve been getting into FM synthesis lately, and after giving the TX7 a spin I’ve really wanted to try evolving sounds for the later synth chips Yamaha made for games consoles and soundcards. At that point in history FM was one of the only realistic ways to synthesise complex sound, as there wasn’t enough time (processing) for general purpose DSP, and there wasn’t enough space (memory) for sample playback to be an option.

[image: img_3528]

One of the most famous of this range of synth chips is the YM2612, which was used in the Sega Megadrive/Genesis consoles. I got hold of a few to try, but it’s been quite a challenge to get them working, so I thought it worth documenting the process. The YM2612 is programmed using 5 control pins and 8 data pins. You send one byte to the chip at a time – first an address to write to, then the data to write – which allows you to set a couple of hundred parameters that define 6 separate sounds. The memory on the chip is organised into a map of the global settings (top left) and the voice settings in two banks:

[image: YM2612 register map]
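Each parameter write is an address/data byte pair against this map. As a rough sketch (the register numbers below are from the commonly documented YM2612 map, and the bus-logging helper is mine, purely illustrative):

```python
# Illustrative sketch of YM2612 register programming: every parameter
# write is an (address, data) byte pair, and channels 4-6 live in a
# second bank selected with the A1 control pin. write_reg just records
# the bus traffic so the pairing is visible; real hardware would
# wiggle the control and data pins instead.

bus_log = []

def write_reg(address, data, bank=0):
    """Send one register write: the address byte first, then the data byte."""
    bus_log.append(("addr", bank, address))
    bus_log.append(("data", bank, data))

# A couple of writes in the global/first bank...
write_reg(0x22, 0x00)          # LFO off
write_reg(0x28, 0xF0)          # key on: all four operators, channel 1

# ...and a voice parameter in the second bank (channels 4-6)
write_reg(0x30, 0x71, bank=1)  # detune/multiple for a ch4 operator
```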

There isn’t much documentation for these things – the main reference is the manual for a similar chip, written in Japanese. More usefully there are software versions in emulators, and quite a few bits and pieces online from other people’s projects, along with some translations of the docs. Helpfully, the Sega documentation includes a piano test sound, which is great as you have something that *should* actually make a noise as long as everything else is working.

The first problem was figuring out the timing – despite having some examples it took me a while to realise that there is a specific order in which you have to set the control pins, and that one particular transition actually triggers (latches) the data transfer to the synth:

[image: write timing diagram]

This is a timing diagram. I’m used to seeing them in datasheets, but it’s the first time I’ve needed to understand one properly (with a bit of help from David Viens). It tells us that for writing, the data (D0–D7 pins) is actually read shortly after the WR pin is set low – and to add to the fun, some of the pins (denoted by the line over them) are active low: 0V for on, 5V for off. You can also read data from the synth, but whatever address you supply you only get a single status byte that tells you whether it’s currently busy, and whether either of its two internal timers has overflowed. The idea, I assume, is that you would use them for timing music accurately without needing to do it in your main CPU.
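The write sequence can be sketched in pseudo-hardware terms like this (pin names are from the diagram; the event recorder stands in for real GPIO writes, so treat it as an illustration rather than working driver code):

```python
# Simulated sketch of the YM2612 write cycle described by the timing
# diagram: select address/data mode on A0 and bank on A1, assert /CS,
# put the byte on the bus, then pull /WR low - the chip latches the
# data shortly after that falling edge. /CS and /WR are active low,
# so 0 here means asserted.

events = []

def set_pin(name, value):
    events.append((name, value))

def write_byte(value, a0, a1=0):
    set_pin("A0", a0)        # 0 = address write, 1 = data write
    set_pin("A1", a1)        # bank select
    set_pin("CS", 0)         # assert chip select (active low)
    set_pin("D0-D7", value)  # put the byte on the data bus
    set_pin("WR", 0)         # falling edge: data gets latched
    set_pin("WR", 1)         # release the write strobe
    set_pin("CS", 1)         # release chip select

# One full register write = address byte then data byte:
write_byte(0x28, a0=0)   # select the key-on register
write_byte(0xF0, a0=1)   # key on, channel 1, all operators
```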

The audio coming out needs to be amplified too – if you connect the output directly to a speaker there is a danger of overloading the DAC and causing it to burn out, so you need a bit of analogue circuitry in between. Here is an example of triggering and glitching (by writing to random addresses) the example piano sound:

I started off using an Arduino Nano, but didn’t have much success, so I switched to a bare atmega328, which was a bit easier to set up. One of the problems is that ideally you need to be able to control the 8 data GPIO pins in one go (to set the 8-bit value), which isn’t possible from the Nano as none of its ports has all 8 bits available. You also have to supply the YM2612 with a clock pulse at around 8MHz – which took me a while to get right. I tried both dividing the atmega’s 16MHz clock by 2 and outputting it on a pin, and a separate external oscillator, and both eventually worked – except that the oscillator leaked quite badly into the audio output (there is probably a fairly simple fix to isolate this).

One potential problem is dodgy vintage chips. I tried two suppliers on eBay, one Chinese shop and another in the UK – all the ones I’ve tested so far worked fine, I’m happy to say. The best way of testing them is to take a second hand Sega Megadrive console and replace its YM2612 with a socket, so you can easily plug in candidate chips – I managed without, but it would have come in handy sanity-wise to know for sure that the chips worked.

[image: YM2612 die shot (image source)]

One thing I wasted a lot of time on was ignoring the output entirely and just trying to get the status bit to change by setting the internal timer – the idea was to make sure the chip logic was working before worrying about the audio amplifier circuitry. However, this requires the read sequence timing to work as well as the write timing – whereas as soon as you can listen to the output you get small clues from the DAC noise, which indicate whether the chip is running even if no sound has been triggered.
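The read side can be sketched the same way – the status byte puts the busy flag in bit 7 and the two timer overflow flags in bits 0 and 1 (a layout widely reported in the available docs). The `read_status` helper here is a stand-in for the real /CS + /RD pin sequence:

```python
# Sketch of polling the YM2612 status byte: whatever address you read
# from, you get one byte back with busy in bit 7 and the timer A/B
# overflow flags in bits 0 and 1. read_status() fakes the hardware
# read, pretending timer A has overflowed.

def read_status():
    # stand-in for the real /CS + /RD bus read
    return 0b00000001

def parse_status(status):
    return {
        "busy":             bool(status & 0x80),
        "timer_b_overflow": bool(status & 0x02),
        "timer_a_overflow": bool(status & 0x01),
    }

print(parse_status(read_status()))
```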

[image: ops]

Here are some links I found helpful: the wikipedia entry for the pinout, full software documentation (for the registers) and a great blog post about putting it all together, with schematics – in French. Here is a hardware synth module with similar YMF262 chips.

My source (and soon more stuff) is here.

Screenless music livecoding

Programming music with flotsam – for the first time, it’s truly screen-less livecoding. All the synthesis is done on the Raspberry Pi too (a raspbian release is in the works). One of the surprising things I find with tangible programming is the enforced scarcity of tokens – having to move them around provides a situation that is good to play around with, in contrast to being able to simply type more stuff into a text document.

The programming language is pretty simple, and similar to the yarn sequence language from the weavecoding project. The board layout consists of 4 rows of 8 possible tokens. Each row represents a single l-system rule:

Rule A: o o o o o o o o
Rule B: o o o o o o o o
Rule C: o o o o o o o o
Rule D: o o o o o o o o

The tokens themselves consist of 13 possible values:

a,b,c,d : The 'note on' triggers for 4 synth patches
. : Rest one beat
+,- : Change current pitch
<,> : Change current synth patch set
A,B,C,D : 'Jump to' rule (can be self-referential)
No-token: Ends the current rule
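
A minimal sketch of how the tokens might be interpreted once a rule is expanded (the note/pitch semantics here are my reading of the table above, not the project’s actual scheduler):

```python
# Hypothetical interpreter for an expanded token string: walk the
# symbols keeping a current pitch and patch set, emit
# (patch_set, patch, pitch) note events for a,b,c,d, and None for a
# one-beat rest. < and > move between patch sets.

def interpret(symbols, base_pitch=60):
    pitch = base_pitch
    patch_set = 0
    events = []
    for s in symbols:
        if s in "abcd":
            events.append((patch_set, "abcd".index(s), pitch))
        elif s == ".":
            events.append(None)   # rest one beat
        elif s == "+":
            pitch += 1
        elif s == "-":
            pitch -= 1
        elif s == ">":
            patch_set += 1
        elif s == "<":
            patch_set -= 1
    return events

print(interpret("+b-c."))
```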

The setup currently runs to a maximum depth of 8 generations – so a rule referring to itself expands 8 times. A single rule ‘A’ such as ‘+ b A - c A’ expands to this sequence (the first 100 symbols of it anyway):

+b+b+b+b+b+b+b+b+b+b-c-c+b-c-c+b+b-c-c+b-c-c+b+b+b-c-c+b-c-c+b+b-c-c+b-c-c+b+b+b+b-c-c+b-c-c+b+b-c-c
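The recursive expansion can be sketched like this (the behaviour at the depth limit – simply dropping unexpanded rule symbols – is my assumption, so the exact output may differ slightly from the sequence above):

```python
# Sketch of the depth-limited l-system expansion: rule symbols (A-D)
# expand into their token strings until the generation limit is hit,
# after which any remaining rule references produce nothing (assumed
# termination behaviour).

def expand(symbol, depth, rules):
    if symbol in rules:
        if depth == 0:
            return ""
        return "".join(expand(s, depth - 1, rules) for s in rules[symbol])
    return symbol

# Rule A: + b A - c A, expanded to 8 generations
seq = expand("A", 8, {"A": list("+bA-cA")})
print(seq[:100])
```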

I’m still working on how best to encode musical structures this way, as it needs a bit more expression – something to try is running the rules in parallel so you can have different sequences at the same time. With a bit more tweaking (and with upcoming hardware improvements) the eventual plan is to use this on some kids’ programming teaching projects.

Sonic Bike Hacklab Part 3: The anti-cloud – towards bike to bike mesh networking

[image: IMG_20130726_122857]

[Continued from part 2] One of the philosophies that pre-dates my involvement with the sonic bikes project is a refusal of cloud technologies – to avoid the use of a central server and to provide everything required (map, sounds and computation) on board the bikes. As the problems with cloud technology become more well known, art projects like this are a good way to creatively prototype alternatives.

The need to abstractly “get the bikes to talk to one another” beyond our earlier FM transmission experiments implies some kind of networking, and mesh networking provides non-hierarchical, peer-to-peer communication – appropriate if you want to form networks between bikes on the street over wifi (which may cluster at times, and break up or re-form as people go in different directions). After discussing this a bit with hacklab participant and fellow Beagleboard enthusiast Adam Parkinson, I thought this would be a good thing to spend some time researching.

The most basic networking information we can detect with wifi is the presence of a particular bike, and we decided to prototype this first. I’d previously got hold of a low power wifi USB module compatible with a Raspberry Pi (which I found I could run with power from the bike’s Beagleboard USB!), and we could use an Android phone on another bike, running fluxa, to plug the signal strength into a synth parameter:

[image: mesh]

It’s fairly simple to make an ad-hoc network on the Raspberry Pi via command line:

ifconfig wlan0 down            # take the interface down before reconfiguring
iwconfig wlan0 channel 4
iwconfig wlan0 mode ad-hoc
iwconfig wlan0 essid 'bikemesh'
iwconfig wlan0 key password
ifconfig wlan0 192.168.2.1     # assigning an address brings it back up

On the Android side, the proximity synth software continuously measures the strength of the wifi network from the other bike, using a WifiScanReceiver we set up like so:

wifi = (WifiManager) getSystemService(Context.WIFI_SERVICE);
// register the receiver before kicking off the first scan,
// so no results are missed
registerReceiver(new WiFiScanReceiver(),
                 new IntentFilter(
                 WifiManager.SCAN_RESULTS_AVAILABLE_ACTION));
wifi.startScan();

The WiFiScanReceiver is a subclass of BroadcastReceiver that re-triggers the scan process each time results arrive. This gives reasonably high frequency scanning – a couple of scans a second or so – and we also check the SSID names of the networks around the bike for the correct “bikemesh” node:

import java.util.List;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.net.wifi.ScanResult;
import android.net.wifi.WifiManager;
import android.util.Log;

public class WiFiScanReceiver extends BroadcastReceiver {
    private static final String TAG = "WiFiScanReceiver";

    // Last seen signal strength of the "bikemesh" node
    public static int Level = 0;

    @Override
    public void onReceive(Context c, Intent intent) {
        List<ScanResult> results = ((Earlobes)c).wifi.getScanResults();

        for (ScanResult result : results) {
            if (result.SSID.equals("bikemesh")) {
                Log.d(TAG, String.format("bikemesh located: strength %d",
                                         result.level));
                Level = result.level;
            }
        }
        // kick off the next scan so we keep measuring continuously
        ((Earlobes)c).wifi.startScan();
    }
}

The synth was also using the accelerometers, but it ramped up the cutoff frequency of a low pass filter on some white noise and increased the modulation on the accelerometer driven sine waves when you were close to the other bike. The result was quite surprising for such a simple setup, as it immediately turned into a game playing situation – bike “hide and seek”. As rider of the proximity synth bike you wanted to hunt out where the wifi bike was, while its rider would be trying to escape. The range was surprisingly long, about halfway across London Fields park. Here is an initial test of the setup (we had the sounds a bit more obvious than this in later tests):
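The signal-strength-to-synth mapping can be sketched roughly like this (the dBm range and cutoff values are illustrative guesses, not the values fluxa actually used):

```python
# Illustrative mapping from wifi RSSI (in dBm) to a low pass filter
# cutoff: closer to the other bike means a stronger signal, so the
# white noise opens up as you approach. The range endpoints and the
# linear curve are assumptions, not measured values.

def rssi_to_cutoff(level_dbm, far=-90.0, near=-30.0,
                   min_hz=200.0, max_hz=8000.0):
    # normalise the dBm reading into 0..1 and clamp it
    t = (level_dbm - far) / (near - far)
    t = max(0.0, min(1.0, t))
    return min_hz + t * (max_hz - min_hz)

print(rssi_to_cutoff(-90))  # out of range: filter nearly closed
print(rssi_to_cutoff(-30))  # right next to the bike: wide open
```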

With the hardware and some simple software tested, the next stage would be to run multiple wifi nodes and get them to connect and form a mesh network. I got some way into using Babel for this, which is very self contained and compiles and runs on both the Beagleboard and Raspberry Pi. The other side of this is what kind of things we want to do with such an “on the road” system – how do we notate and artistically control what happens over a sonic bike mesh network?

Some ideas we had included recording sounds and passing them between bikes, or each bike forming a synth node, so you create and change audio dependent on who is around you and what the configuration is. I made a few more notes on the technical stuff here.