How to back up your encryption keys

Without really noticing it, I’ve gradually acquired more and more encryption keys, without understanding how to back them up properly. Until fairly recently I lazily assumed that remembering the passphrases would be enough if my laptop caught fire, but this is not the case.

I use GPG keys both for authenticating email, and for encryption when sending people passwords for things I’m setting up for them. The Ubuntu launchpad also uses GPG for signing packages, for which I use a different key. I also run a bunch of servers, for which I use ssh keys to prove my identity. Then there is the Android play store, which requires binaries to be signed with yet another key – one that is also shared for OUYA packages.

The main algorithm in use for authentication in all these cases is RSA. RSA and similar algorithms generate a pair of keys, one public and one private. The private key data is itself encrypted using yet another algorithm (usually AES) – this is what your passphrase is for. When the key is needed, you type your passphrase in, it decrypts the RSA private key, and that is used to identify you. So it’s vitally important that this key data is backed up, as it can’t be recreated from your passphrase. There doesn’t seem to be much information online about the practicalities of all this, so I’m documenting the process here with links to where I got the information – partly in order to get feedback if it’s wrong!
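
You can see this separation between passphrase and key data with ssh, for example – changing the passphrase re-encrypts the stored private key without touching the key itself (a quick sketch, assuming the default key location):

ssh-keygen -p -f ~/.ssh/id_rsa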

With ssh it’s just a matter of copying the contents of your .ssh directory – which contains the public and encrypted private keys. The Android keys are in a file called .keystore in your home directory.
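
For example, you could bundle both into a single archive ready for encrypting later on (paths here assume the defaults mentioned above):

tar czf my-keys.tar.gz ~/.ssh ~/.keystore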

When it comes to GPG the best way seems to be to list and export them individually with:

gpg --list-secret-keys
gpg --export-secret-keys --armor your-id-number > secret-key.asc

The id number is the part after the slash for each keypair. At the same time, it’s important to back up a revocation certificate for each key – this allows you to tell the GPG trust network if your identity becomes compromised by someone getting hold of your private key (or by losing/forgetting your passphrase, which is perhaps more likely). You can do this with:

gpg --gen-revoke your-id-number

And paste the result into a text file.
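
Or redirect it straight to a file – the confirmation prompts still appear in the terminal, and only the certificate itself ends up in the file (the filename is just an example):

gpg --gen-revoke your-id-number > revocation-cert.asc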

So you can take all these files and store them somewhere safe – on a USB stick, for example. It all needs to be encrypted so it doesn’t matter if it’s found and accessed. I almost made a fundamental mistake and encrypted it with GPG, which would have been like locking my house keys inside the house. Instead you can encrypt the file independently with AES, using this command:

openssl aes-256-cbc -in your-key-file.tar.gz -out your-key-file.tar.gz.enc
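
To get the keys back, the same command with -d decrypts – worth testing once before trusting the backup, and assuming the same OpenSSL version and defaults on the machine doing the decrypting:

openssl aes-256-cbc -d -in your-key-file.tar.gz.enc -out your-key-file.tar.gz

After unpacking the archive, the GPG keys can be re-imported with gpg --import secret-key.asc.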

I’m assuming that once this is done, best practice is to put copies in various places to reduce the chances of losing them, as it doesn’t matter if they’re accessible. Make sure to use a long passphrase you won’t forget! The advice given here is to use a long randomly generated string and write it on a piece of paper stored in a safety deposit box – the approach to take if you are in charge of really important stuff; I’m not sure I’m at that point yet 🙂

Cornwall “Let’s Get Digital” presentation

Here’s a presentation I gave at the end of last year at a Creative Skills Cornwall meeting at Falmouth University. I introduced the problems of a growing producer/consumer digital divide – the need for more public discourse on the politics of technology – and how free software, codeclub, livecoding, algorithmic weaving and sonic bikes can point towards other relationships we can have with technology.

The talk went down really well, but the slides are a little minimal so it might not be super clear what it was all about based just on them 🙂


Some post-Snowden thoughts

One of the most interesting outcomes of the Snowden revelations, for me, is that they have exposed an awful lot about how different groups of people relate to technology and authority. There is the side that worries about the internet becoming the “worst tool of human oppression in all of human history”, and then we have the “I’ve got nothing to hide so I don’t care” vast majority. I think a lot of us have to be in the second group most of the time just to operate normally – even if we have sympathies with the first.


One problem is that the internet has gradually become so intertwined with our societies that it is inconceivable we could have a functioning civilisation without it again. This does, I think, provide a sense of unease to most people – but at the same time it’s easy to brush away. The fact that we take the internet entirely for granted is perhaps the most surprising thing of all, but I think that, largely because of this unease, the easiest option is to trivialise its role.

In an attempt to keep up with the news in a realm I know next to nothing about, I’ve been reading Applied Cryptography by Bruce Schneier – a book first published in 1996 that still feels very current, fascinating and probably quite unique in that it comprises half cultural/political discussion and half source code. On page two he explains three vital requirements for social interaction using computers that cryptography provides (beyond secrecy):

– Authentication. It should be possible for the receiver of a message to ascertain its origin; an intruder should not be able to masquerade as someone else.

– Integrity. It should be possible for the receiver of a message to verify that it has not been modified in transit; an intruder should not be able to substitute a false message for a legitimate one.

– Nonrepudiation. A sender should not be able to falsely deny later that he sent the message.
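
GPG gives a quick, concrete illustration of the first two properties – a minimal sketch, using a hypothetical message.txt and assuming you already have a keypair set up:

gpg --clearsign message.txt
gpg --verify message.txt.asc

The first command wraps the message with a signature made using your private key (writing message.txt.asc); the second lets anyone with your public key check both where it came from and that it hasn’t been modified in transit.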

Jaron Lanier wrote in “You are not a gadget” that we become “less human” as we use online services like Twitter and Facebook, as we submit ourselves to their abstractions rather than demand more from them. I think Lanier’s underlying message has some truth to it, but his blame is mostly in the wrong place.

For all sorts of technical, political and accidental reasons, we are all being trained to communicate without cryptography, whilst having evolved as humans to understand social interaction in ways that absolutely require it. The evidence from psychology and history is that a society that reduces communication to this level does not have a bright future. One solution to this I like a lot is the approach of the cryptoparty movement – a great way to widely spread understanding of these issues and the solutions involved.

Oh, and this is my public key.

Project Nightjar: Camouflage data visualisation and possible internet robot predators

We’ve had tens of thousands of people spotting nightjars and donating a bit of their time to sensory ecology research. The result (it’s still ongoing of course, along with the new nest spotting game) is a 20Mb database with hundreds of thousands of clicks recorded. One of the things we were interested in was what people were mistaking for the birds – so I had a go at visualising all the clicks over the images (these are all crops of the full images, as they really don’t compress well):

[click visualisation: ReflectanceCF017Vrgb0.54 crop]

[click visualisation: ReflectanceCF022Vrgb0.52 crop]

Then, looking through the results – I saw a strange artefact:

[click visualisation: ReflectanceCP018VRgb0.55 crop]

Uncompressed high res version

My first thought was that someone had tried cheating with a script, but I can hardly imagine anyone going to the bother, and it only appears in one image. Perhaps it was some form of bot or scraping software agent – though I thought browser click automation was done by directly interpreting the web page, so perhaps this is a fallback for HTML5 canvas elements?

It turns out it was a single player (playing as a monkey, age 16 to 35, who had played before) – easy enough to filter away. But in doing that I noticed the click order was not as regular as it looked, and it goes a bit wobbly in the middle:

Someone with very precise mouse skills then? 🙂

Fascinate Falmouth

It’s not often that you get to go to the first edition of a festival or conference, but last week saw the first ever Fascinate Conference in Falmouth – a varied collection of artists, performers, musicians and experimenters with technology. Some had come from far away on their first visit to Cornwall; others were local – researchers from Falmouth University as well as artists picking up inspiration.

For me the keynote presentations provided some powerful concepts. Atau Tanaka, opening the event, presented a thought-provoking timeline drawn from his extensive performance experience: moving from laptop computers, to mobile computing, and onwards to “post-computers” such as Beagle Boards and Raspberry Pi – more hackable, extendible and open than the restricted mobile platforms, while meeting largely the same needs.

Another idea, running through a moving presentation from Seth Honnor, regarded the 4 degree climate change ‘elephant in the room’. While it represents a huge, ungraspable problem, he pointed out that everything we do needs to take it into account. It doesn’t necessarily need to be centre stage, but it has to be there – as a background future reality. If we do this we can start to build up the imagination that’s going to be needed in the future.


My presence at the conference was somewhat fragmentary (I had other duties to attend to), sadly missing many of the workshops, presentations and performances. It was, however, a chance for me to perform for the first time in Cornwall, as well as to see first hand some of the research happening in Falmouth. The event itself was just the right size, and while at times slightly chaotic and problematic in terms of gender representation – these are things that take time to get right – its freshness and interdisciplinary nature were very welcome indeed. Looking forward to next year’s event!

Update: Since writing this post, the organisers have contacted me to clarify that considerable effort was put into gender representation for the conference, there was a good balance on other presentation tracks and in terms of the keynotes it was more a case of unfortunate last minute changes and other unavoidable factors.

Compiling Scheme to Javascript

Recently I’ve been continuing my experiments with compilers by writing a Scheme to Javascript compiler. This is fairly simple, as the languages are very similar: both have garbage collection and first class functions, so it’s largely a case of reusing these features directly. There are some caveats though – here is the canonical factorial example in Scheme:

(define factorial
  (lambda (n)
    (if (= n 0) 1
        (* n (factorial (- n 1))))))

The Javascript code generated by the compiler:

var factorial = function (n)
{
    if ((n == 0)) {
        return 1
    } else {
        return (n * factorial((n - 1)))
    }
}

Scheme has tail call optimisation, meaning that recursive calls are as fast as iterative ones – in Javascript, however, a function that calls itself will gradually use up the stack, and the factorial example above breaks with larger numbers. So we need to do some work to make things easier for Javascript interpreters. When it comes to list processing, fold can be treated as a core form that abstracts all other list processing, so if we make that fast we can reuse it to define higher order list processing such as filter. Filter takes a function and a list and returns another list; the function is called on each element, and if it returns false the element is left out of the resulting list.

(define filter
  (lambda (fn l)
    (foldl
     (lambda (i r)
       (if (fn i) (append r (list i)) r))
     ()
     l)))

Compiling this with an iterative version of fold – which uses a for loop in Javascript – we generate this code:

var filter = function (fn,l)
{
    return (function() {
        var _foldl_fn=function (i,r) {
            if (fn(i)) {
                return r.concat([i])
            } else {
                return r
            }
        };

        var _foldl_val=[];
        var _foldl_src=l;
        for (var _foldl_i=0; _foldl_i<_foldl_src.length; _foldl_i++) {
            _foldl_val = _foldl_fn(_foldl_src[_foldl_i], _foldl_val); 
        }
        return _foldl_val; })()
}

We have to be a little bit careful that variables created by the compiler don’t clash with those in the user’s code, so we have to make really ugly variable names. You also need to be careful to wrap code in closures as much as possible to control the scope. The problem with this is that there is probably a limit to the amount of nesting possible, especially when comparing different implementations of Javascript. An interesting project that has got much further than this is Spock. More context to this work and some actual source code soon…

/* vivo */ musings

So much to think about after the /* vivo */ festival – how livecoding is moving on, becoming more self critical as well as gender balanced. The first sign of this was the focus of the festival being almost entirely philosophical rather than technical. Previous meetings of this nature have involved a fair dose of tech minutiae – here these things hardly entered the conversations.

Show us your screens

One of the significant topics for discussion was put under the spotlight by Iohannes Zmölnig – who are the livecoding audience, what do they expect, and how far do we need to go in order to be understood by them? Do we consider the act of code projection as a spectacle (as in VJing) or is it – as Alex McLean asserts – more about authenticity: showing people what you are doing, what you are interacting with, an honest invitation? Julian Rohrhuber and Alberto De Campo discussed how livecoding impacts on our school education conditioning – audiences thinking they are expected to understand what is projected in a particular didactic, limited manner (code projection as blackboard). Livecoding could be used to explore creative ways of confounding these expectations, inviting challenges to the many anti-intellectual biases in our society.

Luis Navarro Del Angel presented another important way of thinking about the potential of livecoding – as a new kind of mass creativity and participation, providing artistic methods to wider groups than can be reached by traditional means. This is quite close to my own experience with livecoding music, yet I’m much more used to thinking about what programming offers those who are already artists in some form and familiar with other material. Luis’s approach focused more on livecoding’s potential for people who haven’t yet found a form of expression, and on making new languages aimed at this group.

After some introductory workshops, the later ones followed this philosophical thread by considering livecoding approaches rather than tools. Alex and I provided a kind of slub workshop, with examples of the small experimental languages we’ve made – texture, scheme bricks and lazybots – encouraging participants to consider how their ideal personal creative programming language would work. This opens up interesting possibilities and is, I think, a more promising direction than convergence on one or two monolithic systems.

This festival was also a reminder of the importance of free software, and its role in providing opportunities in places where, for whatever reasons, education has not provided the tools to work with software. Access to source code – and in the particular case of livecoding, its celebration and use as material – breeds independence, and helps in the formation of groups such as the scene in Mexico City.

New game design

I’m working on a new game as an art/science collaboration, and thought it might be interesting to try the technique of making a board game first. The idea is not so much to make it work first time, but to use physical pieces to figure out possible rules and try them without thinking about the limitations of screen or interface.

It seems to work quite well as a way of working together, as it’s simple to pose situations as questions, change the meaning of pieces by drawing on them, or play with the physical arrangement. It probably bears some relation to design processes like bodystorming or lego serious play.

Users > “drivers” of software

I’ve finally had a chance to sit down and comment on John Naughton’s article in the Guardian, and its manifesto on computer education in the UK addressed to the education secretary, The Rt Hon Michael Gove MP. This joins an avalanche of recognition that programming – or “coding” – is suddenly a Good Thing for People To Know.

It is wonderful to read a major media publication pouring scorn on the old idea that “computers are like cars” and “users are drivers”, which has dogged perceptions of computing for years. There is a new philosophy here focusing on hacking in the original sense – an algorithmic literacy of the processes which now mediate every aspect of our lives. This is seen as good as a general life skill, and of course good for the economy, encouraging the kind of independent thinking needed for successful startup companies.

This avalanche has to be credited to some extent to game designer David Braben and his Raspberry Pi project, and to an extensive PR campaign tweaking the memories of people (including an increasing number of politicians in their 30s and 40s) who remember school computers like the BBC micro, and their later influence on what these same politicians like to call the “creative industries”. This all seems instinctively good sense to those of us who have been closely watching the boom in popularity of arduino, processing and free software methodologies in hacklabs and fablabs.

However, an approach organised on this scale is unlikely to support the generalist and creative philosophies we are used to. A few days prior to this article we had an announcement of £575m for kitting out schools with computing infrastructure from familiar public sector contractors including Serco and Capita, and a bit more detail on Staffordshire county council, who are spending £28m on “Apple products under the iOS lot including iMacs, Mac Books, iPods, iPads, Mac Minis and Lion Server.”

The problem here is that a rejection of “users as drivers” is a rejection of iOS (and to a lesser extent Android) and the app store philosophy. App stores are extremely successful at promoting the idea of phones as appliances (never computers in their own right) and of software as small digestible “apps” encapsulated in locked down environments, generally marketed as a kind of protection for users. If these models of computing grow as expected, they will be completely at odds with the accessible understanding we need for this change in education, and with the creative literacy of algorithms that would follow.

When I’ve interviewed graduates for creative programming jobs, the thing which is really most valuable (much more so than knowing the relevant programming language) is exactly what an “edit source code” button on every app would encourage (as included in OLPC’s sugar, an older example of an education targeted effort). What is needed is a creative lack of respect for software: a cheerful abandonment of the fear that “breaking something” will be your fault rather than the system’s.