We livecoded a planetarium! I didn’t get any photos of the slub performance as I was a little busy, but Matthew Yee-King’s photos are here. Thanks to Pete ‘the dome’ for all his hard work, and sorry for nearly blowing up his speakers :/ Fluxus is now compatible with Plymouth University’s Immersive Vision Theatre!

Some pics of the wronghead’s performance:

There are more on my flickr page, and some movies too…

Dustbin livecoding revisited

The planetarium projection seems to work quite well now. I’ve made the camera angle and the number of cameras configurable, and I also alter the texture coordinates so that the visible segment of the sphere corresponds to the entire texture (so all the pixels in the texture target get used, basically). It also uses orthographic projection for rendering the sphere:
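A minimal sketch of the texture-coordinate remapping idea (the function and parameter names are my own assumptions for illustration, not fluxus internals): given the dome’s half-angle, latitude on the visible segment is rescaled so the segment spans the full 0..1 range of the texture.

```python
# Hypothetical sketch: remap a vertex's latitude on a dome segment so the
# visible part of the sphere covers the whole 0..1 texture range.
# 'max_angle_deg' stands in for the configurable dome half-angle.

def remap_texcoord(latitude_deg, max_angle_deg):
    """Map a latitude in [0, max_angle] degrees to a v texture coordinate
    in [0, 1], so no texels outside the visible segment are wasted."""
    return min(max(latitude_deg / max_angle_deg, 0.0), 1.0)

# With a 90-degree half-angle, the zenith (0 deg) maps to v=0 and the
# dome rim (90 deg) maps to v=1.
print(remap_texcoord(45.0, 90.0))  # midway up the dome -> 0.5
```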

This is another photo of my failsafe bin lid testing rig. It looks a bit more convincing in person than this really shows; the key thing (I think) is that the segments of the sphere appear to be the same distance apart in the bin lid, while being squashed together in the flat version:

Scheme Bricks: Viewing Al Jazari code

I’ve been thinking more about scheme bricks as a general-purpose visual programming language – and about the eventual goal of writing it in itself. As an example, this is the Scheme code for the logic part of the Al Jazari robots, where they run their instructions, viewed in scheme bricks:

This class alone is 10,000 pixels high (captured using tiled-framedump in fluxus). I need to do something about collapsing lists, which are all vertical at the moment, into a horizontal form.
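As a sketch of what collapsing might look like (this is my speculation about a possible heuristic, not actual scheme bricks code): a layout pass could render a list horizontally when its flat printed form fits within a width budget, and fall back to vertical stacking otherwise.

```python
# Speculative layout heuristic: lay out an s-expression horizontally if its
# flat printed width fits a character budget, otherwise stack it vertically.
# Lists of strings stand in for parsed s-expressions.

def flat_width(expr):
    """Approximate printed width of an s-expression in characters."""
    if isinstance(expr, list):
        inner = sum(flat_width(e) for e in expr) + max(len(expr) - 1, 0)
        return inner + 2  # the surrounding parentheses
    return len(str(expr))

def layout(expr, budget=40):
    """Tag each list 'horizontal' or 'vertical', recursively."""
    if not isinstance(expr, list):
        return expr
    orientation = "horizontal" if flat_width(expr) <= budget else "vertical"
    return (orientation, [layout(e, budget) for e in expr])

code = ["define", ["square", "x"], ["*", "x", "x"]]
print(layout(code))
```

A real version would measure rendered brick sizes rather than characters, but the fits-the-budget test is the part that matters.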

Jam City first playable

In games, the ‘first playable’ is the first version that actually runs and demonstrates some gameplay elements. In the vast world of games-that-are-actually-silly-ways-of-livecoding-music, it’s the first version that makes some form of acceptably arranged noise.

I’m not really sure if this is going to be runnable at our first ever planetarium gig next week. I’d written it off, as there seemed way too much to get done, but then carried on hacking away anyway, and it now kinda works.

Lirec: python scripting and military robots

I’ve been starting to get back into the Lirec project again this week, starting off by wrapping all my C++ vision code for Python in order to script it. This has sped the research work up already, as I’ve been able to script some initial experiments on expression recognition in a couple of days. I’m currently using the Yale face database as my training data for the expression appearance model, but it’s not very good as there aren’t really enough faces. I’ve also registered for the AR face database and the PIE database from Carnegie Mellon.
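This kind of scripting looks roughly like the following (a self-contained sketch of an appearance-model-style experiment, not the actual Lirec code: random arrays stand in for registered face images, and a plain PCA stands in for the real model):

```python
import numpy as np

# Sketch of PCA-based appearance model training, as one might script it in
# Python once the C++ vision code is wrapped. Random arrays stand in for
# registered, flattened face images from a database like the Yale set.

rng = np.random.default_rng(0)
faces = rng.random((15, 32 * 32))  # 15 'face' images, each 32x32, flattened

mean_face = faces.mean(axis=0)
centred = faces - mean_face

# SVD of the centred data gives the principal components (eigenfaces).
_, _, vt = np.linalg.svd(centred, full_matrices=False)
components = vt[:10]  # keep the top 10 modes of variation

# Project a face into the model and reconstruct it from its parameters.
params = components @ (faces[0] - mean_face)
reconstruction = mean_face + components.T @ params
print(params.shape, reconstruction.shape)
```

The point is less the maths than the turnaround time: swapping databases or component counts becomes a one-line change in a script instead of a C++ rebuild.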

There is a limit to how much development of the competencies (the low-level things the robots need to do) really furthers Lirec’s research aims, so I’m also looking for places where robots already form long-term companionships with humans – there are some surprising cases cropping up in the military which need further investigation.