More on haxe development

I thought I’d expand a little on the al jazari flash game, and how to develop flash games using free software.

Haxe is really rather nice, and although I’d prefer something with more parentheses (and ironically, less static typing) it does make programming for flash a nicer experience than I’d been led to believe is normally the case. I only used haxe, gimp and a bit of fluxus to get sprite renders of the 3D models, along with firefox and of course its flash plugin (I’d like to use gnash if I do a lot more of this). I’m going to describe the basics and some of the things it took me longer to figure out. I relied a lot on howtos in blog posts, so I thought it would be a good idea to join in the fun.

Firstly you need a file called compile.hxml with something like this in it:

# output swf file
-swf al-jazari.swf
# target flash player 9
-swf-version 9
# resource library built with swfmill (see below)
-swf-lib resources.swf
# class containing the static main entry point
-main AlJazari
# width : height : framerate : background colour
-swf-header 640:480:40:ffffff

This is something like a makefile for haxe: it names the output file, the resource library, the main class, and (via -swf-header) the width, height, framerate and background colour of the area the plugin will take up. You compile your haxe script with the command haxe compile.hxml.

The style of haxe (or paradigm, if you will) is very Java-like (which is probably good for me after all this exposure to Scheme and C++). You need to name your file after the class containing the main function, e.g.:

class MyMainClass
{
    static function main()
    {
        trace("this gets run");
    }
}

This will work if it’s called MyMainClass.hx and the compile.hxml contains:

-main MyMainClass

You can then test it out by writing a little html like this:

<object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000"
    width="50"
    height="50"
    align="middle">
<param name="movie" value="al-jazari.swf"/>
<param name="allowScriptAccess" value="always" />
<param name="quality" value="high" />
<param name="scale" value="noscale" />
<param name="salign" value="lt" />
<param name="bgcolor" value="#ffffff"/>
<embed src="al-jazari.swf"
    bgcolor="#000000"
    width="640"
    height="480"
    name="haxe"
    quality="high"
    align="middle"
    allowScriptAccess="always"
    type="application/x-shockwave-flash"
    pluginspage="http://www.macromedia.com/go/getflashplayer"
    />
</object>

And then point your browser at this to test the code.

Using textures

The compile.hxml file for al jazari also includes a reference to a lib – which is where you can embed resources like textures for making sprites. You build one of these with a bit of xml like this:

<?xml version="1.0" encoding="utf-8"?>
<movie version="9">
    <background color="#ffffff"/>
    <frame>
        <library>
            <bitmap id="BlueCubeTex" import="textures/blue-cube.png"/>
        </library>
    </frame>
</movie>

The id refers to a class you have to add to your haxe script – I think this is like a forward declaration or extern of some form, which allows you to refer to your texture:

class BlueCubeTex extends BitmapData
{
    public function new() { super(0,0); }
}

This bit of code (say in a class inherited from Sprite) will then draw the texture like this:

graphics.clear();
// beginBitmapFill needs a BitmapData instance, so instantiate the
// library class declared above
graphics.beginBitmapFill(new BlueCubeTex());
graphics.drawRect(0,0,64,64);
graphics.endFill();
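
Putting the pieces together, a minimal sprite that fills itself with the texture might look something like this (CubeSprite and the 64x64 size are my own inventions, not from the al jazari source):

import flash.display.Sprite;

// hypothetical example: a sprite that paints itself with the
// embedded BlueCubeTex bitmap from the resource library
class CubeSprite extends Sprite
{
    public function new()
    {
        super();
        graphics.clear();
        graphics.beginBitmapFill(new BlueCubeTex());
        graphics.drawRect(0, 0, 64, 64);
        graphics.endFill();
    }
}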

The xml script is needed to build the swf library file which contains the textures, which you do by running:

swfmill simple resources.xml resources.swf

swfmill is free software, and installable with apt-get install swfmill on ubuntu.

Using sound

I couldn’t figure out a way to embed sounds using swfmill – it seems to be a recent feature, and I couldn’t find any examples that included the haxe code to load them. I did get this to work though:

import flash.media.Sound;
import flash.net.URLRequest;
import flash.media.SoundLoaderContext;

var sound:Sound = new Sound();
// the path is relative to the main swf
var req:URLRequest = new URLRequest("path/from/main/swf/to/samples/sample.mp3");
// 8000ms of buffering, and check the cross-domain policy file
var context:SoundLoaderContext = new SoundLoaderContext(8000, true);
sound.load(req, context);

Which loads mp3s from a url, and then after some time (you can set up a callback to tell you when it’s loaded, but I didn’t bother):

sound.play(0);

Where the parameter is the offset (in milliseconds) from the start of the sound.
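
The callback version is just a listener for Event.COMPLETE – a minimal sketch, assuming the same mp3 path as above:

import flash.events.Event;
import flash.media.Sound;
import flash.net.URLRequest;

// play the sample only once the whole mp3 has loaded
var sound:Sound = new Sound();
sound.addEventListener(Event.COMPLETE, function(e:Event) {
    sound.play(0);
});
sound.load(new URLRequest("path/from/main/swf/to/samples/sample.mp3"));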

A week in Budapest

I spent last week in Budapest, the first half of which was a lirec consortium meeting. We spent Tuesday morning finding out about dog and human behaviour at the Department of Ethology at the Eötvös Loránd University. I think the most fascinating part for me was the area of human understanding of dog vocalisation, as we have mutually developed a complex communication system with dogs over the last 100,000 years, with very little in the way of what we usually call language.

I also spent a couple of days at the wonderful kibu (or kitchen budapest) meeting up with Gabor and Agoston, doing a presentation about groworld and the resilients project with Nik, and talking a lot about fluxus.

Al Jazari in flash! (or haxe)

My first flash/haxe app, which I found really fun to make. Click on the code to change the instructions, and on the cubes to activate or deactivate triggers – there is only one robot and one sample (808 handclap) working for the moment. Here’s more about the al jazari project.

[update – added some more samples]
[update #2 – more robots and more sounds, click on the robots to edit their code]

The source code and textures are here.

Plotting face space

I’m flitting around a lot between projects… Back on appearance models for the lirec project, this is a small slice of face space; the plots represent images of the individuals in different lighting conditions – seeing how the lighting affects the spread of the data. One of the images for each individual is shown at the top, along with its symbol. The axes refer to only 3 of the 600 dimensions in the face space I’m using for this – I’ve picked some good ones so you can see how the individuals are clustered.

User identification happens simply by finding the closest known face to the one the camera can see. Actually, at the moment the classifier cheats by finding the individual whose average centre of known faces is closest, but looking at these plots I don’t think this is a good approach, as the ‘blobs’ aren’t very spherical.
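
In (hypothetical) haxe, the nearest-neighbour alternative looks something like this – a sketch of the idea, not the classifier’s actual code:

// each person has several example faces (600-dimensional vectors)
typedef Person = { name:String, examples:Array<Array<Float>> };

class Classifier
{
    // euclidean distance in face space
    static function dist(a:Array<Float>, b:Array<Float>):Float
    {
        var d = 0.0;
        for (i in 0...a.length) d += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(d);
    }

    // nearest-neighbour rule: the closest single known face wins,
    // which should cope better with non-spherical blobs than
    // comparing against each person's average centre
    public static function classify(x:Array<Float>, people:Array<Person>):String
    {
        var best = "";
        var bestDist = Math.POSITIVE_INFINITY;
        for (p in people)
            for (e in p.examples)
            {
                var d = dist(x, e);
                if (d < bestDist) { bestDist = d; best = p.name; }
            }
        return best;
    }
}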

This is also (rather shamefully) my first go with gnuplot, which I’m liking a lot.

Hapnet

A small bioinformatics project in progress:

Haplotype networks and Minimum Spanning Networks are commonly used for representing associations between sequences. HapNet is a tool for viewing both types of networks, using the output data generated from Arlequin. HapNet automatically formats the network in the optimal layout for easy visualisation, and publication-ready figures can be exported in several formats.

After calculating the minimum spanning trees of the networks, my initial reaction was to use graphviz for this, as it seems perfect for the job. However, I had a lot of trouble with the different-length edges, and with the need to represent distance using intermediate nodes which have to be on a straight path. As I’d already written force-directed graph drawing for daisy it wasn’t too hard to adapt. Source code here, and the start of a proper webpage on the libarynth.
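
For the curious, the spring step at the heart of force-directed layout looks roughly like this (hypothetical haxe, not the actual hapnet source):

typedef Edge = { a:Int, b:Int, len:Float };

class Layout
{
    // pos[i] is the [x,y] position of node i, k is a small step size;
    // each edge behaves as a spring with its own rest length e.len,
    // which is exactly what graphviz had trouble with
    public static function step(pos:Array<Array<Float>>, edges:Array<Edge>, k:Float)
    {
        for (e in edges)
        {
            var dx = pos[e.b][0] - pos[e.a][0];
            var dy = pos[e.b][1] - pos[e.a][1];
            var d = Math.sqrt(dx * dx + dy * dy);
            if (d == 0) continue;
            // hooke's law: move both nodes along the edge until its
            // length matches the rest length
            var f = k * (d - e.len) / d;
            pos[e.a][0] += f * dx; pos[e.a][1] += f * dy;
            pos[e.b][0] -= f * dx; pos[e.b][1] -= f * dy;
        }
    }
}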

More expression recognition

Making myself look ridiculous as usual, but this works better than I had expected:

It’s taking the vector in face space between example smile and frown expressions and then projecting the new face it doesn’t know about onto it (a dot product in many dimensions) to give a value for how smiley or frowny a face is. I’m calibrating it with my own expressions for the moment, but it does seem to work on other people to some extent. More data would make it more robust, but the theory seems good! The code is here and here.
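
In symbols (my formulation of the above, where s and f are the example smile and frown vectors in face space and x is the new face):

smiliness(x) = ((x − f) · (s − f)) / |s − f|²

which gives roughly 0 for a frown-like face and 1 for a smile-like one.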

Accidental graph art

If I ran one of these newfangled ‘new media arts’ courses (or whatever they are called now) I’d force students to restrict themselves to only using graphviz for several months. These are some accidents which happened while trying to visualise haplotype networks for some biology visualisation work I’m doing:

Alex McLean has some more impressive graphviz drawings here. I’m actually having to abandon graphviz as it won’t cope with different-length edges very well. I expect this will come back to bite me.

Expression recognition attempts

Parametrising gormlessness:

I’m trying to parametrise expressions, which involves making faces at a web camera all day. The bars along the top visualise the parameters my face generates for the face model, and the rather scary image next to my face is the result of putting those numbers back into the model to synthesise my face. The training data is not really that great, but it seems able to represent expressions more or less, so I’m hoping that with a little more work I can get an estimate of what expression you are pulling.
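
For reference, this is the standard linear appearance model setup (my reading of what’s going on, not spelled out above):

face ≈ mean face + P b

where b is the parameter vector shown as the bars and the columns of P are the model’s basis vectors – so going from camera image to b gives the bars, and from b back to a face gives the scary synthesised version.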

I’m not terribly confident, but a day or so’s work should provide an answer either way.

Voxels in fluxus

I’ve rewritten the experimental scheme voxel code into experimental C++ fluxus code, using hardware rendering for the sprite ‘splat’ accumulation, which makes it realtime for 30x30x30 grids (27000 voxels).

The last two are a bunch of concentric spheres with the top left corner carved away by a box.

Part of the fun of this is figuring out a scene description language for volumetric data – currently you can create solid spheres or cubes, spherical influences from points, add/subtract them from each other, threshold and light them. Here’s a test with a million voxels, which surprisingly still leaves fluxus interactive, if not actually workable for realtime animations:

And a short movie of spheres carving each other up as they move around: