Little J

Little J is a research project exploring citizen journalism: finding ways to use technology to connect journalists working on local news (an industry currently under threat) with the public, under the banner of “hyperlocal news”. It’s a collaboration between REACT (Research & Enterprise in Arts & Creative Technology) and Behaviour (a psychological creative agency based in Cornwall), using the communities in and around Port Talbot in Wales as a test bed for a series of events.

The project is a great chance to explore Ushahidi more deeply; after using it for borrowed scenery, and now halfway through working on DORIS, we can start looking at adding new features. I’ve started by setting up a test site to fiddle with, and we are now focusing on creating as wide a range of methods as we can for people to get their stories into the site and into the hands of the journalists. The obvious place to start is a Facebook app, and looking at ways to integrate social media with the database of stories in a way that encourages good quality information. Another area to explore is mechanisms for representing people who are specialists in specific areas via new kinds of reputation systems. As a prototype we will be covering a lot of ground quickly, finding what works best for future development.

You can read more about this on Behaviour’s site and on the REACT blog.

algoravin in the UK

A short trip around the UK for slub over the last couple of days: first a livecoding gig in London at Bartlett Nexus, UCL, at an event concerning architecture, games and handmade technology. A full video of the event (with us at the end) is here.

bartletts1

bartletts2

Also collected along the way, a photo of the xname manufacturing lab:
xname

Then up to Birmingham to attend the Network Music Festival and do another performance late on Saturday night. While there we had a chance to meet the members of The Hub, pioneers of networked music and livecoding. It was inspiring to chat with such experienced musicians in this field. The NMF included a huge range of performances, for example Melatab, who used Kinect cameras for networked performance in a shared virtual space. I’m planning some Kinect hacking soon, so I took some photos:

nmf1

nmf2

DORIS on the high seas

Yesterday was the first test of the full DORIS marine mapping system I’m developing with Amber Teacher and David Hodgson at Exeter University. We went out on a fishing boat from Mylor harbour for a five hour trip along the Cornish coast. It’s a quiet season for lobsters at the moment, so this was an opportunity to practise the sampling without too much pressure. Researcher Charlie Ellis was working with Hannah Knott; they work with the National Lobster Hatchery and need to take photos of hundreds of lobsters and combine them with samples of their genetic material.

IMG_20130214_104502

By going out on the boats they get accurate GPS positions for determining detailed population structures, and can sample lobsters that are small or carrying eggs and need to be returned to the sea, as well as the ones the fishermen take back to shore to be sold. Each photograph uses a cunning visual information system: objects positioned in the frame indicate the sex of the lobster and whether it is for return or removal, alongside a ruler for scale.

lobster

map

pot

Al Jazari 2 re-arranger robot program

Onboard a robot which is following a sequence of instructions (in the spirit of the original Al Jazari) to pick up, carry and drop blocks in order to re-organise its environment. Halfway through we switch to a robot running code that makes it player controllable, so we can look around. This is all more than slightly influenced by teaching Scratch at Code Club and feeling the need for something 3D for them to play with. The scheme bricks coding interface is next…

Android Camera Problems

The DORIS marine mapping platform is taking shape. For this project, touch screens are not great for people wearing gloves in small fishing boats, so one of the things the Android app needs to do is make use of physical keys. In order to do that for taking pictures, I’ve had to write my own Android camera activity.

It seems that there are differences in underlying camera behaviour across devices, specifically the Acer E330 model we have to take out on the boats. Nearly every time takePicture() is called, the supplied callback functions fail to fire. I’ve tested callbacks for the shutter, raw, jpeg and error events, and also tried turning off the preview callback beforehand as suggested elsewhere; no luck on the Acer, while it always works fine on the HTC.

The only solution I have so far is to close and reopen the camera just before takePicture() is called, which seems to work as intended. As this takes some seconds to complete, it’s also important (since this is bound to a key up event) to prevent the camera starting to take another picture before it’s finished processing the previous one, as that causes further callback confusion.

import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.hardware.Camera;
import android.hardware.Camera.CameraInfo;
import android.hardware.Camera.PictureCallback;
import android.util.Log;

// Wraps the camera so pictures can be triggered from a physical key.
// Works around devices (the Acer E330 here) where the takePicture()
// callbacks fail to fire unless the camera is closed and reopened first.
class PictureTaker
{
    private Camera mCam;
    private boolean mTakingPicture;

    public PictureTaker() {
        mTakingPicture=false;
    }

    public void Startup(SurfaceView view) {
        // also clears the 'busy' flag once the previous picture is dealt with
        mTakingPicture=false;
        OpenCamera(view);
    }

    private void OpenCamera(SurfaceView view) {
        try {
            mCam = Camera.open();
            if (mCam == null) {
                Log.i("DORIS","Camera is null!");
                return;
            }
            // attach the preview to the supplied surface and start it,
            // as takePicture() needs a running preview
            mCam.setPreviewDisplay(view.getHolder());
            mCam.startPreview();
        }
        catch (Exception e) {
            Log.i("DORIS","Problem opening camera! " + e);
            return;
        }
    }

    private void CloseCamera() {
        mCam.stopPreview();
        mCam.release();
        mCam = null;
    }

    public void TakePicture(SurfaceView view, PictureCallback picture)
    {
        // guard against the key up event re-triggering while a previous
        // picture is still being processed
        if (!mTakingPicture) {
            mTakingPicture=true;
            // close and reopen the camera just before taking the picture,
            // otherwise the callbacks never fire on the Acer
            CloseCamera();
            OpenCamera(view);

            try {
                mCam.takePicture(null, null, picture);
            }
            catch (Exception e) {
                Log.i("DORIS","Problem taking picture: " + e);
            }
        }
        else {
            Log.i("DORIS","Picture already being taken");
        }
    }
}
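
For completeness, here’s a rough sketch of how this gets driven from a physical key rather than the touch screen. The activity and field names (CameraKeyActivity, mPreview, mJpegCallback) are just illustrative, not the actual DORIS code, and the preview surface and jpeg callback are assumed to be set up elsewhere in onCreate():

import android.app.Activity;
import android.hardware.Camera.PictureCallback;
import android.view.KeyEvent;
import android.view.SurfaceView;

// Illustrative only: a minimal activity binding PictureTaker to a physical
// key, assuming mPreview and mJpegCallback are initialised in onCreate().
public class CameraKeyActivity extends Activity {
    private PictureTaker mPictureTaker = new PictureTaker();
    private SurfaceView mPreview;
    private PictureCallback mJpegCallback;

    @Override
    public boolean onKeyUp(int keyCode, KeyEvent event) {
        // use the camera or volume key rather than the touch screen
        if (keyCode == KeyEvent.KEYCODE_CAMERA ||
            keyCode == KeyEvent.KEYCODE_VOLUME_UP) {
            mPictureTaker.TakePicture(mPreview, mJpegCallback);
            return true;
        }
        return super.onKeyUp(keyCode, event);
    }
}

Binding to key up rather than key down avoids auto-repeat while the button is held, and the busy flag in TakePicture() catches anything that slips through while the previous shot is still being processed.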

Al Jazari 2 – minecraft meets fluxus

Some screenshots of the in-progress next generation Al Jazari livecoding world. This is a voxel rendered world, inspired in part by Minecraft but with an emphasis on coding robots, in scheme bricks, which construct artefacts from the materials around them. The robot language is still to be designed, but will probably resemble Scratch.

aj2-002

You can ‘jump on board’ the different robots (cycling through them with ‘space’) and program them with commands which include picking up or dropping individual blocks. The program above allows you to control the robot with the ‘w’, ‘a’, ‘s’, ‘d’ keys, with ‘z’ to tunnel downwards and ‘x’ to remove the block in front of the robot.
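
The robot language doesn’t exist yet, but to give a flavour of the kind of instruction sequence involved, here’s a purely hypothetical sketch in Java (the real thing will be scheme bricks, and none of these names come from the actual code):

import java.util.List;

// Hypothetical sketch only: a robot stepping through a fixed list of
// instructions on a 2D slice of the block world (bounds checks omitted).
class RobotSketch {
    enum Op { FORWARD, TURN_LEFT, TURN_RIGHT, PICK_UP, DROP }

    int x = 0, z = 0;         // grid position
    int dx = 1, dz = 0;       // facing direction
    boolean carrying = false; // are we holding a block?

    void run(List<Op> program, int[][] world) {
        for (Op op : program) {
            switch (op) {
            case FORWARD:    x += dx; z += dz; break;
            case TURN_LEFT:  { int t = dx; dx = -dz; dz = t; } break;
            case TURN_RIGHT: { int t = dx; dx = dz; dz = -t; } break;
            case PICK_UP:    // remove the block in front of us and carry it
                if (!carrying && world[x + dx][z + dz] != 0) {
                    world[x + dx][z + dz] = 0;
                    carrying = true;
                }
                break;
            case DROP:       // place the carried block in front of us
                if (carrying && world[x + dx][z + dz] == 0) {
                    world[x + dx][z + dz] = 1;
                    carrying = false;
                }
                break;
            }
        }
    }
}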

aj2-001

The world is built quite simply from an octree, which provides an optimised structure for rendering the 64x64x64 level cube in realtime. The view below shows the compression: large areas containing the same material (or empty space) can be represented by leaf nodes terminating the tree early, without needing to store each of the 262,144 1x1x1 cubes. After each edit (setting new values in 3D space or a ‘compress’ operation) the octree may fragment or collapse its tree. The scheme code can be found here.

aj2-003
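
To illustrate what’s going on in the octree (sketched in Java rather than the actual Scheme implementation linked above, with invented names), a node only splits into eight children when its contents differ, and a ‘compress’ pass collapses identical children back into a single leaf:

// Illustrative octree sketch: a node is either a leaf holding one block
// value for its whole region, or eight children covering the octants.
class Octree {
    int value = 0;     // block type when this node is a leaf (0 = empty)
    Octree[] children; // null while the node is a leaf
    final int size;    // side length of the cube this node covers

    Octree(int size) { this.size = size; }

    // set a single 1x1x1 block, splitting leaves into children as needed
    void set(int x, int y, int z, int v) {
        if (size == 1) { value = v; return; }
        if (children == null) {
            children = new Octree[8];
            for (int i = 0; i < 8; i++) {
                children[i] = new Octree(size / 2);
                children[i].value = value; // children start as copies of us
            }
        }
        int half = size / 2;
        int index = (x / half) + (y / half) * 2 + (z / half) * 4;
        children[index].set(x % half, y % half, z % half, v);
    }

    // 'compress': collapse back to a leaf if all children are equal leaves
    void compress() {
        if (children == null) return;
        for (Octree c : children) c.compress();
        int v = children[0].value;
        for (Octree c : children) {
            if (c.children != null || c.value != v) return;
        }
        children = null;
        value = v;
    }
}

With a 64x64x64 root, large uniform regions stay as single leaves rather than 262,144 individual cubes, which is what makes rendering and editing the level in realtime practical.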