Procedural landscape demo on OUYA/Android

A glitchy procedural, infinite-ish landscape demo running on Android and OUYA. Use the left joystick to move around on OUYA, or swipe on Android devices with touchscreens. Here’s the apk, and the source is here.

It’s great to be able to have a single binary that works across all these devices – from OUYA’s TV screen sizes to phones, and using the standard gesture interface at the same time as the OUYA controller.

The graphics are programmed in Jellyfish Lisp, using Perlin noise to create the landscape. The language is probably still a bit too close to the underlying bytecode in places, but the function calling is working and it’s getting easier to write and experiment with the code.

```
(define terrain
  '(let ((vertex positions-start)
         (flingdamp (vector 0 0 0))
         (world (vector 0 0 0)))

     ;; recycle a triangle which is off the screen
     (define recycle
       (lambda (dir)
         ;; shift along x and y coordinates:
         ;; set z to zero for each vertex
         (write! vertex
                 (+ (*v (read vertex)
                        (vector 1 1 0)) dir))
         (write! (+ vertex 1)
                 (+ (*v (read (+ vertex 1))
                        (vector 1 1 0)) dir))
         (write! (+ vertex 2)
                 (+ (*v (read (+ vertex 2))
                        (vector 1 1 0)) dir))

         ;; get the perlin noise values for each vertex
         (let ((a (noise (* (- (read vertex) world) 0.2)))
               (b (noise (* (- (read (+ vertex 1))
                               world) 0.2)))
               (c (noise (* (- (read (+ vertex 2))
                               world) 0.2))))

           ;; set the z coordinate for height
           (write! vertex
                   (+ (read vertex)
                      (+ (*v a (vector 0 0 8))
                         (vector 0 0 -4))))
           (write! (+ vertex 1)
                   (+ (read (+ vertex 1))
                      (+ (*v b (vector 0 0 8))
                         (vector 0 0 -4))))
           (write! (+ vertex 2)
                   (+ (read (+ vertex 2))
                      (+ (*v c (vector 0 0 8))
                         (vector 0 0 -4))))

           ;; recalculate the normal from the cross
           ;; product of two triangle edges
           (define n (normalise
                      (cross (- (read (+ vertex 2)) (read vertex))
                             (- (read (+ vertex 1)) (read vertex)))))

           ;; write to normal data
           (write! (+ vertex 512) n)
           (write! (+ vertex 513) n)
           (write! (+ vertex 514) n)

           ;; write the z height as texture coordinates
           (write! (+ vertex 1536)
                   (*v (swizzle zzz a) (vector 0 5 0)))
           (write! (+ vertex 1537)
                   (*v (swizzle zzz b) (vector 0 5 0)))
           (write! (+ vertex 1538)
                   (*v (swizzle zzz c) (vector 0 5 0))))))

     ;; forever
     (loop 1
       ;; damp the current input vector (input-fling is the
       ;; register the host writes joystick/swipe input into)
       (set! flingdamp (+ (* flingdamp 0.99)
                          (*v (read input-fling)
                              (vector 0.01 -0.01 0))))

       (define vel (* flingdamp 0.002))
       ;; update the world coordinates
       (set! world (+ world vel))

       ;; for each vertex
       (loop (< vertex positions-end)

         ;; update the vertex position
         (write! vertex (+ (read vertex) vel))
         (write! (+ vertex 1) (+ (read (+ vertex 1)) vel))
         (write! (+ vertex 2) (+ (read (+ vertex 2)) vel))

         ;; check for out of area polygons to recycle
         (cond
           ((> (swizzle xzz (read vertex)) 5.0)
            (recycle (vector -10 0 0)))
           ((< (swizzle xzz (read vertex)) -5.0)
            (recycle (vector 10 0 0))))

         (cond
           ((> (swizzle yzz (read vertex)) 4.0)
            (recycle (vector 0 -8 0)))
           ((< (swizzle yzz (read vertex)) -4.0)
            (recycle (vector 0 8 0))))

         (set! vertex (+ vertex 3)))
       (set! vertex positions-start))))
```

This lisp program compiles to 362 vectors of bytecode at startup, and runs well even on my cheap Android tablet. The speed seems close enough to native C++ to be worth the effort, and it’s much more flexible (with future livecoding/JIT compilation possibilities). The memory layout is shown below: executable instructions and model data are packed into the same address space, and no memory allocation happens while it’s running (no garbage collection, and not even any C mallocs). The memory size is configurable, and the nature of the system is such that it would be possible to put executable data into unused graphics sections (eg. normals or vertex colours), if appropriate.
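As a rough illustration of the idea, here is a sketch in Python of a single flat vector memory shared by code and model data. The names and region sizes are my assumptions for illustration, loosely based on the offsets visible in the terrain program (normals at vertex+512, texture coordinates at vertex+1536), not the engine's actual constants:

```python
# Sketch of a flat vector memory shared by bytecode and model data.
# Region offsets are illustrative assumptions, not the real layout.
REGISTERS_START = 0                        # machine/graphics state registers
CODE_START = 10                            # compiled bytecode starts here
POSITIONS_START = 512                      # vertex positions
NORMALS_START = POSITIONS_START + 512      # per-vertex normals
TEXCOORDS_START = POSITIONS_START + 1536   # per-vertex texture coords

# one preallocated buffer of vec3s -- nothing is allocated at runtime
memory = [(0.0, 0.0, 0.0)] * 4096

def normal_addr(vertex_addr):
    """Address of the normal matching a position vector."""
    return vertex_addr + 512

def texcoord_addr(vertex_addr):
    """Address of the texture coordinate matching a position vector."""
    return vertex_addr + 1536
```

Because everything lives in one address space of vec3s, "writing to a normal" and "writing an instruction" are the same operation at different addresses, which is what makes stashing code in unused graphics sections possible.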

Mongoose 2000

A screenshot from the Mongoose 2000 project. We now have most of the ‘pup focal’ interfaces working and syncing their data via the Raspberry Pi. This is the interface for recording a pup aggression event, including the identity of the aggressive mongoose and some information on the cause and severity. Each mongoose has a code, and we’re using sliding toggle button interfaces for quickly picking them – these can be filtered to restrict them to adults, pups, males or females where required.

The interface was written using “starwisp” – my system for building Android applications in Scheme. The Mongoose 2000 app has lots of reusable interfaces, so it’s mostly constructed from fragments. There are no specialised database tables either; I can simply add or modify the widgets here and the data automagically appears in the Raspberry Pi export, which makes it very fast to build. I’ve abstracted the mongoose button grid selectors and the tristate buttons (yes/no/maybe) as they are used in a lot of places. Here is the entire definition of the fragment for the interface above; the code includes everything for creating and recording the database entity for this event, and all the Android callbacks it needs to respond to external events.

```
(fragment
 "ev-pupaggr"

 ;; define the interface layout first
 (linear-layout
  (make-id "") 'vertical fillwrap pf-col
  (list
   (mtitle "title" "Event: Pup aggression")
   (build-grid-selector "pf-pupaggr-partner"
                        "single" "Aggressive mongoose")
   (linear-layout
    (make-id "") 'horizontal
    (layout 'fill-parent 100 '1 'left 0) trans-col
    (list
     (vert
      (mtext "" "Fighting over")
      (spinner (make-id "pf-pupaggr-over")
               (list "Food" "Escort" "Nothing" "Other") fillwrap
               (lambda (v)
                 (entity-add-value! "over" "varchar" v) '())))
     (vert
      (mtext "" "Level")
      (spinner (make-id "pf-pupaggr-level")
               (list "Block" "Snap" "Chase" "Push" "Fight") fillwrap
               (lambda (v)
                 (entity-add-value! "level" "varchar" v) '())))
     (tri-state "pf-pupaggr-in" "Initiate?" "initiate")
     (tri-state "pf-pupaggr-win" "Win?" "win")))
   (spacer 20)
   (horiz
    (mbutton "pf-pupaggr-done" "Done"
             (lambda ()
               (entity-add-value! "parent" "varchar"
                                  (get-current 'pup-focal-id ""))
               (entity-record-values db "stream" "pup-focal-pupaggr")
               (list (replace-fragment (get-id "event-holder")
                                       "events"))))
    (mbutton "pf-pupaggr-cancel" "Cancel"
             (lambda ()
               (list (replace-fragment (get-id "event-holder")
                                       "events")))))))

 ;; define the fragment's event callbacks
 (lambda (fragment arg) ;; on create, return layout for building
   (activity-layout fragment))

 ;; on start - update contents from the db
 (lambda (fragment arg)
   (entity-reset!)
   (list
    (populate-grid-selector
     "pf-pupaggr-partner" "single" ;; select single mongoose
     (db-mongooses-by-pack) #t     ;; from the whole pack
     (lambda (individual)          ;; <- called when selected
       (entity-add-value! "id-mongoose" "varchar"
                          (ktv-get individual "unique_id"))
       (list)))))

 (lambda (fragment) '()) ;; on stop
 (lambda (fragment) '()) ;; on resume
 (lambda (fragment) '()) ;; on pause
 (lambda (fragment) '())) ;; on destroy
```

Jellyfish: A daft new language is born

After trying, and failing, to write a flocking system in jellyfish bytecode by hand, I wrote a compiler using the prototype betablocker one as a starting point. It reads a scheme-ish imperative language and generates bytecode for the (also invented) virtual machine implemented in C++. It only took a couple of evenings and a train journey to write, and it even seems to work.

The basic idea is to walk through the code tree described by the scheme lists, generating bits of bytecode that fit together. Let’s take logical “not” as an example. Like GPU processors, the only datatype is a vector of 3 floats, and we define false as 0 in the x position and anything else in x to be true (ignoring what’s in y or z). There is no single instruction for “not”, so we have to build it from the other instructions. For example, this bit of code:

```
(not (vector 0 0 0))
```

should return (vector 1 0 0). When we are walking the tree of lists we check the first element and dispatch to a set of functions, one for each type of (higher level) instruction, each of which emits a list containing the bytecode required. The one for ‘not’ looks like this, where x is the expression, e.g. ‘(not (vector 0 0 0))’:

```
(define (emit-not x)
  (append
   (emit-expr (cadr x)) ;; compile the argument expression first
   (emit (vector jmz 3 0))
   (emit (vector ldl 0 0))
   (emit (vector jmr 2 0))
   (emit (vector ldl 1 0))))
```

The first thing it does is emit all the instructions required for the expression passed in as the second element of ‘x’, via ‘emit-expr’. With our simple example it will just push (vector 0 0 0) onto the stack, but it could be a whole load of complicated nested expressions, and it will work the same.
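That tree-walking dispatch can be sketched in Python; this is my guess at the shape of it (the dispatch table and function names here are illustrative, only the instruction names come from the post):

```python
# Sketch of the tree-walking compiler dispatch: each emitter returns
# a flat list of instructions, appended together as we walk the tree.
JMZ, JMR, LDL, LDLV = "jmz", "jmr", "ldl", "ldlv"

def emit(instr):
    return [instr]

def emit_expr(x):
    if isinstance(x, list):
        # compound expression: dispatch on the head symbol
        return DISPATCH[x[0]](x)
    # a literal vector: load it onto the stack
    return emit((LDLV, x, 0))

def emit_not(x):
    # compile the argument, then the 4-instruction "not" sequence
    return (emit_expr(x[1]) +
            emit((JMZ, 3, 0)) +   # x was false -> skip to "load 1"
            emit((LDL, 0, 0)) +   # x was true: push false
            emit((JMR, 2, 0)) +   # jump over the else branch
            emit((LDL, 1, 0)))    # x was false: push true

DISPATCH = {"not": emit_not}
```

The key property is that `emit_expr` always leaves exactly one value on the stack, so emitters compose no matter how deeply expressions nest.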

After that we have some bytecode:

```
jmz 3 0 ;; if top of stack is 0, jump forward 3 instructions (to ldl 1 0)
ldl 0 0 ;; load 0 onto the stack
jmr 2 0 ;; jump forward 2 instructions (skip to next code section)
ldl 1 0 ;; load 1 onto the stack
```

So this just checks (and removes) the top element on the stack and pushes the opposite logical value. Pushing a single float with the ‘ldl’ (load literal) instructions above expands to a vector value internally; it’s just a convenience. Some instructions (such as those involving vector maths) map to a single bytecode instruction, while others like conditionals and loops are a bit trickier, as they need to count instructions in order to skip over variable length sections of program.
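To convince myself the jump arithmetic is right, here is a toy stack machine of my own (a sketch, not the C++ VM – the semantics assumed are: jmz pops the top value and jumps relative if it is zero, jmr jumps relative unconditionally, ldl pushes a literal):

```python
# Toy interpreter for the jmz/jmr/ldl fragment above.
def run(program, stack):
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "ldl":
            stack.append(arg)     # push literal
            pc += 1
        elif op == "jmz":
            # pop; jump forward by arg if zero, else fall through
            pc += arg if stack.pop() == 0 else 1
        elif op == "jmr":
            pc += arg             # unconditional relative jump
    return stack

# the compiled "not" body from the post
NOT = [("jmz", 3), ("ldl", 0), ("jmr", 2), ("ldl", 1)]
```

Running `NOT` against a stack holding 0 leaves 1, and against any non-zero value leaves 0 – the opposite logical value, as intended.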

We add variables in the form of ‘let’, which map to addresses at the start of memory, and ‘read’/‘write!’ for accessing model memory like array lookups. The full flocking system looks like this, and animates a points primitive in OpenGL:

```
(let ((vertex 512)
      (accum-vertex 512)
      (closest 9999)
      (closest-dist 9999)
      (diff 0)
      (vel 1024))
  (loop 1 ;; infinite loop
    (loop (< vertex 532) ;; for every vertex
      ;; find the closest vertex
      (loop (< accum-vertex 532)
        (cond
          ;; if they're not the same vert
          ((not (eq? accum-vertex vertex))
           ;; get vector between the points
           (set! diff (- (read vertex) (read accum-vertex)))
           (cond
             ;; if it's the closest so far
             ((< (mag diff) closest-dist)
              ;; record vector and distance
              (set! closest diff)
              (set! closest-dist (mag closest))))))
        (set! accum-vertex (+ accum-vertex 1)))
      ;; reset accum-vertex for next time
      (set! accum-vertex 512)

      ;; use closest to do the flocking, add new velocity
      ;; to old (to add some inertia)
      (write! vel (+ (* (read vel) 0.99)
                     ;; attract to centre
                     (* (+ (* (- (read vertex) (vector 0 0 0)) 0.05)
                           ;; repel from closest vertex
                           (* (normalise closest) -0.15)) 0.01)))
      ;; add velocity to vertex position
      (write! vertex (+ (read vertex) (read vel)))

      ;; reset and increment stuff
      (set! closest-dist 9999)
      (set! vel (+ vel 1))
      (set! vertex (+ vertex 1)))
    ;; reset for main loop
    (set! vertex 512)
    (set! vel 1024)))
```

This compiles to 112 vectors of bytecode (I should call it vectorcode really), with extra debugging information added so we can see the start and end of each higher level instruction. It all looks like this – and most importantly, I didn’t need to write it by hand!

```
10 30000 0 ;; top memory positions are for registers controlling
512 2 1    ;; program and graphics state (primitive type and number of verts)
nop 0 0    ;; space
nop 0 0    ;; for all
nop 0 0    ;; the variables
nop 0 0    ;; we use
nop 0 0    ;; in the program
nop 0 0
nop 0 0
nop 0 0
;; starting let  <- program starts here
ldl 512 0        ;; load all the 'let' variable data up
sta 4 0
ldl 512 0
sta 5 0
ldl 9999 0
sta 6 0
ldl 9999 0
sta 7 0
ldl 0 0
sta 8 0
ldl 1024 0
sta 9 0
;; starting loop  <- start the main loop
;; starting loop
;; starting loop
;; starting cond
;; starting not
;; starting eq?
lda 5 0
lda 4 0
sub 0 0
jmz 3 0
ldl 0 0
jmr 2 0
ldl 1 0
;; ending eq?
jmz 3 0
ldl 0 0
jmr 2 0
ldl 1 0
;; ending not
jmz 38 0
;; starting set!
;; starting -
ldi 4 0
ldi 5 0
sub 0 0
;; ending -
sta 8 0
;; ending set!
;; starting cond
;; starting <
;; starting mag
lda 8 0
len 0 0
;; ending mag
lda 7 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 12 0
;; starting set!
lda 8 0
sta 6 0
;; ending set!
;; starting set!
;; starting mag
lda 6 0
len 0 0
;; ending mag
sta 7 0
;; ending set!
;; ending cond
;; ending cond
;; starting set!
;; starting +
lda 5 0
ldl 1 0
;; ending +
sta 5 0
;; ending set!
;; starting <
lda 5 0
ldl 532 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 2 0
jmr -72 0
;; ending loop
;; starting set!
ldl 512 0
sta 5 0
;; ending set!
;; starting write!
;; starting +
;; starting *
ldi 9 0
ldl 0.9900000095 0
mul 0 0
;; ending *
;; starting *
;; starting +
;; starting *
;; starting -
ldi 4 0
ldlv 0 0
nop 0 0
sub 0 0
;; ending -
ldl 0.05000000075 0
mul 0 0
;; ending *
;; starting *
;; starting normalise
lda 6 0
nrm 0 0
;; ending normalise
ldl -0.150000006 0
mul 0 0
;; ending *
;; ending +
ldl 0.009999999776 0
mul 0 0
;; ending *
;; ending +
sti 9 0
;; ending write!
;; starting write!
;; starting +
ldi 9 0
ldi 4 0
;; ending +
sti 4 0
;; ending write!
;; starting set!
ldl 9999 0
sta 7 0
;; ending set!
;; starting set!
;; starting +
lda 9 0
ldl 1 0
;; ending +
sta 9 0
;; ending set!
;; starting set!
;; starting +
lda 4 0
ldl 1 0
;; ending +
sta 4 0
;; ending set!
;; starting <
lda 4 0
ldl 532 0
jlt 3 0
ldl 1 0
jmr 2 0
ldl 0 0
;; ending <
jmz 2 0
jmr -160 0
;; ending loop
;; starting set!
ldl 512 0
sta 4 0
;; ending set!
;; starting set!
ldl 1024 0
sta 9 0
;; ending set!
ldl 1 0
jmz 2 0
jmr -173 0
;; ending loop
;; ending let
```

Ouya development experiments

The Ouya is a tiny games console designed to promote indie games rather than traditional high budget productions. It’s cheap compared to standard games hardware, and all the games are free to play, at least in demo form. It’s very easy to start making games for as it’s based on Android – you can plug in a USB cable and treat it like any other Android device. You also don’t need to sign anything to start building stuff – it’s just a case of adding one line to your AndroidManifest.xml to tell the Ouya that the program is a game:

```
<category android:name="tv.ouya.intent.category.GAME"/>
```

and adding a 732×412 icon at “res/drawable-xhdpi/ouya_icon.png” so it shows up on the menu.

There is a lot to like about the Ouya’s philosophy, so I tried out some graphics experiments with it to get better acquainted:

This program was made using Jellyfish, part of my increasingly odd rendering stack, which started out as a port of Fluxus to the PS2. It’s a type of microcode written inside TinyScheme and running in a separate virtual machine, which makes it possible to process a lot of geometry without garbage collection overheads. At some point I might write a compiler for this, but writing the code longhand for now means I can tweak the instruction set and get a better understanding of how to use it. Here is the microcode for the ribbons above; each one runs at 20,000 cycles per frame (so about 1.2MHz at 60fps):

```
;; register section
8 20000 0 ;; control (pc, cycles, stack)
mdl-size prim-tristrip 1 ;; graphics (size, primtype, renderhints)
0 0 0 ;; pos
0 0 0 ;; sensor addr

;; program data section
mdl-start 0 0     ;; 4 address of current vertex
mdl-start 0 0     ;; 5 address of accum vertex (loop)
0 0 0             ;; 6 influence
0 0 0             ;; temp

;; code section
;; add up differences with every other vertex
ldi  4  0         ;; load current vertex
ldi  5  0         ;; load accum vertex
sub  0  0         ;; get the difference
lda  6  0         ;; load the accumulation
nrm  0  0         ;; normalise
sta  6  0         ;; store accumulation

;; accumulation iteration
jlt  2  0         ;; exit if greater than model end (relative address)

;; end accum loop
;; push away from other verts
lda  6  0         ;; load accum
ldl -0.1 0        ;; reverse & make smaller
mul 0 0

;; attract to next vertex
ldi 4 0           ;; load current
ldi 4 1           ;; load next
sub 0 0           ;; get the difference
ldl 0.4 0
mul 0 0           ;; make smaller

;; do the animation
ldi 4 0           ;; load current vertex
sti 4 0           ;; write to model data

;; reset the outer loop
ldl 0 0           ;; load zero
sta 6 0           ;; reset accum
ldl mdl-start 0
sta 5 0           ;; reset accum address

jlt 2  0          ;; if greater than model (relative address)
jmp 8  0          ;; do next vertex

;; end: reset current vertex back to start
ldl mdl-start 0
sta 4 0

;; waggle around the last point a bit
lda mdl-end 0     ;; load vertex pos
rnd 0 0           ;; load random vert
ldl 0.5 0
mul 0 0           ;; make it smaller
sta mdl-end 0     ;; write to model

jmp 8 0
```

Mongoose 2000

Mongoose 2000 is a system I’m developing for the Banded Mongoose Research Project. It’s a behavioural recording system for use in remote areas with sporadic internet or power. The project’s field site is in the Ugandan countryside and the system needs to run for long time frames, so there are big challenges when it comes to setting it up and debugging it remotely.

In order to make this work we’re using a Raspberry Pi as a low power central wifi node, allowing Android tablets to communicate with each other and synchronise data. There are a couple of types of observations we need to record:

1. Pack composition: including presence in the pack, individual weights and pregnancy state.
2. Pup focal: studies of individual pups – who feeds them, and when they feed themselves or play.
3. Group events: warning calls, moving locations, fights with other packs.

We also need to store and manage the pack information – the names, collar and chip ids of the individual animals. The data is passed around a bit like this:

The interface design on the tablets is very important. Things may happen quickly, often at the same time (for instance group events happening while a pup focal observation is being carried out), so we need multiple simultaneous things on screen, and the priority has to be on responsiveness and speed rather than initial ease of use. For these reasons it has similarities to live music performance interfaces. We can also take advantage of the storage on the tablets to duplicate the data on the Raspberry Pi for redundancy. Data is transferred from the field site by downloading the entire database onto the Android tablets, which can then be emailed over the normal internet – either when it’s working locally, or by taking the tablets into the nearby town where bandwidth is better.

The project is a mix of cheap, replaceable hardware and mature, well-used software. Raspberry Pis mean we can afford a backup or two on site, along with plenty of replacement sdcards with the OS cloned. The observation software can also be updated over the Android play store (for bug fixes, or changes to the data gathered) without any changes required on the Raspberry Pi. The platform is based on the one I built for the ‘Crap App’, along with experimental work I did on bike mounted wifi nodes with Kaffe Matthews, and includes SQLite for the underlying database on both platforms (providing atomic writes and journalling), with TinyScheme on Android and Racket on the Raspberry Pi, allowing me to share a lot of the code between the two.
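The atomic writes and journalling we lean on are standard SQLite behaviour; a minimal sketch of how that looks from Python (the table name and columns here are made up for illustration, not the app's actual schema):

```python
import os
import sqlite3
import tempfile

# Open a database file and switch on write-ahead logging, so losing
# power mid-write leaves the last committed transaction intact.
path = os.path.join(tempfile.gettempdir(), "mongoose.db")
db = sqlite3.connect(path)
db.execute("PRAGMA journal_mode=WAL")
db.execute("CREATE TABLE IF NOT EXISTS stream "
           "(entity TEXT, key TEXT, value TEXT)")

# each transaction is atomic: it reaches the disk in full, or not at all
with db:
    db.execute("INSERT INTO stream VALUES (?, ?, ?)",
               ("pup-focal-pupaggr", "level", "Chase"))
```

This all-or-nothing property is what makes sporadic power tolerable: a half-recorded observation never ends up in the export.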

Sonic Bike Hacklab Part 3: The anti-cloud – towards bike to bike mesh networking

[Continued from part 2] One of the philosophies that pre-dates my involvement with the sonic bikes project is a refusal of cloud technologies – avoiding the use of a central server and providing everything required (map, sounds and computation) on board the bikes. As the problems with cloud technology become better known, art projects like this are a good way to creatively prototype alternatives.

The need to abstractly “get the bikes to talk to one another” beyond our earlier FM transmission experiments implies some kind of networking, and mesh networking provides non-hierarchical peer to peer communication, appropriate if you want to form networks between bikes on the street over wifi (which may cluster at times, and break up or reform as people go in different directions). After discussing this a bit with hacklab participant and fellow Beagleboard enthusiast Adam Parkinson I thought this would be a good thing to spend some time researching.

The most basic networking information we can detect with wifi is the presence of a particular bike, and we decided to prototype this first. I’d previously got hold of a low power wifi usb module compatible with a Raspberry Pi (which I found I could run with power from the bike’s beagleboard usb!), and we could use an Android phone on another bike, running fluxa to plug the signal strength into a synth parameter:

It’s fairly simple to make an ad-hoc network on the Raspberry Pi via command line:

```
ifconfig wlan0 down
iwconfig wlan0 channel 4
iwconfig wlan0 mode ad-hoc
iwconfig wlan0 essid 'bikemesh'
ifconfig wlan0 192.168.2.1
```

On the Android side, the proximity synth software continuously measures the strength of the wifi network from the other bike, using a WifiScanReceiver we set up like so:

```
wifi = (WifiManager) getSystemService(Context.WIFI_SERVICE);
wifi.startScan();
registerReceiver(new WiFiScanReceiver(),
                 new IntentFilter(
                     WifiManager.SCAN_RESULTS_AVAILABLE_ACTION));
```

The WiFiScanReceiver is a subclass of BroadcastReceiver that re-triggers the scan process each time results arrive. This results in reasonably high frequency scanning – a couple of scans a second or so – and we also check the SSID names of the networks around the bike for the correct “bikemesh” node:

```
import java.util.List;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.net.wifi.ScanResult;
import android.net.wifi.WifiManager;
import android.util.Log;

public class WiFiScanReceiver extends BroadcastReceiver {
    private static final String TAG = "WiFiScanReceiver";

    static public int Level;

    public WiFiScanReceiver() {
        super();
        Level = 0;
    }

    @Override
    public void onReceive(Context c, Intent intent) {
        List<ScanResult> results = ((Earlobes)c).wifi.getScanResults();

        for (ScanResult result : results) {
            if (result.SSID.equals("bikemesh")) {
                Log.i(TAG, String.format("bikemesh located: strength %d",
                                         result.level));
                Level = result.level;
            }
        }
        // kick off the next scan straight away
        ((Earlobes)c).wifi.startScan();
    }
}
```

The synth was also using the accelerometers, but when you were close to the other bike it ramped up the cutoff frequency of a low pass filter on some white noise and increased the modulation on the accelerometer driven sine waves. The result of such a simple setup was quite surprising: it immediately turned into a game, bike “hide and seek” – as rider of the proximity synth bike you wanted to hunt out where the wifi bike was, while its rider would be trying to escape. The range was surprisingly long, about halfway across London Fields park. Here is an initial test of the setup (we made the sounds a bit more obvious in later tests):
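The mapping itself is simple; here is a sketch of the idea in Python (the RSSI range and frequency limits are my guesses for illustration, not the values we actually used):

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def rssi_to_cutoff(level_dbm, lo=-90, hi=-40,
                   min_hz=200.0, max_hz=8000.0):
    """Map wifi signal strength (dBm) to a low-pass cutoff frequency:
    the closer the other bike, the brighter the filtered noise."""
    t = (clamp(level_dbm, lo, hi) - lo) / (hi - lo)
    return min_hz + t * (max_hz - min_hz)
```

Feeding the `Level` value from the scan receiver through a mapping like this each frame is enough to make proximity audible.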

With the hardware and some simple software tested, the next stage would be to run multiple wifi nodes and get them to connect and form a mesh network. I got some way into using Babel for this, which is very self contained and compiles and runs on both the Beagleboard and the Raspberry Pi. The other side to this is what kind of things we want to do with such an “on the road” system – how do we notate and artistically control what happens over a sonic bike mesh network?

Some ideas we had included recording sounds and passing them between bikes, or each bike forming a synth node, so you create and change audio depending on who is around you and what the configuration is. I made a few more notes on the technical stuff here.

Sonic Bike Hacklab Part 2: FM accelerometer transmissions

[Continued from part 1] On day one, after we introduced the project and the themes we wanted to explore, Ryan Jordan had a great idea of how to prototype the bike-bike communication using FM radio transmissions. He quickly freeform built a short range FM transmitter powered by a 9v battery.

The next thing we needed was something to transmit – and another experiment was seeing how accelerometers responded during bike riding on different terrains. I’d been playing with running the fluxa synth code in Android native audio for a while, so I plugged the accelerometer input into parameters of a simple ring modulation synth to see what would happen. We set off with the following formation:

The result was that the vibrations and movements of a rider were being transmitted to the other bikes for playback, including lots of great distortion and radio interference. As the range was fairly short, it was possible to control how much of the signal you received – as you cycled away from the “source cyclist”, static (and some BBC radio 2) started to take over.

We needed to tune the sensitivity of the accelerometer inputs – this first attempt was a little too glitchy and overactive, and the only changes really discernible were between the bike moving and standing still (it sounded like a scifi laser battle in space). One of the great things about prototyping with Android was that we could share the package around and run it on loads of phones, so we went out again with three bikes, each playing back its own movements with different synth settings.
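One standard way to do that tuning, sketched here in Python (the smoothing constant is illustrative, not what we shipped), is to low-pass the accelerometer magnitude before it drives the synth, so only sustained movement changes the sound rather than every jolt:

```python
import math

class AccelSmoother:
    """One-pole low-pass filter on accelerometer magnitude,
    to tame glitchy, overactive parameter changes."""
    def __init__(self, coeff=0.9):
        self.coeff = coeff   # closer to 1.0 = heavier smoothing
        self.value = 0.0

    def update(self, ax, ay, az):
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        # blend the new magnitude into the running value
        self.value = self.coeff * self.value + (1 - self.coeff) * mag
        return self.value
```

Fed a constant input, the output climbs smoothly towards it instead of jumping, which is exactly the difference between “laser battle” and something playable.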

Sonic Bike Hacklab: Part 1

Time to report on the sonic bike hacklab Kaffe Matthews and I put on in AudRey HQ in Hackney. We had a sunny and stormy week of investigation into sonic bike technology. After producing three installations with sonic bikes, the purpose of the lab was to open the project up to more people with fresh ideas, as well as a chance to engage with the bikes in a more playful research oriented manner without the pressure of an upcoming production.

For each of the three previous installations – in Ghent, on Hailuoto island in Finland, and in Porto – we’ve used the same technology: a Beagleboard using a GPS module to trigger samples that play back over speakers mounted on the handlebars. The musical score is a map, created using Ushahidi, consisting of zones tagged with sample names and playback parameters, which the bikes carry around with them.
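The zone-triggering logic boils down to a point-in-region test against the map score on each GPS fix. Here is a minimal sketch assuming circular zones (the real maps use Ushahidi polygons, and the zone data here is invented for illustration):

```python
import math

def in_zone(lat, lon, zone):
    """True if a GPS fix falls inside a circular zone.
    zone = (lat, lon, radius_metres, sample_name)."""
    zlat, zlon, radius, _ = zone
    # rough equirectangular distance -- fine at zone scale
    dx = math.radians(lon - zlon) * math.cos(math.radians(zlat))
    dy = math.radians(lat - zlat)
    return 6371000.0 * math.hypot(dx, dy) < radius

def samples_to_play(lat, lon, zones):
    """Names of all samples whose zones contain the current fix."""
    return [z[3] for z in zones if in_zone(lat, lon, z)]
```

Each bike just re-runs this on every fix and starts or stops samples as zones are entered and left.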

We decided to concentrate on two areas of investigation, using the bike as a musical instrument and finding ways to get the bikes to talk to each other (rather than being identical independent clones). We had a bunch of different components to play with, donated by the participants, Kaffe and I – while the bikes already provided power via 12v batteries, amplification and speakers. We focused on tech we could rapid prototype with minimal fuss.

The next few posts will describe the different experiments we carried out using these components.

DORIS on the high seas

Yesterday was the first test of the full DORIS marine mapping system I’m developing with Amber Teacher and David Hodgson at Exeter University. We went out on a fishing boat from Mylor harbour for a 5 hour trip along the Cornish coast. It’s a quiet season for lobsters at the moment, so this was an opportunity to practise the sampling without too much pressure. Researcher Charlie Ellis was working with Hannah Knott; they work with the National Lobster Hatchery and need to take photos of hundreds of lobsters and combine them with samples of their genetic material.

By going out on the boats they get accurate GPS positions, in order to determine detailed population structures, and can sample lobsters that are small or carrying eggs and need to be returned to the sea, as well as the ones the fishermen take back to shore to be sold. Each photograph uses a cunning visual information system of positioned objects to indicate sex, whether the lobster is for return or removal, and a ruler for scale.

Android Camera Problems

The DORIS marine mapping platform is taking shape. For this project, touch screens are not great for people wearing gloves in small fishing boats – so one of the things the android app needs to do is make use of physical keys. In order to do that for taking pictures, I’ve had to write my own camera android activity.

It seems that there are differences in underlying camera behaviour across devices – specifically the Acer E330 model we have to take out on the boats. Nearly every time takePicture() is called, the supplied callback functions fail to fire. I’ve tested callbacks for the shutter, raw, jpeg and error events, and also tried turning off the preview callback beforehand as suggested elsewhere – no luck on the Acer, while it always works fine on HTC.

The only solution I have so far is to close and reopen the camera just before takePicture() is called, which seems to work as intended. As this takes some seconds to complete, it’s also important (since it’s bound to a key up event) to prevent the camera from starting a new picture before it’s finished processing the previous one, as that causes further callback confusion.

```
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.hardware.Camera;
import android.hardware.Camera.CameraInfo;
import android.hardware.Camera.PictureCallback;
import android.util.Log;

class PictureTaker
{
private Camera mCam;
private Boolean mTakingPicture;

public PictureTaker() {
mTakingPicture=false;
}

public void Startup(SurfaceView view) {
mTakingPicture=false;
OpenCamera(view);
}

private void OpenCamera(SurfaceView view) {
try {
mCam = Camera.open();
if (mCam == null) {
Log.i("DORIS","Camera is null!");
return;
}
mCam.setPreviewDisplay(view.getHolder());
mCam.startPreview();
}
catch (Exception e) {
Log.i("DORIS","Problem opening camera! " + e);
return;
}
}

private void CloseCamera() {
mCam.stopPreview();
mCam.release();
mCam = null;
}

public void TakePicture(SurfaceView view, PictureCallback picture)
{
if (!mTakingPicture) {
mTakingPicture=true;
CloseCamera();
OpenCamera(view);

try {
mCam.takePicture(null, null, picture);
}
catch (Exception e) {
Log.i("DORIS","Problem taking picture: " + e);
}
}
else {