This autumn we have been developing a new version of the Farm Crap App with Duchy College and Rothamsted Research. This project is about tackling the difficulties farmers have using natural fertilisers while needing to report realistic figures to government agencies – and understanding the guidance those agencies provide. The original version was a big success, but it only contained information on a handful of manures and didn't deal with the nutrient content of the soil.
In the "Pro edition" we are adding a lot more detail – the nutrients already in the soil can be estimated based on the type of the soil and the previous crops grown there. The needs of specific crops can also be added – we are concentrating on grass, barley and wheat for the moment – as this is a huge area to deal with. Once you have this information you can subtract it from the nutrients added by the manure to come up with a picture of which manure is best suited to a large range of crop, soil, rainfall and seasonal situations.
We are used to dealing with scientific data straight from research, but this is very different: here the original research has been processed by the agencies and civil servants into a set of tables aimed at farmers and consultants. A lot of the time you get the feeling that there is an underlying model which it would be good to have access to. Meanwhile, we are taking these tables and converting them into a usable, minimal set of options that can be accessed and played with in the field – where the decisions happen.
We are also adding a new mapping feature, which was very much the most requested feature from the farmers and producers we tested the app with. This lets you draw each field on the map to record it, which also means we can estimate its size fairly accurately from the GPS coordinates.
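For anyone curious about the maths, here is a rough sketch (not the app's actual code) of how an area can be estimated from a ring of GPS points: project each latitude/longitude onto a local flat plane and run the shoelace formula over the result.

    ;; a rough sketch, not the app's code: estimate a field's area in
    ;; square metres from a list of (lat lon) points around its boundary
    (define earth-radius 6371000) ;; metres
    (define (radians d) (* d (/ 3.141592653589793 180)))

    ;; project a (lat lon) point into metres east/north of a reference
    ;; point (an equirectangular approximation - fine at field scale)
    (define (to-metres ref p)
      (list (* earth-radius (cos (radians (car ref)))
               (radians (- (cadr p) (cadr ref))))
            (* earth-radius (radians (- (car p) (car ref))))))

    ;; shoelace formula over the projected boundary points
    (define (field-area points)
      (let* ((pts (map (lambda (p) (to-metres (car points) p)) points))
             (next (append (cdr pts) (list (car pts)))))
        (abs (/ (apply +
                       (map (lambda (a b)
                              (- (* (car a) (cadr b)) (* (car b) (cadr a))))
                            pts next))
                2))))

    ;; e.g. a made-up four-point boundary roughly 110m x 107m:
    ;; (field-area '((50.1500 -5.0700) (50.1510 -5.0700)
    ;;               (50.1510 -5.0685) (50.1500 -5.0685)))
    ;; => about 11900 square metres (roughly 1.2 hectares)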
The UAV toolkit's second project phase is now complete. The first development sprint at the start of the year was research into what we could use an average phone's sensors for, resulting in a proof-of-concept remote sensing Android app that allowed you to visually program different scripts, which we then tested on some drones, a radio controlled plane and a kite.
This time we had a specific focus on environmental agencies. Working with Katie Threadgill at the Westcountry Rivers Trust has meant we've had to think about how this could be used by real people in an actual setting (farm advisors working with local farmers). Making something cheap, open source and easy to use, yet open ended, has been the focus – and we are now looking at providing WRT with a complete toolkit comprising a drone (for good weather), a kite (for bad weather, with no flight licence required) and an Android phone, so they don't need to worry about destroying their own if something goes wrong. Katie has produced this excellent guide on how the app works.
The idea of appropriate technology has become an important philosophy for the projects we are developing at Foam Kernow, in conjunction with unlikely connections to livecoding and our wider arts practice. For example, in the Sonic Bike project we restricted the technology from the start so that no 'cloud' network connections are required and all the data and hardware has to fit on the bike – with no data "leaking" out.
With the UAV toolkit, the open-endedness of providing a visual programming system that works on a touchscreen results in an application flexible enough to be used in ways and places we can't predict – for example in crisis situations, where the power, networking or hardware needed to set up remote sensing devices is not available when you need them most. We are working towards a self-contained system, and what I've found interesting is how many interface and programming 'guidelines' I have to bend to make this possible – open-endedness is very much against the grain of contemporary software design philosophy.
The "app ecosystem" is ultimately concerned with elevator pitches – doing one thing, and boiling it down to the fewest possible actions needed to achieve it. This is not a problem in itself, but the assumption that this is the only philosophy worth considering is wrong. One recent experience that comes to mind is having to make and upload banner images of an exact size to the Play Store before it would allow me to release an important fix for Mongoose 2000, which is only ever intended to have 5 or 6 users.
For the UAV toolkit, our future plans include stitching together photos captured on the phone and producing a single large map without the need for any other software on a laptop. There are also interesting possibilities for distributed networking with Bluetooth and similar radio systems – for example sending code between phones, which is needed as there is currently no way to distribute scripts amongst users. This could also be a way of creating distributed processing – controlling one phone in a remote location from another, via code sent over ad hoc wifi or SMS for example.
In order to make the software usable in this case, we decided on two directions. On the one hand there needs to be a simple way to start and stop programs (or "flight modes") that read sensor data, as well as to define certain global settings, e.g. flight altitude, desired image coverage etc. At the same time, the code that defines what these do needs to remain programmable in the app – and more complex behaviours need to be possible in order to support both kites and UAVs. Our philosophy is that it has to be open ended, as we don't know where the toolkit might be useful (e.g. crisis mapping situations) or what new sensors will be available on a device in the future.
The new main screen
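To give a feel for how the two directions fit together, here is a small sketch – the names are illustrative stand-ins, not the toolkit's real primitives. The global settings are just values the main screen exposes, and a flight mode is a small program that reads them:

    ;; illustrative stand-ins only - not the toolkit's real primitives

    ;; global settings the main screen exposes
    (define settings
      '((flight-altitude 100)    ;; metres
        (image-coverage  0.5)))  ;; desired overlap between photos
    (define (setting name) (cadr (assq name settings)))

    ;; a "flight mode" is a small program that can be started and stopped
    ;; from the main screen; shoot-here? stands in for a trigger function
    ;; from the editable function library
    (define (shoot-here? altitude coverage) #t) ;; stub for this sketch
    (define (survey-step)
      (if (shoot-here? (setting 'flight-altitude) (setting 'image-coverage))
          'take-photo
          'wait))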
One specific set of new behaviours we need is for kite mapping. We already have the ability to choose when to take pictures based on GPS and altitude, but with a kite there can be lots of turbulence and the camera is in a much less controlled state, flipping around and taking shots of the sky. So we need to calculate things like jerk from the change in acceleration, and use the orientation sensors to only take photos when the lens is pointing directly down, within some acceptable margin.
Below is a section of the code that calculates whether we are pointing down, using the magnetometer and accelerometer – the drag/drop visual code can now be used to build normal Scheme functions on a touchscreen (a bit like Scheme Bricks). In fact I managed to do all of this work on the phone. There are now two types of code: the main programs or "flight modes" that you can run from the front screen, and a library of editable functions which they use. This means there are now three levels at which the software can be used – using it without needing to see any of the code at all, editing basic behaviour like which sensors' data are captured, and finally modifying the more detailed code to make it do completely new things.
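For readers without the screenshot, here is a rough sketch in plain Scheme of the kind of checks involved – not the exact code in the app, and it assumes sensor readings arrive as simple (x y z) lists:

    ;; a rough sketch, not the app's exact code

    ;; jerk is the rate of change of acceleration - large values mean the
    ;; kite is being thrown around and it's a bad moment for a photo
    (define (jerk accel-now accel-before dt)
      (/ (sqrt (apply + (map (lambda (a b) (let ((d (- a b))) (* d d)))
                             accel-now accel-before)))
         dt))

    ;; when the phone is flat the gravity vector sits almost entirely on
    ;; the z axis, so compare the z component with the vector's length
    ;; and allow some margin for turbulence
    (define (pointing-down? gravity margin)
      (let ((mag (sqrt (apply + (map (lambda (a) (* a a)) gravity)))))
        (> (/ (abs (list-ref gravity 2)) mag) (- 1 margin))))

    ;; e.g. (pointing-down? '(0.4 0.3 9.7) 0.1) => #t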
Mongoose 2000 version 2 is now being used at the Banded Mongoose Research Project field site on the Mweya Peninsula, in the Queen Elizabeth National Park, western Uganda.
We've added two new focal observations – where a single mongoose in a specific life stage is followed and has its activity recorded for 20 minutes. These observations include the different events that can happen (fighting or cooperating with other individuals etc). Nearly all the interfaces are shown below – the system includes adding new packs or individuals, data review and synchronisation with other tablets via the Raspberry Pi.
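Purely as an illustration (this is not the app's actual storage format), the sort of record a 20 minute focal observation boils down to looks something like this:

    ;; purely illustrative - not the app's actual data format
    (define example-focal-observation
      '((pack "pack-1")
        (individual "ind-7")
        (life-stage pup)
        (duration-mins 20)
        (events ((at-secs 130) (type fight)     (with "ind-3"))
                ((at-secs 410) (type cooperate) (with "ind-9")))))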
One of our most ambitious projects, Mongoose 2000, is now up and running after 6 months of testing. This is a Raspberry Pi and Android tablet system to synchronise and store masses of data for a long-running behavioural experiment recording the activities of packs of mongooses at the field site in Uganda. The broad aims of the project are to study mongoose behaviour in order to understand the evolution of society.
The timespan we are working with is long, and the location – while not as remote as it could be (there is some internet access and power) – required some consideration. This was a project where we really needed to employ an appropriate technology approach, which manifests in various ways:
Open source software: this means that Foam Kernow are not a bottleneck to continued development, as we do not have exclusive control over the source code (which is released into the commons). New developers can be found if required (for whatever reason) who do not need to start from nothing – this gives the research team more control and future-proofing.
Use of commodity hardware: it's likely that the hardware in use will become obsolete in this timeframe. The lifespan of Android software should mean it can be installed on compatible devices for a long time, and the team can make use of advances in sensor or battery technology. The Raspberry Pi we are using is already an older model, but as it's a standard Linux setup we can easily move it to other machines in the future. Currently it acts as an 'appliance' which just needs to be turned on, but we can add a web interface to control it from the tablet – or eventually replace it with a peer-to-peer synchronisation system.
The Ugandans working on the project have a healthy DIY relationship with technology: they expect to be able to repair or modify things themselves, and I'd like to figure out ways we can work with this more. The UAV toolkit project gives some indication of what can be done with programming these kinds of devices in the field. Part of the decision on the hardware (and the design of the software, e.g. using a Scheme interpreter) was to use devices that are open to a more end-user programming approach in the future.
Synchronising a tablet with observations previously recorded on the Raspberry Pi:
Some photos taken by the UAV toolkit on a recent flight at our Gyllyngvase beach test site, using a KAP foil 1.6 kite instead of a drone. Kites have many advantages: no flight licences required, no vibration from engines and a fully renewable power source!
We're using a 3D printed mounting plate for the phone, strung from the top of the single line just below the kite. It needs more wind than we had to reach higher altitudes, but first impressions are good. I've also added a new trigger mode to the UAV toolkit programming language that remembers the GPS coordinates where all the photos are taken, so it can build up overlapping images even if the movement is harder to control.
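The idea behind the trigger boils down to a simple distance check – something like the following sketch (the helper names are mine, and this is not the toolkit's exact code):

    ;; a sketch of the idea, not the toolkit's exact code: keep the GPS
    ;; position of every photo and only fire again once we are far
    ;; enough from all of them
    (define photo-locations '())

    ;; approximate distance in metres between two (lat lon) points -
    ;; good enough over the short distances a kite covers
    (define (distance a b)
      (let* ((r 6371000)
             (d2r (lambda (d) (* d (/ 3.141592653589793 180))))
             (dx (* r (cos (d2r (car a))) (d2r (- (cadr b) (cadr a)))))
             (dy (* r (d2r (- (car b) (car a))))))
        (sqrt (+ (* dx dx) (* dy dy)))))

    (define (in-new-location? here spacing)
      (let loop ((ps photo-locations))
        (cond ((null? ps) #t)
              ((< (distance here (car ps)) spacing) #f)
              (else (loop (cdr ps))))))

    (define (remember-photo-location! here)
      (set! photo-locations (cons here photo-locations)))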
The tail of the kite – which turned out to be important for stabilising the flight.
Here is the code using the when-in-new-location trigger to calculate overlap based on the camera angle, GPS and altitude – which ideally should be driven somehow by the length of the line. As an aside, this screenshot was taken in the Chrome browser, which now runs Android apps.
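The calculation itself is simple trigonometry – roughly the following (a sketch of the maths rather than the code in the screenshot, with a made-up lens angle in the example):

    ;; a sketch of the maths, not the code in the screenshot
    (define pi 3.141592653589793)

    ;; width of ground covered by one photo, from altitude (metres) and
    ;; the camera's vertical field of view (degrees)
    (define (ground-footprint altitude camera-angle)
      (* 2 altitude (tan (/ (* camera-angle (/ pi 180)) 2))))

    ;; how far to travel between shots for a given overlap (0 to 1)
    (define (photo-spacing altitude camera-angle overlap)
      (* (ground-footprint altitude camera-angle) (- 1 overlap)))

    ;; e.g. a 60 degree lens at 100 metres altitude with 50% overlap:
    ;; (photo-spacing 100 60 0.5) => roughly 58 metres between photos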
View of ground control from a OnePlus phone mounted on a Y6 UAV:
A report from the first flight test of the new UAV Android software with the Exeter University UAV science group. We had two aircraft: a nice battle-hardened fixed wing RC plane and a very futuristic 3D Robotics RTF Y6. We also had two phones for testing: an old, cheap Acer Liquid Glow E330 (the old lobster phone) and a new, expensive OnePlus One A0001. Both were running the same version (0.2) of the visual programming toolkit which I quietly released yesterday.
Here is the high-tech mounting solution for the RC aircraft. There were a lot of problems with the Acer – I'm not sure if the GPS triggering was happening too fast or if there is a problem with this particular model of phone, but the images appear to be corrupted and overwriting each other (none of this happened in prior testing, of course). Despite this, there were some OK shots, but a lot of vibration from the petrol motor during acceleration.
We tried two types of programs running on the phones: one triggered photos by a simple timer, the other used GPS distance, altitude and camera angle to calculate an overlap coverage. Both seemed to work well, although I need to go through the sensor data for each image to check the coverage by positioning the images using the GPS. One thing I was worried about was the pitch and yaw of the aircraft – but the Y6 was extremely stable in this respect, as was the altitude, which can be controlled automatically at a set height.
The vibration seemed less of a problem on the Y6, but on one of the flights the power button got pressed, bringing up the keylock screen, which annoyingly prevents the camera from working. We did, however, capture lots of sensor data – accelerometer, magnetometer, orientation and gravity – with no problems on the Acer.
The OnePlus phone worked pretty flawlessly overall, and we left it till last as it’s a bit less expendable! It’s possible to mount phones easily underneath the batteries on the Y6 without the need for tape, which looks a bit more professional:
We still have problems with vibration, which seems to cause the bands of fuzziness (see the top and bottom photos), so things to look at next include:
Cushioning for the phone (probably just a small bit of foam).
Reproducing and fixing the Acer camera problem.
Some kind of audio indication from the phone that the camera is working etc.
Try again to lock the keys on the phone or override the key lock screen.
More camera controls, override and lock the exposure.
Output raw files instead of running the JPEG compression in the air! This seems to take longer than actually taking the photo, and we don't care about space on the SD card.
Some screenshots of the UAV livecoding visual programming language. Weather permitting, we're planning some test flights later this week! The first program uses GPS to take photos with an overlap of 50% at 300 metres altitude, based on the vertical camera angle as reported by the device. It assumes the flight orientation is level:
The blocks are all drag and drop and get converted into Scheme code which is run by a modified tinyscheme interpreter. The code can be saved and loaded, and I’m planning to make it possible for people to share code via email.
This is a simpler program which takes a photo every 3 seconds and records data from a handful of sensors to the database:
At the bottom of the screenshot you can see a squashed camera preview – I've tried various approaches (hiding, scaling to 0 pixels etc) but Android requires that a preview exists somewhere in order to take a photo properly. You can also view the recorded data on the device, for checking. There is also a 'flight mode' which locks and turns off the screen, and ignores all button events. On some phones you need to take out the battery to stop the program running, but unfortunately on others you can still use the power button to close the program.
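For a flavour of what that simple program looks like once the blocks are converted into Scheme, here is a sketch – the primitives (take-photo, read-sensor, store) are stand-ins for the toolkit's real ones, stubbed out so the sketch runs on its own:

    ;; stand-in stubs so the sketch runs on its own - in the toolkit
    ;; these would be the real camera, sensor and database primitives
    (define (take-photo) (display "click") (newline))
    (define (read-sensor name) (list name 0 0 0))
    (define (store table data) (display (list table data)) (newline))

    ;; fire the body roughly every n seconds, a fixed number of times
    ;; here so the sketch terminates (the real thing runs until stopped)
    (define (every-n-seconds n body times)
      (if (> times 0)
          (begin (body)
                 ;; on the device a timer or sleep call would go here
                 (every-n-seconds n body (- times 1)))))

    ;; take a photo every 3 seconds and record a couple of sensors
    (every-n-seconds 3
      (lambda ()
        (take-photo)
        (store "sensors" (list (read-sensor 'accelerometer)
                               (read-sensor 'gps))))
      5)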
I've recently begun a new project with Karen Anderson, who runs the UAV research group at the Exeter University Environment and Sustainability Institute. We're looking at using commodity technology like Android phones for environmental research with drones. Ecology research groups and environmental agencies have started using drones as a replacement for expensive and risky light aircraft for gathering data on changes to landscapes due to climate change and erosion. How can we make tools that are simpler and cheaper for them to set up and use? Can our software also be relevant for children using kites in cities to make their own maps, or farmers wishing to record changes to their own fields themselves?
This is a more open ended project than our previous environmental and behavioural projects, so we're able to approach the technology with more of an R&D perspective. One of the patterns I've noticed with this kind of work is that providing scientists with something that meets their immediate needs inspires a ton of new ideas and directions – and I become a bottleneck. Ideally I need to provide something that allows them to build things themselves once they have an understanding of all the possibilities. Adapting to needs 'in the field' is also an important aspect of the kind of work they do – which can take place in remote locations anywhere in the world.
Some time ago I had a go at porting my musical livecoding language Scheme Bricks to Android for the Open Sauces project. I'm now applying it as a way of configuring sensor data acquisition and recording by drag/dropping a visual programming language. It's early days yet – I'm still debugging the (actually rather amazing) Android drag/drop API – but here are some initial screenshots.