Creative Design Informatics for Horticultural Awareness at the End of the World Garden

Thanks to Paul Chaney, who runs The End of the World Garden, we had an opportunity to trial a short workshop based on our Farm Crap App and prototype Allotment Lab on his two-acre forest garden site in Cornwall. This was our contribution to the Bank Holiday Weekend Haymaking Extravaganza (along with a bit of haymaking too).

We started by looking in detail at the Farm Crap App: how it arose from an intersection of government policy and the needs of farmers, providing a way to quantify the nutrients present in natural manures added to the land. We were able to map a section of the End of the World Garden intended for future agricultural experimentation and, based on its crop history and soil type, calculate the additional nutrients required for growing different crops.
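At its heart that calculation is a lookup and a multiply – a minimal sketch, where the nutrient figures, crop requirements and names are placeholders rather than DEFRA’s real numbers:

```python
# Illustrative sketch of the manure nutrient calculation - NOT the app's
# real code or DEFRA's real figures; all values below are placeholders.
# kg of nutrient made available per tonne of manure applied.
NUTRIENTS_PER_TONNE = {
    "cattle_fym": {"N": 1.2, "P2O5": 1.9, "K2O": 7.2},  # placeholder values
    "pig_slurry": {"N": 1.8, "P2O5": 0.8, "K2O": 2.2},  # placeholder values
}

# kg/ha a crop needs - again placeholder numbers.
CROP_REQUIREMENT = {"spring_barley": {"N": 110, "P2O5": 60, "K2O": 75}}

def shortfall(crop, manure, rate_t_per_ha):
    """Nutrients still needed (kg/ha) after spreading manure at the given rate."""
    supplied = {k: v * rate_t_per_ha
                for k, v in NUTRIENTS_PER_TONNE[manure].items()}
    return {k: max(0, need - supplied.get(k, 0))
            for k, need in CROP_REQUIREMENT[crop].items()}

print(shortfall("spring_barley", "cattle_fym", 25))
```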


One of our discoveries was the limitations of the underlying DEFRA data: being based on a particular approach to farming, it uses a pretty broad brush. The necessary simplicity of the data in some areas (for example in the range of crops covered) is not so well suited to smallholders – high-yield crops that maintain profitability are less important to smallholders, who are able to use slower-growing strains. It was also interesting to work with people who had less experience of farming (me included), trying to work out the difference between ‘dribble bars’, ‘splash plates’ and other arcane muck-spreading technology.

We also tested our prototype Allotment Lab. This has been designed to provide a couple of trial walk-through experiments, based on compost quality and soil type estimation, for a more general audience – allotment owners, gardeners and smallholder farmers included.


We tested the soil in a small area of field, attempting to form different shapes with our hands and a little water in order to estimate its type and consistency. The soil in this part of Cornwall consists of a top layer of ‘sandy clay’ (actually containing larger granite particles rather than sand) and a lower layer of ‘silty clay’ laid down by meltwater during the last ice age. Paul dug down to find the lower layer so we could check the difference between them using the test.
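The hand test itself is a little decision tree – here is a simplified sketch of the standard texture-by-feel procedure (not the exact steps in Allotment Lab, which I haven’t reproduced here):

```python
def texture_by_feel(forms_ball, forms_ribbon, feels_gritty, feels_smooth):
    """Very simplified soil texture-by-feel classification - the real
    procedure also grades ribbon length and several in-between classes."""
    if not forms_ball:
        return "sand"
    if not forms_ribbon:
        return "loamy sand"
    if feels_gritty:
        return "sandy clay"   # gritty feel + holds a ribbon
    if feels_smooth:
        return "silty clay"   # smooth feel + holds a ribbon
    return "clay"

# The top layer at the End of the World Garden: gritty, holds a ribbon.
print(texture_by_feel(True, True, True, False))   # -> sandy clay
```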


This workshop was useful in that it highlighted the limitations of top-down governmental data and the potential for citizen science to allow people to gather their own knowledge based on their specific circumstances. One of our challenges is coming up with a range of experiments that can augment official data and allow farmers, allotment owners and others to ask questions, collect data and make decisions that will help them in increasingly difficult social, climatic and economic circumstances.

Crab camouflage citizen science game

The Natural History Museum London commissioned us to build a crab catching camouflage game with the Sensory Ecology Group at the University of Exeter (who we’ve worked with previously on the Nightjar games and Egglab). This citizen science game is running on a touchscreen as part of the Colour and Vision exhibition, which is on through the summer. Read more about it here.


Red King progress, and a sonification voting system

We have now launched the Red King simulation website. The fundamental idea of the project is to use music and online participation to help understand a complex natural process. Dealing with a mathematical model is more challenging than a lot of our citizen science work, where the connection to organisms and their environments is clearer. The basic problem here is how to explore a vast parameter space in order to find patterns in co-evolution.

After some initial experiments we developed a simple prototype native application (OSX/Ubuntu builds) in order to check that we understood the model properly by running and tweaking it.


The next step was to convert this into a library we could bind to python. With this done we can run the model on a server and have it autonomously update its own website via django. This way we can continuously run the simulation, storing randomly chosen parameters to build a database and display the results. I also set up a simple filter that runs each simulation for 100 timesteps and discards parameter sets that didn’t look so interesting (the ones that went extinct, or didn’t result in multiple host or virus strains).
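The server loop then amounts to something like the following sketch – here `run_model` is a stub standing in for the python binding to the real simulation, and the parameter names and ranges are assumptions:

```python
import random

def random_params():
    # Each parameter picked uniformly within assumed viable limits.
    return {"host_tradeoff": random.uniform(0.0, 1.0),
            "parasite_tradeoff": random.uniform(0.0, 1.0),
            "infection_rate": random.uniform(0.0, 5.0)}

def looks_interesting(result):
    """Discard runs that went extinct or never diversified."""
    return (not result["extinct"]
            and (result["host_strains"] > 1 or result["parasite_strains"] > 1))

def run_model(params, timesteps=100):
    # Stub so the sketch runs - the real call goes to the C++ model binding.
    return {"extinct": random.random() < 0.3,
            "host_strains": random.randint(1, 4),
            "parasite_strains": random.randint(1, 4)}

kept = [p for p in (random_params() for _ in range(1000))
        if looks_interesting(run_model(p))]
print(len(kept), "parameter sets kept for the database")
```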

There is also now a twitter bot that posts new simulations/sonifications as they appear. One nice thing I’ve found with this is that I can use the bot timeline to make notes on changes by tweeting them. It also gives interested people an easy way to approach the project, and people are already starting discussions with the researchers on twitter.
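The bot itself needs very little – e.g. with the tweepy library (a sketch of the general shape, not the project’s actual code; the credentials and message format are placeholders):

```python
import tweepy

# Placeholder credentials - just the general shape of a bot that
# announces each new simulation/sonification as it is stored.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

def announce(sim_id, url):
    # Tweet a link to the new sonification; the timeline doubles as a
    # changelog when notes are tweeted by hand.
    api.update_status(f"New co-evolution sonification #{sim_id}: {url}")
```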


Up to now, this has simply been a presentation of a simulation – how can we involve people so they can help? This is a small project so we have to be realistic about what is possible, but eventually we need a simple way to test how the perception of a sonification compares with a visual display. Amber’s been doing research into the sonification side of the project here. More on that soon.

For now I’ve added a voting system, where anyone can up- or down-vote the simulation music. This is used as a way to tag patterns for further exploration. Parameter sets are ranked using the votes – the higher the votes, the higher the likelihood of being picked as the basis for new simulations. When we pick one, we randomise one of its parameters to generate new audio. Simulations store their parents, so you can explore the hierarchy and see which changes cause different patterns. An obvious addition would be to hook up twitter retweets and favourites for the same purpose.
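The selection logic only needs a few lines – something like this sketch, where the field names and parameter ranges are assumptions:

```python
import random

def pick_parent(population):
    """Roulette-wheel selection: higher-voted parameter sets are more
    likely to seed new simulations (votes floored at 1 so every set
    keeps some chance of being picked)."""
    weights = [max(1, p["votes"]) for p in population]
    return random.choices(population, weights=weights, k=1)[0]

def mutate(parent):
    """Copy the parent's parameters, randomise one of them, and record
    the parent id so the hierarchy can be explored later."""
    child = dict(parent["params"])
    key = random.choice(list(child))
    child[key] = random.uniform(0.0, 1.0)   # assumed viable range
    return {"params": child, "parent": parent["id"], "votes": 0}

population = [
    {"id": 1, "params": {"host_tradeoff": 0.3, "infection_rate": 2.1}, "votes": 5},
    {"id": 2, "params": {"host_tradeoff": 0.7, "infection_rate": 0.4}, "votes": 1},
]
print(mutate(pick_parent(population)))
```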


Red King – listening to coevolution

Scientific models are used by researchers in order to understand interactions that are going on around us all the time. They are like microscopes – but rather than observing objects and structures, they focus on specific processes. Models are built from the ground up from mathematical rules that we infer from studying ecosystems, and they allow us to run and re-run experiments to gain understanding, in a way which is not possible using other methods.

I’ve managed to reproduce many of the patterns of co-evolution between the hosts and parasites in the Red King model by tweaking the parameters, but the points at which certain patterns emerge are very difficult to pin down. I thought a good way to start building an understanding of this would be to pick random parameter settings (within viable limits) and ‘sweep’ paths between them, looking for any sudden points of change, for example:

[figure: random_path-21-0-big]

This is a row of simulations, each run for 600 timesteps with time running downwards. The parasite is red and the host is blue, and both organism types are overlaid so you can see them reacting to each other through time. Each run has a slightly different parameter setting, gradually changing between the two endpoint settings. Halfway through there is a sudden state change: from being unstable it suddenly locks into a stable situation on the right hand side.
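Each sweep is just a linear interpolation between two randomly chosen parameter settings – roughly like this sketch (the parameter names are assumptions, not the model’s real ones):

```python
import random

PARAMS = ["host_tradeoff", "parasite_tradeoff", "infection_rate"]

def random_setting():
    return {p: random.random() for p in PARAMS}

def sweep(a, b, steps=100):
    """Yield parameter settings on a straight-line path from a to b, so
    sudden pattern changes show up as discontinuities along the row of
    simulations."""
    for i in range(steps):
        t = i / (steps - 1)
        yield {p: a[p] * (1 - t) + b[p] * t for p in PARAMS}

for setting in sweep(random_setting(), random_setting(), steps=5):
    print(setting)
```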

I’ve actually mainly been exploring this through sound so far – I’ve built a setup where the trait values are fed into additive synthesis (adding sine waves together). It seems appropriate to keep the audio technique as direct as possible at this stage so any underlying signals are not lost. Here is another parameter sweep image (100 simulations) and the sonified version, which comprises 2500 simulations, overlapped to increase sound density.

[figure: random_path-23-0-big]

You can hear quite a few shifts and discontinuities between different branching patterns that emerge at different points – writing this I realise an animated version might be a good idea to try too.

Stereo is done by slightly changing one parameter (the host tradeoff curve) across the left and right channels – so it gives the changes a sense of direction, and you are actually hearing 5000 simulations being run in total, in both ears. All the code so far (very experimental at present) is here. The next thing to do is to take a step back and think about the best way to invite people in to experience this strange world.
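Here is a minimal numpy sketch of this additive synthesis approach – each trait value becomes the amplitude of one sine partial, and the stereo effect comes from generating the two channels from slightly offset simulations (the frequencies, durations and random stand-in traits below are all assumptions):

```python
import numpy as np

SR = 44100  # sample rate

def sonify(trait_values, seconds=0.1, base_hz=110.0):
    """Additive synthesis: one sine partial per trait, with amplitude
    taken directly from the trait value so underlying signals aren't
    lost behind any clever mapping."""
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    out = np.zeros_like(t)
    for i, amp in enumerate(trait_values):
        out += amp * np.sin(2 * np.pi * base_hz * (i + 1) * t)
    return out / max(1, len(trait_values))

# Stereo: the real setup re-runs the simulation with one parameter
# slightly offset per channel; random traits stand in here so it runs.
left = np.concatenate([sonify(np.random.rand(8)) for _ in range(10)])
right = np.concatenate([sonify(np.random.rand(8)) for _ in range(10)])
stereo = np.stack([left, right], axis=1)
print(stereo.shape)
```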

Here are some more tests:

[figures: random_path-21-0-big, random_path-38-0-big]

Red King: Host/Parasite co-evolution citizen science

A new project begins, on the subject of ecology and evolution of infectious disease. This one is a little different from a lot of Foam Kernow’s citizen science projects in that the subject is theoretical research – and involves mathematical simulations of populations of co-evolving organisms, rather than the direct study of real ones in field sites etc.

The simulation, or model, we are working with is concerned with the co-evolution of parasites and their hosts. Just as in more commonly known simulations of predators and prey, there are complex relationships between hosts and parasites – for example if parasites become too successful and aggressive the hosts start to die out, in turn reducing the parasite populations. Hosts can evolve to resist infection, but this has an overhead that starts to become a disadvantage when most of a population is free of parasites again.
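As a caricature of that feedback loop (nothing like the real research model, which evolves host and parasite traits too), the basic boom-and-bust cycle can be sketched in a few lines:

```python
# Toy host/parasite feedback loop - a caricature of the cycle described
# above, not the Red King model; all coefficients are made up.
host, parasite = 100.0, 10.0
for step in range(10):
    infected = 0.01 * host * parasite            # encounters cause infection
    host += 0.1 * host - infected                # hosts grow, infection kills
    parasite += 0.5 * infected - 0.2 * parasite  # parasites need hosts to spread
    print(f"{step}: hosts={host:.1f} parasites={parasite:.1f}")
```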

[figure: example evolution processes with different host/parasite trade-offs]

Over time these relationships shift and change, and this happens in different patterns depending on the starting conditions. Little is known about the categorisation of these patterns, or even the range of relationships possible. The models used to simulate them are still a research topic in their own right, so in this project we are hoping to explore different ways people can both control a simulation (perhaps with an element of visual live programming) and experience the results in a number of ways – via sonification or a game world. The eventual, ambitious aim is to provide a way for people to feed their discoveries back into the research.


Hungry birds citizen science at the Paris Natural History Museum

Some photos of Mónica Arias running her “Hungry Birds” butterfly catching experiment at the Muséum national d’Histoire naturelle in Paris.


The Museum’s internet capability was challenging, so we ran the game server on a Raspberry Pi with an ad hoc wifi network and handled the data collection ourselves. The project is concerned with analysing pattern recognition and behaviour in predators. We’re using ten different wing patterns (or morphs), assigning one at random to be the toxic one, and looking at how long it takes people to learn which are edible.
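In game terms the core of the experiment is very small – something like this sketch, where the morph names and log format are illustrative rather than the game’s actual structures:

```python
import random
import time

MORPHS = [f"morph_{i}" for i in range(10)]   # ten wing patterns
toxic = random.choice(MORPHS)                # one assigned toxic at random

def record_peck(player_log, morph):
    """Log every catch attempt so learning speed - how quickly players
    stop catching the toxic morph - can be analysed later."""
    player_log.append({"time": time.time(),
                       "morph": morph,
                       "was_toxic": morph == toxic})

log = []
record_peck(log, random.choice(MORPHS))
print(log)
```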


New camouflage pattern engine

One of the new projects we have at Foam Kernow is an ambitious extension of the Egglab player-driven camouflage evolution game, with Laura Kelley and Anna Hughes at Cambridge Uni.

As part of this we are expanding the patterns possible with the HTML5 canvas based pattern synthesiser to include geometric designs. Anna and Laura are interested in how camouflage has evolved to disrupt the perception of movement, so we need a similar citizen science game system to the egg one, but with different shapes that move at different speeds.

Here are some test mutations of un-evolved random starting genomes:

[three test pattern images]

This is an example pattern program:

[figure: 5gen]
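Mutation on pattern programs like this is the standard genetic-programming move: walk the program tree, perturbing parameters and occasionally swapping operators. A rough sketch, treating a genome as a nested expression (the operator names and encoding are assumptions – the real synthesiser’s differ):

```python
import random

OPS = ["stripes", "spots", "rotate", "blend"]   # assumed geometric primitives

def random_genome(depth=2):
    """A pattern program as a nested (op, param, children) expression."""
    children = [] if depth == 0 else [random_genome(depth - 1)
                                      for _ in range(random.randint(0, 2))]
    return [random.choice(OPS), random.random(), children]

def mutate(genome, rate=0.2):
    """Perturb parameters and occasionally swap operators, recursively."""
    op, param, children = genome
    if random.random() < rate:
        op = random.choice(OPS)
    param = min(1.0, max(0.0, param + random.gauss(0, 0.1)))
    return [op, param, [mutate(c, rate) for c in children]]

print(mutate(random_genome()))
```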

Hungry birds 2 – the citizen science edition

One of the three citizen science game projects we currently have running at Foam Kernow is a commission for Mónica Arias at the Muséum national d’Histoire naturelle in Paris, who works with this research group. She needed to use the Evolving butterflies game we made last year for the Royal Society Summer exhibition to help her research in pattern evolution and recognition in predators, and make it into a citizen science game. This is a standalone game for the moment, but the source is here.


We had several issues to address with this version. Firstly, a lot more butterfly wing patterns were needed – still heliconius butterfly species, but different types. Mónica then needed to record all player actions determining toxic or edible patterns, so we added a database, which required a server (the original is an educational game that runs entirely in the browser, using webgl). She also needed to run the game in an exhibition in the museum, where internet access is problematic, so we worked on a system that could run on a Raspberry Pi and provide a self-contained wifi network (similar to Mongoose 2000). This allows her to use whatever makes sense for the museum or a specific event – multiple tablets or a PC with a touchscreen can all be connected via wifi to display the game in a normal browser, with all the data recorded on the Pi.
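The access point side of a setup like this typically comes down to hostapd for the wifi network plus dnsmasq for handing out addresses – roughly this shape of configuration, where the SSID, interface and address range are illustrative, not the project’s actual settings:

```
# /etc/hostapd/hostapd.conf - illustrative values
interface=wlan0
driver=nl80211
ssid=hungrybirds
hw_mode=g
channel=6

# /etc/dnsmasq.conf - hand out addresses to the tablets
interface=wlan0
dhcp-range=192.168.4.10,192.168.4.50,12h
```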

The other aspect of this was to provide her with an ‘admin’ page where she can control and tweak the gameplay as well as collect the data for analysis. This is important as once it’s on a Raspberry Pi I can’t change anything or support it as I could on a normal webserver – but changes can still be made on the spot in reaction to how people play. This also makes the game more useful in the longer term, as researchers can add their own butterfly patterns, change how the selection works, and use it for more experiments in the future.


Some algorithms for Wild Cricket Tales

On the Wild Cricket Tales citizen science game, one of the tricky problems is grading player-created data in terms of quality. The idea is to get people to help the research by tagging videos to measure the behaviour of the insect beasts – but we need to accept that there will be a lot of ‘noise’ in the data. How can we detect this and filter it away? It would also be great if we could detect and acknowledge players who are successful at hunting out and spotting interesting things, or people who are searching through lots of videos. As we found making the camouflage citizen science games, you don’t need much to grab people’s attention if the subject matter is interesting (which is very much the case with this project), but a high score table seems to help. We can also have one per cricket or burrow so that players can more easily see their progress – the single Egglab high score table got very difficult to feature on after a few thousand players or so.

We have two separate but related problems – acknowledging players and filtering the data – so it probably makes sense if they can be linked. A commonly used method, which we used with Egglab too (and which appears for example in Google’s reCAPTCHA, which crowdsources text digitisation as a side effect), is to compare multiple people’s results on the same video. But then we still need to bootstrap the scoring from something, and make sure we acknowledge people who are watching videos no one has seen yet, as this is also important.

Below is a simple naive scoring system that calculates a score purely from the quantity of events found on a video – we want to give points for finding some events, but beyond some limit we don’t want to reward endless clicking. It’s probably better if the score stops at zero rather than going negative as shown here, as games should never really ‘punish’ people like this!

[figure: quantity-score]
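As a sketch, a scoring curve of that shape (rewarding finds up to a sweet spot, then dropping off and clamping at zero rather than punishing) might look like this – the sweet spot and maximum are placeholder values:

```python
def quantity_score(num_events, sweet_spot=10, max_score=100):
    """Score rises with events found, falls off past the sweet spot,
    and is clamped at zero so endless clicking is never 'punished'."""
    raw = max_score - abs(num_events - sweet_spot) * (max_score / sweet_spot)
    return max(0, raw)

for n in [0, 5, 10, 20, 30]:
    print(n, quantity_score(n))   # 0, 50, 100, 0, 0
```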

Once we have a bit more data we can start to cluster events to detect if people are agreeing. This can give us some indication of the confidence of the data for a whole video, or a section of it – and it can also be used to figure out a likelihood of an individual event being valid using the sum of neighbouring events weighted by distance via a simple drop-off function.

[figure: quality-score]
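The per-event version of this could be a kernel sum over other players’ nearby events – for example (the exponential drop-off and its scale are assumptions):

```python
import math

def event_confidence(event_time, other_events, scale=2.0):
    """Sum of neighbouring events (from other players) weighted by time
    distance with an exponential drop-off: tight agreement between
    players gives an individual event high confidence."""
    return sum(math.exp(-abs(event_time - t) / scale) for t in other_events)

# One player tags second 14; three others tagged nearby, one far away.
print(event_confidence(14.0, [13.5, 14.2, 15.0, 40.0]))
```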

If we do this for all a player’s events over a single video, we can get an indication of how consistent they are with other players. We could also recursively weight this by a player’s historical scores – so ‘trusted’ players could validate new ones. This is probably a step too far at this point, but it might be an option if we pre-stock some videos with data from the researchers, who know what is important to record.