Vision

Vision is essential to bees: it is the primary sense they use to avoid obstacles, to find flowers to collect nectar from, and to find the way back to the hive. We are interested in how the bee uses vision to perform all of these tasks, and we have started with the simplest: how bees avoid obstacles.

Current Work

The early GB models of the visual system are those circuits that are primarily used for basic flight control. Currently, we are modelling the circuits responsible for motion detection, which in turn are used for flight control tasks such as regulating velocity and performing roll/pitch/yaw manoeuvres.
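
As a rough illustration of the kind of computation such circuits perform (a textbook sketch, not the GB circuit itself), the classic Hassenstein-Reichardt elementary motion detector correlates a delayed signal from one photoreceptor with the undelayed signal from its neighbour to produce a direction-selective motion signal:

import numpy as np

def reichardt_correlator(left, right, delay=1):
    """Minimal Hassenstein-Reichardt elementary motion detector (illustrative sketch).

    left, right : 1-D arrays of luminance over time from two neighbouring photoreceptors.
    delay       : delay (in samples) applied to one arm of each subunit.
    Returns a signed motion signal: positive for left-to-right motion, negative for right-to-left.
    """
    # Delay each receptor signal by shifting it in time (zero-padded at the start).
    left_delayed = np.concatenate([np.zeros(delay), left[:-delay]])
    right_delayed = np.concatenate([np.zeros(delay), right[:-delay]])
    # Each subunit multiplies the delayed signal from one receptor with the undelayed
    # signal of its neighbour; subtracting the mirror-image subunits gives direction selectivity.
    return left_delayed * right - right_delayed * left

# Example: a bright edge passes the left receptor two samples before the right one.
t = np.arange(50)
left = np.exp(-0.5 * ((t - 20) / 2.0) ** 2)
right = np.exp(-0.5 * ((t - 22) / 2.0) ** 2)
print(reichardt_correlator(left, right).sum())  # positive, i.e. left-to-right motion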

Bees primarily use vision to navigate through their environment. This ability starts with the relatively simple flight control task of avoiding obstacles. Experiments investigating this task have identified two basic control mechanisms that bees use: the optomotor response and the corridor centring response.

Optomotor Response Model

The optomotor response allows bees to fly in straight lines by acting to oppose rotations of the visual field. Deliberate turns by the bee occur too fast for the optomotor response to detect, and so are unaffected.

Here, we show a model of the optomotor response as it acts to oppose the rotation of a drum around the simulated bee:

[Video: the optomotor model opposing the rotation of a drum around the simulated bee]
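
As a loose sketch of what such a controller computes (a simplification for illustration, not the GB circuit), the perceived rotation of the visual field can be low-pass filtered and fed back as a yaw command, so that slow unintended rotations are cancelled while fast deliberate turns pass largely unopposed:

class OptomotorController:
    """Toy optomotor controller (illustrative sketch only, not the GB model)."""

    def __init__(self, gain=1.0, smoothing=0.1):
        self.gain = gain            # strength of the corrective turn
        self.smoothing = smoothing  # 0..1; lower values model slower motion detectors
        self.filtered = 0.0         # filtered estimate of visual-field rotation

    def step(self, field_rotation):
        """field_rotation: net angular velocity of the visual field, e.g. pooled motion-detector output."""
        # First-order low-pass filter: fast saccadic turns are too quick to register.
        self.filtered += self.smoothing * (field_rotation - self.filtered)
        # Yaw with the apparent rotation of the world, which follows a rotating
        # drum and cancels the bee's own unintended rotation.
        return self.gain * self.filtered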

Corridor Centring Model

The corridor centring response allows bees to maintain distance from obstacles and the ground and also to regulate their speed. It relies on angular velocity: the speed at which objects move across the visual field. The farther away an object is, the lower the angular velocity it produces (a wall at distance d passed at forward speed v moves across the eye at roughly v/d). Angular velocity therefore gives a measure of the distance to the obstacles the bee is moving past, and allows the bee to adjust its position to avoid getting too close to them.
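
As an illustrative sketch (not the GB model itself), these two effects can be written as a simple controller that steers away from the eye seeing the faster-moving image, and slows down when the overall angular velocity exceeds a set point:

def corridor_centring_commands(omega_left, omega_right,
                               target_omega=2.0, k_pos=1.0, k_speed=0.5):
    """Toy corridor-centring controller (illustrative sketch only, not the GB model).

    omega_left, omega_right : angular velocities seen by the left and right eyes;
    a wall at distance d passed at forward speed v produces roughly v / d.
    Returns (lateral_command, speed_command).
    """
    # Steer away from the side with the larger angular velocity (the nearer wall),
    # which balances the two eyes and centres the bee in the corridor.
    lateral_command = k_pos * (omega_left - omega_right)   # positive = move right
    # Slow down when the mean angular velocity is above the set point, so that
    # flight speed scales with the width of the corridor.
    mean_omega = 0.5 * (omega_left + omega_right)
    speed_command = k_speed * (target_omega - mean_omega)  # positive = speed up
    return lateral_command, speed_command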

Below are instructions for obtaining and running the model. This information can also be found on the Downloads page. Three experiments are included, and some set-up work is needed to configure each. The model can be run on Linux or OS X.

First, download the zip file from here, and unzip it.

Second, install Qt 5 and download the simulated environment (beeworld) from the GitHub repository. You’ll also need scipy.

Third, run Qt Creator, load the .pro file, use the default build options, and build beeworld. Copy the beeworld2 binary (on Mac you need the one inside the .app package; right-click and select ‘Show Package Contents’ to get it) and use it to replace the beeworld2 file from the zip you downloaded (that one was compiled for Mac and almost certainly won’t work on your computer).

Fourth, install SpineML_2_BRAHMS and BRAHMS as described here. Note the installation locations (on Mac they are inside the .app package; right-click and select ‘Show Package Contents’).

Fifth, the zip contains three directories beginning ‘Paper’ – these are the experiments. The cc_XXXX_model directories are the SpineML models. You now need to configure each experiment for your system: replace the SML_2_B_dir, SML_dir and Model_dir variables in run_FigX.py and analyse_FigX.py with the SpineML_2_BRAHMS, SystemML and model directories on your system, respectively (see the example after this step).
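
For example, the relevant lines in run_FigX.py might end up looking something like this (the paths below are placeholders for illustration; the variables are assumed to be plain string paths – substitute the locations on your own system):

# Example configuration only – replace with your own installation locations.
SML_2_B_dir = '/path/to/SpineML_2_BRAHMS'   # SpineML_2_BRAHMS directory
SML_dir     = '/path/to/SystemML'           # SystemML directory
Model_dir   = '/path/to/cc_XXXX_model'      # SpineML model directory for this experiment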

Sixth, run:
python run_FigX.py && python analyse_FigX.py

You will get a labelled graph of the model output when the batch run is complete.