The Green Brain team is working to build a neuromimetic model of the honeybee brain and use it to control a flying robot. This venture aims to better understand the way that the honeybee brain works.
Once we understand how the bee brain functions, we can then apply this understanding in several ways: to design and develop better algorithms for guiding autonomous flying vehicles; to provide insight into how all brains work by translating the lessons we learn from the less complex bee brain to the brains of mammals; and finally, to understand the neurophysiological basis of the threats that these essential pollinators face in the modern world.
The main objectives on the way to this goal are to model the brain, build the robotic platforms, and develop the toolchains that allow the model to run in real time on GPUs.
At this stage in the project, our honeybee brain models capture the fundamental processes involved in vision and olfaction and how these relate to behaviour. The brain models are then further developed and tested using the Green Brain robot team.
To learn more about progress in the different areas of the project, click on the corresponding topic:
The Green Brain team aims to make a neurologically based model of the honeybee brain and embody it within robotic platforms. Neural systems can be modelled at various levels of detail – from the movement of ions across cell membranes to treating groups of neurons as single computational devices.
The Green Brain team models individual neurons but treats them as point neurons with no spatial extent. To simulate how these individual neurons respond to their inputs, we use the simplified Izhikevich formulation or simple leaky integrator equations for the membrane dynamics. The behaviour of the models is therefore largely determined by the way that the neurons are interconnected.
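To make the membrane dynamics concrete, here is a minimal sketch of the Izhikevich model mentioned above, simulated with Euler integration. The parameter values (a regular-spiking neuron driven by a constant current) and the function names are illustrative choices, not taken from the project's code.

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """One Euler step of the Izhikevich neuron model.

    v: membrane potential (mV); u: recovery variable; I: input current.
    Returns the updated (v, u) pair and whether the neuron spiked.
    """
    v = v + dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u = u + dt * a * (b * v - u)
    if v >= 30.0:          # spike threshold: reset v, bump the recovery term
        return c, u + d, True
    return v, u, False

# Simulate 1000 ms (2000 steps of 0.5 ms) with a constant driving current.
v, u = -65.0, -65.0 * 0.2
spike_times = []
for step in range(2000):
    v, u, spiked = izhikevich_step(v, u, I=10.0)
    if spiked:
        spike_times.append(step * 0.5)
```

With these standard regular-spiking parameters the neuron fires tonically, so `spike_times` fills with a regular train of spikes; changing `a`, `b`, `c`, and `d` reproduces other firing patterns (bursting, fast spiking, and so on), which is what makes the formulation attractive for large brain models.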
We aim to build closed-loop systems: modelling complete neural circuits from the sensory input to the motor output. Closed-loop modelling allows behavioural data from bees to be used in testing the models. This provides a richer set of data to explore how our models perform, as well as being essential for embodying our models in our robots!
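The closed-loop idea can be sketched in a few lines: a sensory reading drives a leaky-integrator neuron, the neuron's output drives a motor, and the motor changes what the sensor sees next. This toy example (a hypothetical 1-D agent homing on a light source, not the project's actual controller) shows the loop structure.

```python
def leaky_integrator(v, inp, tau=10.0, dt=1.0):
    """Leaky-integrator membrane update: dv/dt = (-v + inp) / tau."""
    return v + dt * (-v + inp) / tau

light_pos = 0.0   # position of the light source
agent_pos = 5.0   # agent starts away from the light
v = 0.0           # membrane state of a single motor neuron

for _ in range(500):
    sense = light_pos - agent_pos   # sensory input: gradient toward the light
    v = leaky_integrator(v, sense)  # neural processing
    agent_pos += 0.1 * v            # motor output moves the agent: loop closed
```

Because the motor output feeds back into the next sensory reading, the agent's behaviour can be compared directly against behavioural data, which is the point of closed-loop modelling: the model is judged on what it does, not just on its internal dynamics.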
Behavioural experiments on honeybees have a long history. One of the best tools for testing vision, olfaction, and decision-making is the Y-maze. Using the Y-maze, bees can be tested on abilities such as speed-accuracy trade-offs, positive and negative conditioning, and concept learning and transference. Among various other tests, the ultimate assessment of the Green Brain will be its completion of this task in the lab.
The Y-maze Task:
- Bees enter through the front (bottom of diagram) and are faced with two maze arms.
- At the decision point, the two stimuli subtend a fixed visual angle.
- The bee must enter the correct arm of the maze in order to receive a reward.
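The conditioning aspect of the task above can be illustrated with a simple reward-driven learning rule. This sketch (a Rescorla-Wagner-style value update with occasional exploration; the arm labels, learning rate, and exploration probability are all illustrative assumptions, not the project's model) shows how repeated rewarded trials bias an agent toward the correct maze arm.

```python
import random

random.seed(0)

values = [0.0, 0.0]   # learned value of arm 0 and arm 1
rewarded_arm = 1      # the arm whose stimulus signals a reward
alpha = 0.2           # learning rate
epsilon = 0.1         # probability of exploring the other arm

for trial in range(200):
    if random.random() < epsilon:
        choice = random.randrange(2)                 # explore
    else:
        choice = 0 if values[0] > values[1] else 1   # exploit best-known arm
    reward = 1.0 if choice == rewarded_arm else 0.0
    # Rescorla-Wagner-style update: move the value toward the outcome
    values[choice] += alpha * (reward - values[choice])
```

After training, the value of the rewarded arm dominates, so the agent reliably enters the correct arm, mirroring the positive-conditioning protocol the Y-maze is used for with real bees.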