Event Abstract

Visual processing and visual memory in ant navigation

  • 1 University of Edinburgh, School of Informatics, United Kingdom

Many insects have been shown to use visual memories for navigation. The ant Cataglyphis velox will retrace highly consistent routes between a feeder and its nest through vegetation, a capability that seems to be principally, if not exclusively, under visual control. Our aim is to understand, and reproduce on a robot, the underlying brain algorithms that account for this behaviour. We use combined experimental and modelling approaches to address the following questions:

How much do ants remember?
An insect needing to home to a single location in an open area can, at least in principle, store just one view and recover the best direction to move by comparing this stored view with the current view. However, long routes through cluttered environments would seem to require multiple memories, and possibly additional information such as image sequences. Ants also appear able to remember multiple routes. Using a virtual environment based on a real ant’s habitat, and simulating an agent recapitulating the routes of real ants, we can quantify the memory storage needed by different homing algorithms.
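The single-view strategy above can be sketched as a rotational image-difference search over a one-dimensional panoramic view. This is a minimal illustration, not the homing algorithm from our study: the panorama width, the RMS difference metric, and the column-based heading are all assumed, illustrative choices.

```python
import numpy as np

def best_heading(snapshot, current):
    """Roll the current panoramic view through every azimuthal offset
    and return the offset (in image columns) that minimises the RMS
    pixel difference from the stored snapshot. The heading at that
    offset is the agent's best guess at the remembered direction."""
    diffs = [np.sqrt(np.mean((np.roll(current, s, axis=-1) - snapshot) ** 2))
             for s in range(current.shape[-1])]
    return int(np.argmin(diffs))
```

Quantifying memory storage then amounts to asking how many such snapshots (or how much equivalent information) a given algorithm needs to reproduce a full route.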

How much experience do ants need to form this memory?
Can ants remember a route after only one traverse, or do they need repeated runs to acquire reliable visual memories? Do they gradually build up a web of route memories through incremental extensions of their excursions? We have carried out a study of route ontogeny, tracking every foraging route of individual ants from their first exit from the nest, and testing their recall of routes after limited experience. The results suggest near one-shot learning. We compare this ability of the ant to visual mapping methods used in robotics, tested in the same environment.

What do ants actually see?
To compare the capabilities of proposed algorithms, we need to understand the actual perceptual input of the ant. There has recently been more focus on using images captured in the ant’s real environment from an ant-eye perspective, and model testing often includes degradation of images to ant-eye resolution. Less often taken into account are factors such as the visual field dimensions (especially the rear blindspot), the wavelengths seen by the ant (which are not captured in conventional photography), and how polarised vision might contribute to visual memory. Also relatively neglected is the issue of motion: both the additional information that may be available from optic flow, and the amount of pitch and roll of an ant’s head relative to its surroundings in normal locomotion, which substantially changes the views it experiences and poses major problems for some proposed visual homing algorithms.
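The resolution-degradation step mentioned above can be illustrated by block-averaging a panoramic image down to a coarse angular resolution. The 4° acceptance angle below is an assumed, purely illustrative figure, not a measured property of the ant's eye:

```python
import numpy as np

def to_ant_resolution(panorama, deg_per_pixel_in, acceptance_deg=4.0):
    """Block-average a 2-D grayscale panorama down to a coarse
    'ant-eye' resolution. deg_per_pixel_in is the angular resolution
    of the input image; acceptance_deg is the (assumed) target
    angular resolution. Trailing rows/columns that do not fill a
    whole block are trimmed."""
    factor = max(1, int(round(acceptance_deg / deg_per_pixel_in)))
    h, w = panorama.shape
    h2, w2 = h - h % factor, w - w % factor
    trimmed = panorama[:h2, :w2]
    return trimmed.reshape(h2 // factor, factor,
                           w2 // factor, factor).mean(axis=(1, 3))
```

Factors such as the rear blindspot or spectral sensitivity would need additional masking and channel transformations on top of this simple spatial downsampling.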

What are the brain mechanisms of ant memory?
Most models of visual navigation simply assume the ant stores the image information it needs for future navigation. But can we relate the capacities described above to known neural circuits for learning in insect brains? Alternatively, can we extend current models of learning to deal with the continuous input and sparse reward conditions of the ant’s navigation task? We are exploring a mushroom body model originally devised to account for odour learning for its capacity to learn visual familiarity, and also considering the architecture of the central complex as an alternative navigation memory centre.
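A familiarity network in the spirit of such mushroom body models can be sketched as follows. The layer sizes, sparse winner-take-all Kenyon-cell coding, and all-or-none synaptic depression are simplifying assumptions for illustration, not the parameters of the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

class MushroomBodyFamiliarity:
    """Toy familiarity network: a visual input is expanded into a
    sparse Kenyon-cell (KC) code via a fixed random projection, and
    learning depresses the KC->output synapses of active cells, so
    previously seen views drive the output neuron less. Lower output
    therefore means 'more familiar'."""
    def __init__(self, n_input, n_kc=2000, sparseness=0.05):
        self.proj = rng.standard_normal((n_kc, n_input))  # fixed random expansion
        self.k = max(1, int(sparseness * n_kc))           # number of active KCs
        self.w = np.ones(n_kc)                            # KC -> output weights

    def _kc_code(self, view):
        act = self.proj @ view
        code = np.zeros_like(act)
        code[np.argsort(act)[-self.k:]] = 1.0             # winner-take-all sparsening
        return code

    def learn(self, view):
        self.w[self._kc_code(view) > 0] = 0.0             # depress active synapses

    def novelty(self, view):
        return float(self.w @ self._kc_code(view))
```

Because distinct views activate largely non-overlapping sparse KC codes, depressing the synapses for one view leaves the novelty signal for other views largely intact.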

Acknowledgements

This work is funded by the BBSRC and EPSRC.

Keywords: ants, navigation, visual memory, computational models, neural mechanisms

Conference: International Conference on Invertebrate Vision, Fjälkinge, Sweden, 1 Aug - 8 Aug, 2013.

Presentation Type: Oral presentation preferred

Topic: Navigation and orientation

Citation: Webb B, Mangan M and Ardin P (2019). Visual processing and visual memory in ant navigation. Front. Physiol. Conference Abstract: International Conference on Invertebrate Vision. doi: 10.3389/conf.fphys.2013.25.00118

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 01 Jun 2013; Published Online: 09 Dec 2019.

* Correspondence: Prof. Barbara Webb, University of Edinburgh, School of Informatics, Edinburgh, EH8 9AB, United Kingdom, bwebb@inf.ed.ac.uk