BRIEF RESEARCH REPORT article

Front. Ecol. Evol., 17 February 2022
Sec. Social Evolution
Volume 9 - 2021 | https://doi.org/10.3389/fevo.2021.745707

Statistical Atlases and Automatic Labeling Strategies to Accelerate the Analysis of Social Insect Brain Evolution

  • 1Department of Biology, Boston University, Boston, MA, United States
  • 2Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
  • 3Departamento de Biología y Geología, Física y Química Inorgánica, Área de Biodiversidad y Conservación, Universidad Rey Juan Carlos, Madrid, Spain
  • 4Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), San Sebastian, Spain
  • 5IKERBASQUE, Basque Foundation for Science, Bilbao, Spain
  • 6Donostia International Physics Center (DIPC), San Sebastian, Spain
  • 7Institut Universitaire de France, Paris, France
  • 8Graduate Program for Neuroscience, Boston University, Boston, MA, United States

Current methods used to quantify brain size and compartmental scaling relationships in studies of social insect brain evolution involve manual annotations of images from histological samples, confocal microscopy or other sources. This process is susceptible to human bias and error and requires time-consuming effort by expert annotators. Standardized brain atlases, constructed through 3D registration and automatic segmentation, surmount these issues while increasing throughput to robustly sample diverse morphological and behavioral phenotypes. Here we design and evaluate three strategies to construct statistical brain atlases, or templates, using ants as a model taxon. The first technique creates a template by registering multiple brains of the same species. Brain regions are manually annotated on the template, and the labels are transformed back to each individual brain to obtain an automatic annotation, or to any other brain aligned with the template. The second strategy also creates a template from multiple brain images but obtains labels as a consensus from multiple manual annotations of individual brains comprising the template. The third technique is based on a template comprising brains from multiple species and the consensus of their labels. We used volume similarity as a metric to evaluate the automatic segmentation produced by each method against the inter- and intra-individual variability of human expert annotators. We found that automatic and manual methods are equivalent in volume accuracy, making the template technique an extraordinary tool to accelerate data collection and reduce human bias in the study of the evolutionary neurobiology of ants and other insects.

Introduction

Our understanding of pattern and process in brain evolution in group-living animals benefits from sampling phylogenetically diverse species. Ants and other eusocial insects (primarily wasps, bees, and termites) have become important models to explore what is broadly conceptualized as “social brain evolution” (Dunbar, 1998; Lihoreau et al., 2012, 2019; Godfrey and Gronenberg, 2019; Muratore and Traniello, 2020; Coto and Traniello, 2021). Eusocial insects have exceptional reproductive and ergonomic polyphenisms associated with division of labor and highly cooperative behavior, and thus offer multiple opportunities and a rich array of species to examine how reproductive competence, sterility, and morphological and behavioral differentiation impact social roles and neuroarchitecture. Workers show extraordinary behavior as individuals as well as members of groups that act collectively, and individuals are so interdependent that the colony is considered to be a “superorganism” (Hölldobler and Wilson, 2009). The brains of colony members have evolved to respond as individuals but also as decision-making groups to cope socially with the environment and its challenges, as well as to facilitate communication, coordinate foraging, defense, and nest construction, and regulate task performance and nestmate recognition. Important questions integrating insect sociobiology and evolutionary neurobiology concern how selection may favor either an increase or reduction in brain size and structure (Wehner et al., 2007; Muscedere and Traniello, 2012; Riveros et al., 2012; O’Donnell et al., 2018; Arganda et al., 2020; DeSilva et al., 2021).

Ant brains, like those of other insects, can be adaptive allometric mosaics composed of functionally specialized compartments with distinct allometries. Neuropils are involved in primary sensory processing (e.g., the antennal lobes, optic lobes, and subesophageal zone), motor control and navigation (the central complex and subesophageal zone), and higher-order multisensory processing and integration, learning, and memory (the mushroom bodies) (Strausfeld, 2012). Immunohistochemistry, confocal microscopy, and other techniques are commonly used to image brains, and neuropil volumes are quantified using image analysis software to examine brain structure within and across insect species. Methods to calculate neuropil volumes require allocating significant effort to manually annotate brain compartments and subregions because an anatomical label must be assigned to every pixel or voxel in 2D and 3D images, respectively (Figure 1). This way of recording neuroanatomical data is both time consuming and susceptible to human bias and error.

Figure 1. Anatomy of a P. spadonia minor brain. MB-LC (mushroom body lateral calyx), MB-MC (mushroom body medial calyx), SEZ (subesophageal zone), OL (optic lobes), AL (antennal lobes), MB-P (mushroom body peduncle), CX (central complex), and ROCB (rest of the central brain). Scale bar = 100 μm. Three brain slices were selected to show all the subregions analyzed.

Technical problems associated with imaging ant brains can be reduced by using methodologies developed to study the human brain (Talairach and Tournoux, 1988). These techniques usually combine images from multiple brains into a single reference brain or template (Figure 2). This method has been applied in studies on honey bees (e.g., Rybak, 2012), flies (e.g., Rein et al., 2002; Costa et al., 2016; Arganda-Carreras et al., 2018), and other insects (e.g., Kurylas et al., 2008; Menzel, 2012; el Jundi and Heinze, 2020). Using several brain images to build a template avoids potential biases arising during tissue fixation and imaging, accounts for the natural variability among samples, and allows a statistical representation of the brain of a species or worker phenotype. This type of template, as opposed to a reference brain derived from a single individual, is called a “group-wise template.” Because combining all samples into a single brain representation requires transforming them into the same reference space, templates allow normalizing information from brains that might have been imaged under different conditions. In addition, group-wise templates are usually associated with annotations (labels) of brain sub-compartments. These template labels can be used to automatically segment (annotate or label) the same sub-compartments in new samples by registering them against the template, that is, transforming them into the same reference space as the template (e.g., Arganda-Carreras et al., 2018). An alternative to this strategy is that of Rybak (2012), where a template brain is created in a similar way to ours, but individual brains are first labeled using a statistical shape model and then registered against the template using the label volumes instead of the gray-value ones. This approach has the advantage of a label-oriented registration, where each anatomical region can be treated independently. However, its performance may be very sensitive to the segmentation produced by the model, which must correctly capture the sometimes very large shape diversity of the dataset.

Figure 2. Automatic labeling methods. (A) Ten confocal images of brains of P. spadonia minors are combined into a single group-wise template, which is manually traced (creating “direct labels”). Each brain used to build the template (and any new brain) can be registered against the template with a transformation function T. The inverse function T–1 can be applied to the manual labels of the template to automatically label the registered brain. (B) Nine confocal images of brains of P. spadonia minors are combined into a single group-wise template. The existing manual labels of each brain are registered against the template, and every voxel is assigned to one label by majority voting (creating “consensus labels”). New brains can be registered against the template with a transformation function T. The inverse function T–1 can be applied to the consensus labels of the template to automatically label the registered brain. (C) Twelve confocal images of brains of P. spadonia, P. rhea, P. tepicana, and P. obtusospinosa (three of each species) are combined into a single multispecies group-wise template. Consensus labels are created for the template as in (B), and the same procedure is applied to automatically trace new brains. Scale bar = 100 μm. A single slice per brain is shown for clarity.

Although template strategies have been widely applied in mammals (Talairach and Tournoux, 1988; Evans et al., 1994; Mazziotta et al., 1995; Chen et al., 2006; Dogdas et al., 2007; Shattuck et al., 2008; Yu et al., 2010), their implementation in insect research has been less frequent, expanding from Drosophila (Rein et al., 2002; Jefferis et al., 2007; Cachero et al., 2010; Costa et al., 2016; Arganda-Carreras et al., 2018) to other insects only in the last decade (e.g., Menzel, 2012; Rybak, 2012; el Jundi and Heinze, 2020). The application of this methodology to ant research has occurred more slowly, probably because of the high diversity of the group (∼15,000 species). In addition, intra-specific variability is an issue in itself: workers may show substantial variation in brain anatomy within species and colonies, which constrains the image registration process needed to generate a template, as registration usually requires a minimum spatial overlap between the same sub-regions in the co-registered brain images. Another difficulty is that neuroanatomical studies of ants focus on brain subdivisions of varying detail, creating different sets of compartmental anatomical labels (e.g., Muscedere and Traniello, 2012; Amador-Vargas et al., 2015; Bressan et al., 2015; O’Donnell et al., 2018; Gordon et al., 2019; Sheehan et al., 2019; Habenstein et al., 2020). Consequently, most studies describing ant brain organization have not aimed at building brain templates (e.g., Bressan et al., 2015; Habenstein et al., 2020).

Here, we describe and evaluate experimental strategies to generate brain templates in ants to promote standardized approaches for comparative neuroanatomical analysis. While finer descriptions of neuropil sub-compartments exist for ant brains (e.g., Bressan et al., 2015; Habenstein et al., 2020) and other social Hymenoptera (e.g., Brandt et al., 2005; Rybak, 2012; Groothuis et al., 2019), we focused this first approach on major neuropils (which are commonly used to explore neuroanatomical differences among species, castes, subcastes and experimentally manipulated individuals, e.g., Kamhi et al., 2016; Seid and Junge, 2016; Gordon et al., 2017; Grob et al., 2021). We recently applied state-of-the-art imaging techniques to generate templates using brains from a single or multiple ant species (Arganda-Carreras et al., 2017; Gordon et al., 2019). Using careful annotations by trained researchers as our standard, we evaluate template-based strategies to automatically segment ant brain confocal images, allowing more efficient and less biased volumetric data acquisition. We validate the template method by evaluating its application to workers of species in the ant genus Pheidole.

Materials and Methods

We present three methods to produce and use templates for automatic segmentation (Figure 2). The first consists of building a template using confocal gray value whole-brain images of a single species and manually labeling brain compartments on the template (Figure 2A). This “direct label method” requires manually tracing a single anatomy (that of the template); other gray value brain images are then traced automatically by registration against the labeled template. The second, the “consensus label method,” also uses a single-species template, but the gray value brain images used to build the template already carry manually annotated labels (Figure 2B). These manual labels are then combined to create the final template labels. Because this method draws on more than a single (potentially biased) tracing, it may be more accurate than the first, at the expense of requiring more manual work; in this case, the method is only useful for tracing new brains. The third possibility, the “multispecies template method,” is similar to the second but uses gray value brain images from several species (Figure 2C), thus enabling the expansion of species sampling. We next describe the ant brain dataset used, the methods to generate the different templates and labels, and how we evaluated the efficacy of the different methods.

Brain Anatomy Dataset

We imaged brains of minor workers of four species of the hyperdiverse ant genus Pheidole (P. spadonia, P. rhea, P. tepicana, and P. obtusospinosa). Pheidole is typically characterized by complete dimorphism in the worker caste (small minor and large major [soldier] workers) and, in some basal species such as P. rhea, by trimorphism that includes a third, larger worker subcaste (super soldiers); we used only minors as a proof of concept. Minors and majors are easily discriminated by body size and head allometry (Wilson, 2003).

Minor workers were decapitated and their brains were dissected from the head capsule in ice-cold HEPES-buffered saline. Brains were fixed and immunohistochemically stained using SYNORF1 (a monoclonal Drosophila synapsin I antibody obtained from the Developmental Studies Hybridoma Bank, catalog 3C11) and secondarily stained using Alexa Fluor 488 for visualization of neuropil (slightly modified from Ott, 2008). Mounted in methyl salicylate, brains were imaged on an Olympus Fluoview BX50 laser (488 nm) scanning confocal microscope with a 20× air objective (NA = 0.5) at a resolution of ∼0.7 × 0.7 × 5 μm/voxel, producing 16-bit grayscale images (TIFF format). We imaged 10 brains of P. spadonia minor workers, and three each of P. rhea, P. tepicana, and P. obtusospinosa minor workers. Each brain image was manually labeled as described in the “Manual labeling of original brain images and template” section below.

Standard Brain Image Method: Image Registration and Template Generation

Templates were built in a diffeomorphic space1 as an average-shape brain (“Template,” Figure 2). The diffeomorphic space allows for smooth, invertible transformations from one anatomy to another (T1 and T–1, Figure 2). Our methodology is based on a two-step approach using symmetric diffeomorphic image registration2 (SyN, Avants et al., 2008) of a group of gray value brain images to one another, maximizing mutual information3 first and cross-correlation4 later. Following this optimization process, the images are warped into the same coordinate system. In the first step, all gray value brain images are registered against one randomly selected image by optimizing mutual information and allowing only affine transformations (translations and proportional changes in size). Transformed images are then averaged to build a preliminary “blurry” reference brain image. In the second step, the original gray value brain images are registered to this blurry average using non-rigid transformations (i.e., allowing local deformations) by maximizing the cross-correlation of the intensities of all brains. In this step, the registration is gradually refined at four resolution levels (sequentially at ⅛, ¼, ½, and the full original size, following a resolution pyramid strategy), producing an optimal average template. The first registration compensates for large disparities in size, while the second finds a locally optimal solution. The template was generated as the normalized voxel-wise median of the co-registered volumes (Arganda-Carreras et al., 2017). All steps were implemented in the Advanced Normalization Tools (ANTs) software (Avants et al., 2011) after converting gray and label images to the open NRRD format in Fiji (Schindelin et al., 2012). For a detailed description of the software methods used in this paper, we refer the reader to the Supplementary Material.
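
To make the pipeline concrete, the following minimal sketch reproduces the two-step scheme with ANTsPy, the Python interface to ANTs; the study itself used the ANTs command-line tools detailed in the Supplementary Material, so the file names, metric identifiers, and single-pass averaging below are illustrative assumptions rather than the published configuration.

```python
import ants
import numpy as np

# Hypothetical file names for the gray-value brain images (NRRD format).
brain_paths = [f"brain_{i:02d}.nrrd" for i in range(10)]
brains = [ants.image_read(p) for p in brain_paths]

# Step 1: affine registration of every brain to one arbitrarily chosen brain,
# optimizing mutual information (Mattes), then voxel-wise averaging to obtain
# a preliminary "blurry" reference image.
reference = brains[0]
affine_warped = [
    ants.registration(fixed=reference, moving=b,
                      type_of_transform="Affine",
                      aff_metric="mattes")["warpedmovout"]
    for b in brains
]
blurry = reference.new_image_like(
    np.mean([img.numpy() for img in affine_warped], axis=0).astype(np.float32))

# Step 2: non-rigid SyN registration of the original brains to the blurry
# average, optimizing cross-correlation (ANTs applies its own coarse-to-fine
# resolution pyramid internally).
syn_warped = [
    ants.registration(fixed=blurry, moving=b,
                      type_of_transform="SyN",
                      syn_metric="CC")["warpedmovout"]
    for b in brains
]

# Template as the voxel-wise median of the co-registered volumes.
template = blurry.new_image_like(
    np.median([img.numpy() for img in syn_warped], axis=0).astype(np.float32))
ants.image_write(template, "template.nrrd")
```

ANTsPy also provides a build_template helper that iterates registration and averaging; the two explicit passes above are kept only to mirror the description in the text.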

Seven group-wise templates were generated for this study (Supplementary Table 1) from 9 (“consensus label method”), 10 (“direct label method”), or 12 (“multispecies template method”) original gray value brain images. Six of them were single-species templates, built using only P. spadonia minor gray value brain images (“direct/consensus label methods”). One was a hybrid template, generated from brains of P. spadonia, P. rhea, P. tepicana, and P. obtusospinosa minors (“multispecies template method”), with three brains per species. Templates were also associated with anatomical brain labels obtained either by manual or consensus labeling (see below).

Neuropil Labeling

Manual Labeling of Original Brain Images and Template

For each original gray value brain, an expert annotator determined the region occupied by each brain compartment by labeling them manually using Amira (version 6.0 or 2019.2). Labels were traced for eight compartments (as in Muscedere and Traniello, 2012; Gordon et al., 2017): the optic lobes (OL, comprising lobula, medulla, lamina, and connecting fibers), antennal lobes (AL, comprising glomeruli and central hub), mushroom-body medial calyx (MB-MC), mushroom-body lateral calyx (MB-LC), mushroom-body peduncle (MB-P), central complex (CX, comprising the lower and upper divisions of the central body, the protocerebral bridge, and the noduli), subesophageal zone (SEZ), and rest of the central brain (ROCB). This manual tracing was performed in only one brain hemisphere, except for the CX, SEZ, and ROCB, which lack a clear subdivision between hemispheres. A trained annotator requires approximately 1 h to label a brain hemisphere. Figure 1 shows three confocal scans of a P. spadonia brain. Studies aiming to analyze differences between the right and left sides of the brain would, however, require fully traced brains.

A single dataset of manual labels for the template generated for the “direct label method” was obtained using the same methodology described above.

Consensus Labeling of Templates

One method to obtain regional labels for the group-wise template combines the information provided by the manual labels of the original brains used to build it (these labels also needed to be converted to the NRRD format). The first step consists of applying to each label image the same diffeomorphic transformations performed on its original brain anatomy (T1, Figures 2B,C), followed by a per-voxel majority vote over all deformed label images of the same brain compartment to produce “consensus labels.” Since not all the samples of our original dataset contained labels of the same hemisphere, we used Fiji’s “Flip horizontally” tool (Schindelin et al., 2012) when needed to create mirror images of brain anatomies and their manual labels, so that all samples carried right-hemisphere labels.
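
A minimal sketch of this consensus step, again with ANTsPy and hypothetical file names, is shown below: each brain’s manual labels are pushed into template space with the transform obtained when that brain is registered to the template (here a single SyN call, which internally includes an affine initialization), and a per-voxel majority vote is then taken over the warped label images.

```python
import ants
import numpy as np

template = ants.image_read("template.nrrd")              # hypothetical file names
brain_paths = [f"brain_{i:02d}.nrrd" for i in range(9)]
label_paths = [f"labels_{i:02d}.nrrd" for i in range(9)]

warped_labels = []
for brain_path, label_path in zip(brain_paths, label_paths):
    brain = ants.image_read(brain_path)
    labels = ants.image_read(label_path)
    # T1: register the gray-value brain to the template...
    reg = ants.registration(fixed=template, moving=brain, type_of_transform="SyN")
    # ...and push its manual labels through the same transform; nearest-neighbor
    # interpolation keeps label values integer.
    warped = ants.apply_transforms(fixed=template, moving=labels,
                                   transformlist=reg["fwdtransforms"],
                                   interpolator="nearestNeighbor")
    warped_labels.append(warped.numpy().astype(np.int32))

# Per-voxel majority vote over all deformed label images (0 = background).
stack = np.stack(warped_labels)
counts = np.stack([(stack == lab).sum(axis=0, dtype=np.int16)
                   for lab in range(int(stack.max()) + 1)])
consensus = counts.argmax(axis=0).astype(np.float32)  # ties go to the lower label ID

ants.image_write(template.new_image_like(consensus), "consensus_labels.nrrd")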

Automatic Labeling of Original Brain Images

To automatically label gray value brain images, individual brain images were registered against a group-wise template using the same two-step method described above: an initial affine registration maximizing mutual information followed by a non-rigid registration optimizing cross-correlation. The inverse transformations (T–1, Figure 2) were then applied to the template regional labels (regardless of the method chosen to generate them), automatically producing labels for each individual gray value brain image registered against the template. To avoid always tracing the same side and to prevent bias due to natural brain asymmetries, a proportion of the gray value brain images to be traced can be flipped.
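
The automatic labeling of a new brain can be sketched as follows (ANTsPy, hypothetical file names): registering the new brain against the template yields both the forward transform T and its inverse, and the inverse is applied to the template labels with nearest-neighbor interpolation so that label values remain integers.

```python
import ants

# Hypothetical file names.
template = ants.image_read("template.nrrd")
template_labels = ants.image_read("consensus_labels.nrrd")   # or direct labels
new_brain = ants.image_read("new_brain.nrrd")

# T: affine + SyN registration of the new brain against the template.
reg = ants.registration(fixed=template, moving=new_brain,
                        type_of_transform="SyN", syn_metric="CC")

# T^-1: bring the template labels into the new brain's native space.
auto_labels = ants.apply_transforms(fixed=new_brain, moving=template_labels,
                                    transformlist=reg["invtransforms"],
                                    interpolator="nearestNeighbor")
ants.image_write(auto_labels, "new_brain_auto_labels.nrrd")
```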

Five P. spadonia gray value brain images were automatically traced using the three methods described above. Note that for the “direct label method,” these five gray value brain images were also used to build the template, whereas for the other two methods, which use consensus labels, these five brains were left out of the templates. This is because the consensus labels integrate the information from the manual labels of the brain anatomies used for the template: on the one hand, it would be unnecessary to relabel those brains, and on the other, the original manual labels and the automatically obtained labels would be essentially the same, compromising the objectivity of the evaluation.

Evaluation of Approaches

Because automatic and manual labels are expected to produce slightly different results, we needed to determine whether these differences were acceptable. To do so, we compared the differences between automatic and manual labels with the differences between manual labels generated by several expert annotators (“Inter-Person”) and by the same annotator (“Intra-Person”) tracing the same gray value brain image more than once (Supplementary Table 3). Three annotators (each with at least 2 years of experience tracing brains) traced the same five brains (to measure inter-personal differences, “Inter-Person”), and one of them traced the same five brains three times (to measure intra-personal differences, “Intra-Person”). The three expert annotators also traced the single-species (P. spadonia) template for the “direct label method.” As explained for consensus label creation, when manual labels were on the left side, the gray value brain anatomy and its labels were flipped to the right side.

Because many comparative neuroanatomical studies use volumetric data as a measure of neuropil investment (Wehner et al., 2007; Muscedere and Traniello, 2012; Riveros et al., 2012; O’Donnell et al., 2018; Arganda et al., 2020), volume similarity (Eq. 1) was used as the relevant metric for evaluating the automatic labeling methods. It was calculated for each label and brain, as well as for the total brain volume, using volumes estimated with the open-source toolbox MorphoLibJ (Legland et al., 2016; see Supplementary Table 2).

\[
\text{Volume similarity} = \frac{2 \times \left| \text{Volume}_{\text{label, method 1}} - \text{Volume}_{\text{label, method 2}} \right|}{\text{Volume}_{\text{label, method 1}} + \text{Volume}_{\text{label, method 2}}} \tag{1}
\]

Volume similarity between labels of the same compartment obtained by different methods was calculated within the same gray value brain image and, for comparisons between automatic and manual labels, always pairing labels derived from the same original annotator (e.g., OL volume obtained by the “multispecies template” strategy using consensus labels built from manual labels by annotator 1 vs. OL volume obtained from the manual labels of annotator 1; see Supplementary Table 3).
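
As an illustration of the metric, the sketch below computes per-label volumes directly from two label images of the same brain and evaluates Eq. 1; the study itself measured volumes with MorphoLibJ in Fiji, so the voxel-counting shortcut, the file names, and the assumption that background voxels carry label 0 are ours.

```python
import ants
import numpy as np

def label_volumes(path):
    """Return a dict mapping label ID to volume (voxel count x voxel volume)."""
    img = ants.image_read(path)
    voxel_volume = float(np.prod(img.spacing))   # e.g., ~0.7 x 0.7 x 5 um^3
    data = img.numpy().astype(np.int32)
    ids, counts = np.unique(data[data > 0], return_counts=True)  # 0 = background
    return {int(i): float(c) * voxel_volume for i, c in zip(ids, counts)}

def volume_similarity(v1, v2):
    """Eq. 1: normalized absolute volume difference between two annotations."""
    return 2.0 * abs(v1 - v2) / (v1 + v2)

# Hypothetical file names: a manual and an automatic annotation of one brain.
manual = label_volumes("brain_01_manual_labels.nrrd")
auto = label_volumes("brain_01_auto_labels.nrrd")
for label_id in sorted(manual):
    print(label_id, volume_similarity(manual[label_id], auto[label_id]))
```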

Statistical Analysis

We used bootstrapping to perform statistical analyses (Efron and Tibshirani, 1994). This method has the advantage of making no assumptions about the distributions underlying the data and of being able to handle datasets in which data points are not fully independent, as is the case here for different measurements performed on the same brain. To make pairwise comparisons between the volume similarity measurements of one brain compartment provided by two methods, we first selected one brain at random and pooled all volume similarity measurements for that compartment from the control and the method. From this pool, we drew measurements randomly with replacement, creating two randomized sets of measurements with the same sizes as the originals. We then selected a new brain at random with replacement (the same brain could be selected several times) and repeated the procedure five times, because our evaluation dataset comprises a total of five brains. We thus obtained a randomized dataset with the same statistical structure as the original, but in which the measurements of the two groups came from the same distribution. We then computed the difference between the means of the measurements of the two groups, d_rand. We repeated this procedure 10,000 times, obtaining a distribution for d_rand. This distribution is centered at 0 by construction, and its width represents the differences between method and control that we could expect by chance if both belonged to the same distribution. Finally, we computed the difference between each method and the control from the actual dataset and defined our p-value as the proportion of d_rand values greater than this observed difference. We set the significance level at p < 0.05.
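
The brain-level resampling described above can be sketched as follows; the data layout (a mapping from brain ID to the list of volume-similarity values for one compartment) and the one-sided definition of the p-value follow our reading of the text and are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_p(method, control, n_boot=10_000):
    """Compare two groups of volume-similarity values for one compartment.

    `method` and `control` map each brain ID to a list of measurements.
    """
    brains = list(method.keys())
    observed = (np.mean([v for vals in method.values() for v in vals])
                - np.mean([v for vals in control.values() for v in vals]))
    d_rand = np.empty(n_boot)
    for i in range(n_boot):
        rand_method, rand_control = [], []
        # Resample brains with replacement; for each drawn brain, pool the two
        # groups and redraw sets of the original sizes, so both randomized
        # groups come from a common distribution.
        for b in rng.choice(brains, size=len(brains), replace=True):
            pool = np.array(method[b] + control[b])
            rand_method.extend(rng.choice(pool, size=len(method[b]), replace=True))
            rand_control.extend(rng.choice(pool, size=len(control[b]), replace=True))
        d_rand[i] = np.mean(rand_method) - np.mean(rand_control)
    # Proportion of chance differences exceeding the observed difference
    # (one-sided, as described in the text).
    return float(np.mean(d_rand > observed))

# Hypothetical usage: volume-similarity values per brain for one compartment.
# p = bootstrap_p({"brain1": [0.05, 0.06], "brain2": [0.04]},
#                 {"brain1": [0.09, 0.10], "brain2": [0.11]})
```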

Results

We compared the variability (measured as volume similarity) between automatic methods (“Direct labels,” “Consensus labels,” and “Multispecies template”) and expert annotators to the variability among (“Inter-Person”) and within (“Intra-Person”) annotators (Figure 3 and Supplementary Table 4). This allows us to determine whether the differences between automatic and manual labels are comparable to those produced by expert annotators, which we accept as inevitable error. Independently of the comparisons between automatic and manual methods, our results showed that the inter- and intra-individual differences can be considerable, reaching ca. 10% and even higher in compartments such as the AL and the CX (Figure 3).

Figure 3. Variability between annotations for brain compartments and the whole brain. Variation (measured as volume similarity) is given for (A) the optic lobe (OL), (B) antennal lobe (AL), (C) mushroom body medial calyx (MB-MC), (D) mushroom body lateral calyx (MB-LC), (E) mushroom body peduncle (MB-P), (F) central complex (CX), (G) subesophageal zone (SEZ), (H) rest of the central brain (ROCB), and (I) the whole brain. Statistical comparisons use bootstrapping tests comparing the volume differences found between the manual and the automatic labels (“Direct labels,” “Cons. Labels,” and “Multisp. temp.”) and between individuals (“Inter-Person”) or within the same individual (“Intra-Person”). “*” indicates p-values smaller than 0.05.

In general, we found that, regardless of method, differences between automatic and manual labels were similar to inter- and intra-individual tracing variability, and in some cases actually smaller. This indicates that the automatic methods were at least as consistent as having different annotators, or the same annotator, repeat the labels. Compared with the inter-person variability for the same compartments, the variability of the “direct label method” was 5% smaller in the OL (Figure 3A, p-value = 0.028, Supplementary Table 4), 11% smaller in the AL (Figure 3B, p-value = 0.025, Supplementary Table 4), and 5% smaller in the MB-P (Figure 3E, p-value = 0.01, Supplementary Table 4).

The variability of the “multispecies template method” was 4% larger than the inter-person variability only for the ROCB (Figure 3H, p = 0.025, Supplementary Table 4). When comparing the automatic methods with the intra-individual variability, larger variabilities of the automatic methods were found for the MB-MC, in which the variability of the “direct label method” was 5% larger (Figure 3C, p = 0.026, Supplementary Table 4), and for the ROCB, in which the variability of the “direct label method” was 2% larger (Figure 3H, p-value = 0.004, Supplementary Table 4) and the variability of the “multispecies template method” was 6% larger (Figure 3H, p-value = 0.015, Supplementary Table 4). A marginally significant difference (8% smaller, Supplementary Table 4) was found when comparing the “consensus label method” with the intra-person variability.

The “consensus label method” produced variabilities similar to those among and within annotators for all compartments. Some differences were marginally significant (Supplementary Table 4) in comparison to the variability among annotators (5% smaller in the OL, 4% larger in the MB-LC, and 6% smaller in the CX) and within the same annotator (4% larger in the MB-LC and 3% larger in the SEZ, the ROCB and for the whole brain).

Discussion

Statistical templates serve as representative neuroanatomies that integrate variation in brain structure across samples. When associated with neuroanatomical labels, they are a valuable tool to automatically and efficiently segment compartments in similar brains that have not been previously traced. With these annotations we can calculate descriptive metrics, such as brain compartment volumes, that are useful for understanding differential investment in brain centers and their associated neural functions in behavior.

We presented and evaluated three methods to determine whether their results are comparable to manual annotations. To do so, we compared volumetric differences between automatic and manual labels to volumetric differences due to inter- and intra-individual variability of annotators. We found that automatic segmentation produced satisfactory results. Our three automatic methods produced compartmental volumetric data similar to those obtained via manual annotations by different annotators or by the same annotator repeatedly tracing the same brain. In some cases, we found that the variability between automatic and manual data was even smaller than the inter-person variability. Only for one center evaluated (ROCB) did the “multispecies template method” produce a variability 2% larger than the inter-person one. This error level might be acceptable considering the benefits of automation and the reduction in human bias. We expected to find more differences in comparison with intra-person variability. Surprisingly, only for two neuropils (MB-MC and ROCB) did the “direct label method” and the “multispecies template method” produce larger differences (2–6% larger) between automatic and manual data than intra-person variability.

Standardized average brain atlases (group-wise templates) are increasingly applied in insects (Rein et al., 2002; Brandt et al., 2005; El Jundi et al., 2009; Kvello et al., 2009; Rybak et al., 2010; Peng et al., 2011; Menzel, 2012; Rybak, 2012; Costa et al., 2016; Arganda-Carreras et al., 2017, 2018; Gordon et al., 2019; Groothuis et al., 2019; el Jundi and Heinze, 2020) to efficiently and accurately collect the data required to test hypotheses of brain evolution and to facilitate the establishment of connectomes. They allow, for example, the registration of multiple marked neurons into standard anatomies to determine their spatial relationships and possible inclusion in common neuronal circuits (e.g., Brandt et al., 2005; Peng et al., 2011). Annotated atlases also provide information on the shape and size of the different brain compartments for intra- and interspecific comparisons (e.g., Rein et al., 2002; Heinze et al., 2013; De Vries et al., 2017), and for generating and testing hypotheses on the importance of particular modalities of sensory processing in insect behavior, ecology, sociobiology, and life history.

In ants, most brain studies present 3D models based on representative individuals (e.g., Bressan et al., 2015; Habenstein et al., 2020) rather than standardized brain atlases. Aside from accounting for interindividual variability and reducing the possible bias of a single representative, the use of group-wise templates allows the rapid and accurate collection of volumetric neuroanatomical data. To our knowledge, we were the first to generate group-wise templates and consensus labels in ants to automatically trace similar brains (Arganda-Carreras et al., 2017). In the present work, we also presented for the first time a multispecies template. In another study, we used manually traced group-wise templates to reduce the time needed to trace 60 brains of three different brain phenotypes of the polymorphic turtle ant Cephalotes varians (Gordon et al., 2019). Here we validate these different methods using the variability of human annotations as the “gold standard.” All the methods presented reduce the time required to manually trace each brain and help decrease potential errors of multiple annotators, either by allocating a single annotator to a large dataset or by combining labels that integrate variability between samples. Group-wise templates also ensure blind annotation of samples whose origins (for example, different treatments or species) are known to the annotator, thus minimizing biases. For this purpose, we plan to build single templates for polymorphic species in future studies. Each strategy might be more suitable for some research questions than others; for example, the “direct label method” is recommendable for blind studies comparing individuals under different treatments, the “consensus label method” might provide robust reference anatomical atlases that account for interindividual variability, and the “multispecies template method” can make evolutionary and comparative studies requiring large datasets from multiple species more robust. While our methods have been evaluated using descriptions of major neuropils, testing them on finer neuropil sub-structures will be a logical next step that will broaden their applicability. Regardless of the neuroanatomical scale, the use of templates to accurately and rapidly collect volumetric neuroanatomical data, combined with sociobiological, socioecological, phylogenetic, metabolic, or neurochemical analyses, can help elucidate macroevolutionary and microevolutionary patterns of brain evolution. This will allow a better understanding of encephalization and allometric scaling in relation to the behavioral ecology and sociobiology of individual workers, and of the impact of emergent colony-level processes on the brain.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

Author Contributions

DG prepared the brains for imaging, registered images, and manually labeled all brain images and the templates. AH manually labeled a sample of brain images and the templates. SA manually labeled a sample of brain images and the templates and prepared all brain images for template creation. IA-C implemented the software to create the templates and to automatically label brain subregions. SA and AP-E designed the methodology to evaluate automatic labeling and performed the statistical analysis. SA, IA-C, and JT conceptualized and designed the study and wrote the first draft of the manuscript. SA, IA-C, MG, and JT secured funding. All authors edited and approved the final content of the manuscript.

Funding

This research was supported by the National Science Foundation grants IOS 1354291 and IOS 1953393 to JT, a Marie Sklodowska-Curie Individual Fellowship BrainiAnts-660976 and Ayudas destinadas a la atracción de talento investigador a la Comunidad de Madrid en centros de I+D to SA, the University of the Basque Country UPV/EHU grant GIU19/027 to IA-C, and by the Institut Universitaire de France to MG.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

We thank Dr. Ming Huang for kindly supplying us with colonies of Pheidole spadonia, P. rhea, P. tepicana, and P. obtusospinosa.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fevo.2021.745707/full#supplementary-material

Footnotes

  1. ^ Diffeomorphism: differentiable transform that allows mapping the coordinates of one image onto the coordinates of another image in a smooth and invertible way.
  2. ^ Image registration: process of transforming one image (usually known as moving image) into the coordinate system of another image (usually known as fixed image).
  3. ^ Mutual information: metric taken from information theory and used in image registration to measure the amount of information that one image contains about another image. It is maximal when both images are perfectly aligned.
  4. ^ Cross-correlation: metric of the similarity of two images as a function of the displacement of one with respect to the other.

References

Amador-Vargas, S., Gronenberg, W., Wcislo, W. T., and Mueller, U. (2015). Specialization and group size: brain and behavioural correlates of colony size in ants lacking morphological castes. Proc. R. Soc. B Biol. Sci. 282:20142502. doi: 10.1098/rspb.2014.2502

Arganda, S., Hoadley, A. P., Razdan, E. S., Muratore, I. B., and Traniello, J. F. A. (2020). The neuroplasticity of division of labor: worker polymorphism, compound eye structure and brain organization in the leafcutter ant Atta cephalotes. J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 206, 651–662. doi: 10.1007/s00359-020-01423-9

Arganda-Carreras, I., Gordon, D. G., Arganda, S., Beaudoin, M., and Traniello, J. F. A. (2017). “Group-wise 3D registration based templates to study the evolution of ant worker neuroanatomy,” in Proceedings of the International Symposium on Biomedical Imaging, (Melbourne, VIC), 429–432. doi: 10.1109/ISBI.2017.7950553

Arganda-Carreras, I., Manoliu, T., Mazuras, N., Schulze, F., Iglesias, J. E., Bühler, K., et al. (2018). A statistically representative atlas for mapping neuronal circuits in the Drosophila adult brain. Front. Neuroinform. 12:13. doi: 10.3389/fninf.2018.00013

Avants, B. B., Epstein, C. L., Grossman, M., and Gee, J. C. (2008). Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. Med. Image Anal. 12, 26–41. doi: 10.1016/j.media.2007.06.004

Avants, B. B., Tustison, N. J., Song, G., Cook, P. A., Klein, A., and Gee, J. C. (2011). A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage 54, 2033–2044. doi: 10.1016/j.neuroimage.2010.09.025

Brandt, R., Rohlfing, T., Rybak, J., Krofczik, S., Maye, A., Westerhoff, M., et al. (2005). Three-dimensional average-shape atlas of the honeybee brain and its applications. J. Comp. Neurol. 492, 1–19. doi: 10.1002/cne.20644

Bressan, J. M. A., Benz, M., Oettler, J., Heinze, J., Hartenstein, V., and Sprecher, S. G. (2015). A map of brain neuropils and fiber systems in the ant Cardiocondyla obscurior. Front. Neuroanat. 8:166. doi: 10.3389/fnana.2014.00166

Cachero, S., Ostrovsky, A. D., Yu, J. Y., Dickson, B. J., and Jefferis, G. S. X. E. (2010). Sexual dimorphism in the fly brain. Curr. Biol. 20, 1589–1601. doi: 10.1016/j.cub.2010.07.045

Chen, X. J., Kovacevic, N., Lobaugh, N. J., Sled, J. G., Henkelman, R. M., and Henderson, J. T. (2006). Neuroanatomical differences between mouse strains as shown by high-resolution 3D MRI. Neuroimage 29, 99–105. doi: 10.1016/j.neuroimage.2005.07.008

Costa, M., Manton, J. D., Ostrovsky, A. D., Prohaska, S., and Jefferis, G. S. X. E. (2016). NBLAST: rapid, sensitive comparison of neuronal structure and construction of neuron family databases. Neuron 91, 293–311. doi: 10.1016/j.neuron.2016.06.012

Coto, Z. N., and Traniello, J. F. A. (2021). Brain Size, metabolism, and social evolution. Front. Physiol. 12:612865. doi: 10.3389/fphys.2021.612865

De Vries, L., Pfeiffer, K., Trebels, B., Adden, A. K., Green, K., Warrant, E., et al. (2017). Comparison of navigation-related brain regions in migratory versus non-migratory noctuid moths. Front. Behav. Neurosci. 11:158. doi: 10.3389/fnbeh.2017.00158

DeSilva, J. M., Traniello, J. F. A., Claxton, A. G., and Fannin, L. D. (2021). When and why did human brains decrease in size? A new change-point analysis and insights from brain evolution in ants. Front. Ecol. Evol. 9:742639. doi: 10.3389/fevo.2021.742639

Dogdas, B., Stout, D., Chatziioannou, A. F., and Leahy, R. M. (2007). Digimouse: a 3D whole body mouse atlas from CT and cryosection data. Phys. Med. Biol. 52:577. doi: 10.1088/0031-9155/52/3/003

Dunbar, R. I. M. (1998). The social brain hypothesis. Evol. Anthropol. Issues News Rev. 6, 178–190.

Efron, B., and Tibshirani, R. J. (1994). An Introduction to the Bootstrap. Boca Raton, FL: CRC Press.

el Jundi, B., and Heinze, S. (2020). “Three-dimensional atlases of insect brains,” in Neurohistology and Imaging Techniques. Neuromethods, Vol. 153, eds R. Pelc, W. Walz, and J. R. Doucette (New York, NY: Humana), doi: 10.1007/978-1-0716-0428-1_3

El Jundi, B., Huetteroth, W., Kurylas, A. E., and Schachtner, J. (2009). Anisometric brain dimorphism revisited: implementation of a volumetric 3D standard brain in Manduca sexta. J. Comp. Neurol. 517, 210–225. doi: 10.1002/cne.22150

Evans, A. C., Collins, D. L., Mills, S. R., Brown, E. D., Kelly, R. L., and Peters, T. M. (1994). “3D statistical neuroanatomical models from 305 MRI volumes,” in Proceedings of the IEEE Nuclear Science Symposium & Medical Imaging Conference, (San Francisco, CA), 1813–1817. doi: 10.1109/nssmic.1993.373602

Godfrey, R. K., and Gronenberg, W. (2019). Brain evolution in social insects: advocating for the comparative approach. J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 205, 13–32. doi: 10.1007/s00359-019-01315-7

Gordon, D. G., Zelaya, A., Arganda-Carreras, I., Arganda, S., and Traniello, J. F. A. (2019). Division of labor and brain evolution in insect societies: neurobiology of extreme specialization in the turtle ant Cephalotes varians. PLoS One 14:e0213618. doi: 10.1371/journal.pone.0213618

Gordon, D. G. D. G., Ilieş, I., and Traniello, J. F. A. (2017). Behavior, brain, and morphology in a complex insect society: trait integration and social evolution in the exceptionally polymorphic ant Pheidole rhea. Behav. Ecol. Sociobiol. 71:166. doi: 10.1007/s00265-017-2396-z

Grob, R., Heinig, N., Grübel, K., Rössler, W., and Fleischmann, P. N. (2021). Sex-specific and caste-specific brain adaptations related to spatial orientation in Cataglyphis ants. J. Comp. Neurol. 529, 3882–3892. doi: 10.1002/cne.25221

Groothuis, J., Pfeiffer, K., el Jundi, B., and Smid, H. M. (2019). The jewel wasp standard brain: average shape atlas and morphology of the female Nasonia vitripennis brain. Arthropod Struct. Dev. 51, 41–51. doi: 10.1016/j.asd.2019.100878

Habenstein, J., Amini, E., Grübel, K., Jundi, B., and Rössler, W. (2020). The brain of Cataglyphis ants: neuronal organization and visual projections. J. Comp. Neurol. 528, 3479–3506. doi: 10.1111/cne.24934

Heinze, S., Florman, J., Asokaraj, S., El Jundi, B., and Reppert, S. M. (2013). Anatomical basis of sun compass navigation II: the neuronal composition of the central complex of the monarch butterfly. J. Comp. Neurol. 521, 267–298. doi: 10.1002/cne.23214

Hölldobler, B., and Wilson, E. O. (2009). The Superorganism: the beauty, elegance, and strangeness of insect societies. Nature 456:544. doi: 10.1038/456320a

Jefferis, G. S. X. E., Potter, C. J., Chan, A. M., Marin, E. C., Rohlfing, T., Maurer, C. R., et al. (2007). Comprehensive maps of Drosophila higher olfactory centers: spatially segregated fruit and pheromone representation. Cell 128, 1187–1203. doi: 10.1016/j.cell.2007.01.040

Kamhi, J. F., Gronenberg, W., Robson, S. K. A., and Traniello, J. F. A. (2016). Social complexity influences brain investment and neural operation costs in ants. Proc. R. Soc. B Biol. Sci. 283:20161949. doi: 10.1098/rspb.2016.1949

Kurylas, A. E., Rohlfing, T., Krofczik, S., Jenett, A., and Homberg, U. (2008). Standardized atlas of the brain of the desert locust, Schistocerca gregaria. Cell Tissue Res. 333, 125–145. doi: 10.1007/s00441-008-0620-x

Kvello, P., Løfaldli, B. B., Rybak, J., Menzel, R., and Mustaparta, H. (2009). Digital, three-dimensional average shaped atlas of the Heliothis virescens brain with integrated gustatory and olfactory neurons. Front. Syst. Neurosci. 3:14. doi: 10.3389/neuro.06.014.2009

Legland, D., Arganda-Carreras, I., and Andrey, P. (2016). MorphoLibJ: integrated library and plugins for mathematical morphology with ImageJ. Bioinformatics 32, 3532–3534. doi: 10.1093/bioinformatics/btw413

Lihoreau, M., Dubois, T., Gomez-Moracho, T., Kraus, S., Monchanin, C., and Pasquaretta, C. (2019). Putting the ecology back into insect cognition research. Adv. Insect Phys. 57, 1–25. doi: 10.1016/bs.aiip.2019.08.002

Lihoreau, M., Latty, T., and Chittka, L. (2012). An exploration of the social brain hypothesis in insects. Front. Physiol. 3:442. doi: 10.3389/fphys.2012.00439

Mazziotta, J. C., Toga, A. W., Evans, A., Fox, P., and Lancaster, J. (1995). A probabilistic atlas of the human brain: theory and rationale for its development. Neuroimage 2, 89–101. doi: 10.1006/nimg.1995.1012

Menzel, R. (2012). Introduction to the research topic on standard brain atlases. Front. Syst. Neurosci. 6:24. doi: 10.3389/fnsys.2012.00024

Muratore, I. B., and Traniello, J. F. A. (2020). Fungus-growing ants: models for the integrative analysis of cognition and brain evolution. Front. Behav. Neurosci. 14:599234. doi: 10.3389/fnbeh.2020.599234

Muscedere, M. L., and Traniello, J. F. A. (2012). Division of labor in the hyperdiverse ant genus Pheidole is associated with distinct subcaste-and age-related patterns of worker brain organization. PLoS One 7:e31618. doi: 10.1371/journal.pone.0031618

O’Donnell, S., Bulova, S., Barrett, M., and Von Beeren, C. (2018). Brain investment under colony-level selection: soldier specialization in Eciton army ants (Formicidae: Dorylinae). BMC Zool. 3:3. doi: 10.1186/s40850-018-0028-3

Ott, S. R. (2008). Confocal microscopy in large insect brains: zinc-formaldehyde fixation improves synapsin immunostaining and preservation of morphology in whole-mounts. J. Neurosci. Methods 172, 220–230. doi: 10.1016/j.jneumeth.2008.04.031

Peng, H., Chung, P., Long, F., Qu, L., Jenett, A., Seeds, A. M., et al. (2011). BrainAligner: 3D registration atlases of Drosophila brains. Nat. Methods 8, 493–498. doi: 10.1038/nmeth.1602

Rein, K., Zöckler, M., Mader, M. T., Grübel, C., and Heisenberg, M. (2002). The Drosophila standard brain. Curr. Biol. 12, 227–231. doi: 10.1016/S0960-9822(02)00656-5

Riveros, A. J., Seid, M. A., and Wcislo, W. T. (2012). Evolution of brain size in class-based societies of fungus-growing ants (Attini). Anim. Behav. 83, 1043–1049. doi: 10.1016/j.anbehav.2012.01.032

Rybak, J. (2012). “The digital honey bee brain atlas,” in Honeybee Neurobiology and Behavior, eds C. Galizia, D. Eisenhardt, and M. Giurfa (Dordrecht: Springer), 125–140. doi: 10.1007/978-94-007-2099-2

Rybak, J., Kuß, A., Lamecker, H., Zachow, S., Hege, H.-C., Lienhard, M., et al. (2010). The digital bee brain: integrating and managing neurons in a common 3D reference system. Front. Syst. Neurosci. 4:30. doi: 10.3389/fnsys.2010.00030

Schindelin, J., Arganda-Carreras, I., Frise, E., Kaynig, V., Longair, M., Pietzsch, T., et al. (2012). Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682. doi: 10.1038/nmeth.2019

Seid, M. A., and Junge, E. (2016). Social isolation and brain development in the ant Camponotus floridanus. Sci. Nat. 103:42. doi: 10.1007/s00114-016-1364-1

Shattuck, D. W., Mirza, M., Adisetiyo, V., Hojatkashani, C., Salamon, G., Narr, K. L., et al. (2008). Construction of a 3D probabilistic atlas of human cortical structures. Neuroimage 39, 1064–1080. doi: 10.1016/j.neuroimage.2007.09.031

Sheehan, Z. B. V., Kamhi, J. F., Seid, M. A., and Narendra, A. (2019). Differential investment in brain regions for a diurnal and nocturnal lifestyle in Australian Myrmecia ants. J. Comp. Neurol. 527, 1261–1277. doi: 10.1002/cne.24617

Strausfeld, N. J. (2012). Atlas of an Insect Brain. Berlin: Springer Science & Business Media.

Talairach, J., and Tournoux, P. (1988). Co-Planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System: An Approach to Cerebral Imaging. Cambridge: Cambridge University Press, doi: 10.1111/mono.12083

Wehner, R., Fukushi, T., and Isler, K. (2007). On being small: brain allometry in ants. Brain. Behav. Evol. 69, 220–228. doi: 10.1159/000097057

Wilson, E. O. (2003). Pheidole in the New World: A Dominant, Hyperdiverse ant Genus, Vol. 1. Cambridge, MA: Harvard University Press.

Yu, J. Y., Kanai, M. I., Demir, E., Jefferis, G. S. X. E., and Dickson, B. J. (2010). Cellular organization of the neural circuit that drives Drosophila courtship behavior. Curr. Biol. 20, 1602–1614. doi: 10.1016/j.cub.2010.08.025

Keywords: standardized brain atlases, computational neuroimaging, evolutionary neurobiology, neuroethology, social brain evolution, Neuroanatomy, Ant brains

Citation: Arganda S, Arganda-Carreras I, Gordon DG, Hoadley AP, Pérez-Escudero A, Giurfa M and Traniello JFA (2022) Statistical Atlases and Automatic Labeling Strategies to Accelerate the Analysis of Social Insect Brain Evolution. Front. Ecol. Evol. 9:745707. doi: 10.3389/fevo.2021.745707

Received: 22 July 2021; Accepted: 28 December 2021;
Published: 17 February 2022.

Edited by:

Heikki Helanterä, University of Oulu, Finland

Reviewed by:

Qike Wang, The University of Melbourne, Australia
Jürgen Rybak, Max Planck Institute for Chemical Ecology, Germany

Copyright © 2022 Arganda, Arganda-Carreras, Gordon, Hoadley, Pérez-Escudero, Giurfa and Traniello. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sara Arganda, sarijuela@gmail.com