REVIEW article

Front. Insect Sci., 23 January 2023
Sec. Insect Neurobiology
Volume 3 - 2023 | https://doi.org/10.3389/finsc.2023.1016277

Micro-CT and deep learning: Modern techniques and applications in insect morphology and neuroscience

  • Institute of Biology, Karl-Franzens-University Graz, Graz, Austria

Advances in modern imaging and computer technologies have led to a steady rise in the use of micro-computed tomography (µCT) in many biological areas. In zoological research, this fast and non-destructive method for producing high-resolution, two- and three-dimensional images is increasingly being used for the functional analysis of the external and internal anatomy of animals. µCT is no longer limited to the analysis of specific biological tissues in a medical or preclinical context but can be combined with a variety of contrast agents to study the form and function of all kinds of tissues and species, from mammals and reptiles to fish and microscopic invertebrates. Concurrently, advances in the field of artificial intelligence, especially in deep learning, have revolutionised computer vision and facilitated the automatic, fast and ever more accurate analysis of two- and three-dimensional image datasets. Here, I give a brief overview of both micro-computed tomography and deep learning and present their recent applications, especially within the field of insect science. Furthermore, the combination of both approaches to investigate neural tissues and the resulting potential for the analysis of insect sensory systems, from receptor structures via neuronal pathways to the brain, are discussed.

Introduction

Technological advances during the last decades have given researchers many new tools and methods that have either opened up hitherto less approachable or even inaccessible areas of science or enabled them to push the boundaries of already established fields. Some of the most obvious advances have taken place in the development of computer technology, with computing power and available space to store and access information increasing nearly exponentially over the last 50 years (1). This computational revolution has driven the development and rise of, amongst many others, advanced imaging technologies as well as artificial intelligence (AI) with its subfields machine and deep learning. Here, I present an overview of a (necessarily small) selection of the recent advances in these quickly expanding fields, with a focus on X-ray micro-computed tomography (µCT) and AI-supported approaches to analyse and interpret two- and three-dimensional (2D and 3D, respectively) image data gathered with µCT. For both fields, I will give a brief introduction and illustrate how these technologies and methods can be combined specifically to shed light on the external and internal morphology of insects; their outer form as well as their neuronal structure.

X-ray micro-computed tomography

Introduction

Since the development of the first X-ray computed tomography (CT) scanner by Godfrey Hounsfield in the early 1970s (2), CT scanners have become practically ubiquitous in hospitals around the world for the fast, non-invasive detection of a wide range of pathologies, from brain aneurysms to bone fractures and tumours. With the advent of ever cheaper scan units and increasing image resolutions, CT has also become an increasingly interesting tool for biologists to investigate the external and internal morphology of their organisms of interest.

Many excellent reviews exist that describe the development, function and the many uses of CT and µCT in both the medical and natural sciences in detail (e.g., 2–9). Thus, the method will be explained only briefly here: Generally, in commercially available µCT scanners used in laboratories (also called desktop scanners), X-rays are generated through the deceleration of fast electrons in an X-ray tube and directed in a cone-shaped beam towards a sample placed on a rotating stage in front of a 2D detector array. Traversing the sample, X-rays are attenuated, with the amount of attenuation or absorption depending on the electron density and atomic number of the chemical elements in the sample (10). The detector collects the attenuated X-rays in the form of a 2D X-ray image, called a radiograph. The sample is then rotated in small angular intervals and an image is taken at each step, resulting in datasets of many hundreds or thousands of radiographs. Using filtered back projection or iterative reconstruction algorithms (7, 11), 2D tomographic images (axial cross-sections or ortho-slices) of the whole sample are generated, which form the basis for a complete 3D reconstruction, in which the grey value of each voxel represents the X-ray attenuation of that volume element of the sample. Both 2D tomographic images and 3D reconstructions can then be used to explore the interior and exterior structures of the sample qualitatively and quantitatively.
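
To make the reconstruction step more tangible, the following minimal sketch simulates projections of a 2D test object and reconstructs it via filtered back projection using scikit-image's radon/iradon functions; it is a toy illustration of the principle, not the reconstruction pipeline of any particular scanner.

```python
# Minimal sketch of tomographic reconstruction by filtered back projection,
# using scikit-image on a standard 2D test object. Illustrative only.
import numpy as np
import matplotlib.pyplot as plt
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# A standard test object standing in for one axial slice of a sample
image = rescale(shepp_logan_phantom(), scale=0.5)

# Simulate the rotation stage: one projection (line integrals of the
# attenuation) per angular step over 180 degrees
angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=angles)

# Filtered back projection: ramp-filter each projection, then smear
# ("back-project") it across the image plane and sum over all angles
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, img, title in zip(axes, [image, sinogram, reconstruction],
                          ["phantom", "sinogram", "FBP reconstruction"]):
    ax.imshow(img, cmap="gray")
    ax.set_title(title)
plt.show()
```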

The main advantage of µCT over classical techniques for anatomical and morphological studies is certainly the possibility to non-destructively and non-invasively create whole-volume 3D reconstructions of complete organisms in a relatively short amount of time due to minimal sample preparation, fast image acquisition and streamlined data post-processing (12). This is in sharp contrast to sectional image reconstruction using time-consuming and destructive histological methods including complex staining and embedding protocols, microtome slicing, manual mounting and sequential imaging of samples with light microscopes (LM) or confocal laser scanning microscopy (CLSM). Disadvantages, on the other hand, are the comparatively lower image resolution of most µCT scanners in comparison with CLSM and LM (typically ~200 nm, the Abbe diffraction limit; 13) and a lack of contrast in soft tissues. The latter is due to the low absorption rates of X-rays in biological tissues containing predominantly low-Z elements (low atomic number; e.g., carbon, oxygen, nitrogen), leading to mostly uniform density patterns in the image data (10, 14, 15). To overcome this issue, various contrasting or staining agents can be used to increase tissue contrast. Contrast agents usually comprise high-Z elements that bind to tissues, increasing the X-ray absorption and thus the image contrast. Common staining methods include elements like iodine (e.g., Lugol’s iodine potassium iodide solution), tungsten (phosphotungstic acid; PTA), osmium (osmium tetroxide), lead (lead nitrate or lead acetate) or silver (Bodian’s reduced silver stain or Golgi stain). The reader is referred to 10, 15–17 for in-depth evaluations of contrast-enhancing solutions and methods. Most of these contrast agents are relatively non-specific stains. For example, iodine is generally absorbed by lipid-rich soft tissue, PTA binds to various proteins like fibrin and collagen, and Golgi’s method stains neurons randomly (10, 15). To improve the selectivity of contrast-enhancing stains suitable for µCT, recent studies have employed immunohistochemical methods, coupling neuron-specific antibodies with gold nanoparticles to selectively stain neuronal nuclei in mice (9, 18). This approach of combining traditional immunohistological staining protocols with X-ray-absorbing nanoparticles is still new but has the potential to substantially improve the quantitative analysis of nervous systems using µCT.
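
For reference, the ~200 nm figure quoted above for light microscopy follows from the Abbe diffraction limit; a quick worked example (wavelength and numerical aperture chosen purely for illustration):

```latex
d = \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{500\ \text{nm}}{2 \times 1.4} \approx 180\ \text{nm}
```

That is, green light (λ ≈ 500 nm) imaged through a high-NA (≈1.4) oil-immersion objective cannot resolve structures much smaller than roughly 0.2 µm.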

Another way to produce high-contrast images and 3D models of soft and nervous tissue is to use phase-contrast µCT (PC-µCT). X-ray beams are not only absorbed when passing through a material, but the electromagnetic waves also undergo phase shifts at the boundaries of materials with different densities and thus refractive indices. Phase shifts along material boundaries can be analysed and used to reconstruct images with high edge-contrast between tissues without the need for contrast agents. To detect these changes in phase, parallel and highly coherent X-ray beams are needed, so that the use of phase-contrast µCT is usually restricted to synchrotron beamlines (15, 19, 20; however, see 21–23 for notable exceptions). Although this makes synchrotron-based PC-µCT as a technique not as accessible as general absorption-contrast µCT using laboratory-based desktop scanners, there are certain advantages to consider: Maximum spatial resolution using PC-µCT is considerably increased, reaching 10-50 nm/voxel, depending on the sample dimensions (and often at the cost of a highly reduced field of view; 15, 22, 24). Additionally, image acquisition at synchrotron facilities can be faster by a factor of 1000 when compared to commercially available µCT scanners, allowing for both high sample throughput and fast collection of time-series scans (9, 15, 25; and see examples below). Walker et al., for example, present time-resolved (sub-millisecond) tomographic data showing the mechanics and kinematics of the blowfly (Calliphora vicina) thoracic flight motor in 3D during tethered flight at a resolution of ~3 µm/voxel (26).
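
The physical quantities behind phase contrast can be summarised compactly: the X-ray refractive index of a material is commonly written with a decrement δ (governing the phase shift accumulated along the beam path) and an imaginary part β (governing absorption). For soft tissue at hard X-ray energies, δ typically exceeds β by two to three orders of magnitude, which is why phase information can reveal boundaries that absorption contrast misses (these are general literature values, not figures from the studies cited above).

```latex
n = 1 - \delta + i\beta, \qquad
\varphi = \frac{2\pi}{\lambda}\int \delta(z)\,\mathrm{d}z, \qquad
\frac{\delta}{\beta} \sim 10^{2}\text{–}10^{3}\ \text{(soft tissue, hard X-rays)}
```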

µCT and insect neuroscience

While the early generation of medical CT scanners only allowed identifying rather large and well delineated anatomical structures like brain tumours, major blood vessels (for angiography) or bones (for fracture detection and density measurements), a steady improvement of scanning speeds and spatial resolution led to an increased use of CT in small-mammal preclinical studies (mostly rodents; in and ex vivo; 3, 4, 6). The comparatively low resolution of conventional medical CTs prevented the widespread use of this technique for invertebrate morphology. From the beginning of the 2000s, however, µCT with spatial resolutions down to ~5 µm became more widely available (and affordable), leading to a “renaissance of insect morphology” in the years to come (27, 28). Today, many desktop µCT scanners can routinely achieve isotropic resolutions down to 1-2 µm/voxel, with some commercially available high-end scanners now even approaching nanoscale resolutions of ~0.4-0.2 µm/voxel (8, 29, 30). Additionally, recent years have also seen advances in the development of laboratory-based phase-contrast CT scanners with resolutions in the range of 0.5-0.1 µm, approaching spatial resolutions available at synchrotron beamlines (21, 22).

These advances in µCT technology have made it feasible to not only use CT as a tool in insect morphology (e.g., larval development: 31, 32; locomotion: 25, 33; muscular system: 19, 34; respiration: 35, 36; vision: 37, 38; wing structure: 39, 40) but to also explore the central nervous system (CNS) of insects. In 2007, for example, Mizutani et al. imaged the brain of larval Drosophila melanogaster using synchrotron-based µCT with a resolution of ~1 µm/voxel. The authors could visualise the major components of the developing supraoesophageal ganglion, including the optic lobes and the peduncles of the mushroom bodies. 3D views of the brain also included neuronal cell bodies and axons (41). A year later, Ribi et al. (42) presented the first application of a lab-based µCT system for the scanning and 3D visualisation of an invertebrate brain. Here, the authors scanned whole heads of the honey bee Apis mellifera stained with osmium tetroxide at a resolution of 7 µm/voxel. In the resulting 3D reconstruction, the main brain neuropiles (protocerebrum, antennal and optic lobes as well as the mushroom bodies and some substructures) are visualised and well delineated, demonstrating the suitability of the method for insect neuroscience (42). In 16, 3D and 2D views of a lacewing head stained with iodine and a mantophasmid tibia stained with PTA and scanned with a lab-based µCT are presented at even smaller voxel sizes of 2 and 0.9 µm, respectively. In the lacewing head, individual ommatidia and the layers of the optic lobe (lamina, medulla, lobula) are clearly delineated, as are other brain neuropiles like the peduncles and calyces of the mushroom bodies. In the 3D model of the tibia, some individual sensory cells as well as a scolopidial organ can be distinguished (16). Building on the pioneering work by Ribi et al. (42), Smith et al. (43) refined and adapted PTA-staining procedures to stain the brains of 19 bumblebees (Bombus terrestris) that were subsequently µCT-scanned at resolutions ranging from 3.1 to 4.6 µm/voxel. The resulting 3D reconstructions were then used for the first time to quantify brain allometry and differences in the volumes of bee brain neuropiles (43). Expanding on this work, Smith et al. (44) used their existing protocols to scan and segment the brains of 78 adult bumblebee workers exposed to either a neonicotinoid insecticide during development or to a control sucrose solution. Performing volumetric analyses on the brain neuropiles, the authors could show that the mushroom body calyces of workers exposed to the insecticide had significantly lower relative volumes compared to the control bees and that this negatively influenced their behaviour in learning and responsiveness tests. Rother et al. (45) then reused parts of the data collected by Smith et al. (44) to construct the first 3D insect brain atlas based on µCT instead of on CLSM data. The atlas visualises all major brain neuropiles in B. terrestris, including smaller ones like the central complex and its sub-divisions, the anterior optic tubercles and substructures of the mushroom body calyces. Additionally, several prominently visible neuronal tracts linking different neuropiles are identified. Although the authors also map some individual neurons and their projections into the central complex, these are visualised through classical immunostaining methods and CLSM, as no individual neurons are visible on the CT data itself (45).
In contrast to this, Ribi and Zeil (46) successfully traced the origins and projection areas of all large ocellar interneurons (L-neurons) in orchid bee (Euglossa imperialis) brains stained with osmium tetroxide and µCT-scanned at a resolution of 3 µm/voxel. Using fibre-tracing methods, the authors reconstruct the individual dendritic origins of the 27 L-neurons in the ocellar plexi (below the retinae) and follow their paths to the ipsi- or contralateral protocerebrum, where they terminate in close proximity to optic (lobula) and mechanosensory (antennal lobe) interneurons. While identification and reconstruction of the large fibres (up to 22 µm in diameter) and their main arborizations prove unproblematic at the resolution used, the authors also acknowledge that fine arborizations and branching patterns, and thus exact connectivities, are not resolved (46).

Through these examples, it becomes clear that a combination of adequate staining protocols and voxel resolutions in the low µm-range is sufficient to gain a general insight into the organisation of the insect CNS (and even some of its functions; 44), at least on the level of brain neuropiles. At resolutions of ~1-3 µm/voxel (now achievable with many desktop µCTs), cell bodies and axons of some neurons are visible, but these most likely constitute only a subset of the neuronal population, i.e., neurons with sufficiently large cell bodies and axon diameters to enable a clear distinction from background tissue. While identification and tracing of populations of larger neurons is possible, reconstructing their finer dendritic arborizations or synaptic fields would certainly require higher resolutions (as demonstrated below).

Overall, neuron sizes are highly variable in both vertebrates and invertebrates but are generally smaller in smaller animals (47, 48). In insects, neuron cell bodies can reach diameters of many tens of µm, while the minimum soma size seems to be limited to ~2-3 µm due to limitations in the size of the nucleus (49, 50). The diameter of unmyelinated axons, on the other hand, seems to be constrained only by the electrical properties of the neuronal cell machinery, the ion channels. At diameters <0.1 µm, the stochastic noise generated by thermodynamically activated ion channels increases exponentially, preventing meaningful information transport via the axon (51, 52). Taking neuron cell body and axon sizes into account, Chin et al. (24) argue that a voxel resolution of 0.3 µm “…is not just a sufficient level but in fact an optimized compromise between…” image resolution and acquisition speed when setting out to map neuronal brain architecture in insects. In the same study, the authors then use synchrotron-based µCT at said resolution and 250 Golgi-stained Drosophila melanogaster heads to create a standard 3D map of the fly’s whole-brain neural network “…in a few days” (24).

This study exemplifies the potential of modern µCT technology for the field of insect morphology and neuroscience: non-destructive, fast, high-resolution data acquisition at levels sufficient to enable the construction of three-dimensional, detailed outer and inner anatomy models in conjunction with maps of the neuronal architecture. As a result, µCT could easily be applied to study insect sensory systems in high anatomical and neuronal detail: high-resolution scans of stained whole-body preparations can be generated and the sensory system in question can be visualised, virtually dissected and explored, from the outer signal-receiving structures, to the sensory receptors, to the neuronal projections into the brain or other body parts. Arguably, the resolution of fine axonal projections or dendritic arborizations will not be as high as with CLSM, but both synchrotron- as well as modern lab-based CT setups can achieve resolutions high enough (~0.3 µm) that such quick models can provide valuable overviews and inform and guide subsequent experimental approaches using other, complementary methods to add to our understanding of the sensory systems in question.

AI, artificial neural networks and deep learning

After performing high-resolution µCT scans, one is confronted with very large datasets, usually consisting of many hundreds or thousands of images with megapixel resolution. A µCT dataset of a bush-cricket ear, for example, can thus easily contain 3 gigabytes (GB) of raw (8 bit; uncompressed) image data at 1 µm/voxel resolution (C. gorgonensis, ~1.5 x 1.5 x 1.3 mm³; 53). The much smaller head of a Drosophila (~0.5³ mm³) scanned at a voxel resolution of 0.3 µm – as carried out by 24 – would result in a ~4.6 GB dataset per individual fly. To reconstruct the outer and inner anatomy of an organism, the image data has to be labelled or annotated according to the desired level of detail, a process called segmentation. In essence, during segmentation each voxel of interest in a 3D dataset is assigned a label that identifies it as belonging to a specific structure or object. There are now many free, open-source (e.g., 3D Slicer, Drishti, ITK-SNAP, SPIERS) as well as commercial (Amira/Avizo, Dragonfly, VGStudio) software packages that allow the import, manipulation and segmentation of 3D datasets (see e.g., 7, 54, 55). Some segmentation tasks can be performed in a semi-automated manner, especially when the contrast between the structures to be labelled is strong and the image signal-to-noise ratio is high. The outer cuticle or mandibular structures of insects, for example, can often be easily segmented by simple thresholding methods, as these relatively hard body parts will be clearly delineated from softer materials. For the most part, however, segmentation of µCT data is a time-intensive process that often involves hours or days of manually labelling images.
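
The arithmetic behind such size estimates, together with the kind of simple global thresholding mentioned above for hard, high-contrast structures, can be sketched as follows; this is a minimal illustration in which the helper function, the Otsu threshold and the random stand-in volume are my own assumptions, not part of any cited workflow.

```python
# Sketch: estimate raw µCT dataset size and segment high-density voxels
# (e.g., cuticle) by simple global thresholding. Illustrative only.
import numpy as np
from skimage.filters import threshold_otsu

def dataset_size_gb(dims_mm, voxel_um, bytes_per_voxel=1):
    """Uncompressed size of an image stack covering dims_mm (x, y, z) in mm
    at an isotropic voxel size of voxel_um micrometres."""
    voxels = np.prod([d * 1000.0 / voxel_um for d in dims_mm])
    return voxels * bytes_per_voxel / 1e9

# Bush-cricket ear (~1.5 x 1.5 x 1.3 mm³) at 1 µm/voxel, 8 bit  -> ~2.9 GB
print(dataset_size_gb((1.5, 1.5, 1.3), voxel_um=1.0))
# Drosophila head (~0.5 mm cube) at 0.3 µm/voxel                 -> ~4.6 GB
print(dataset_size_gb((0.5, 0.5, 0.5), voxel_um=0.3))

# Simple global-threshold segmentation of a (here random, stand-in) volume:
volume = np.random.randint(0, 256, size=(200, 200, 200), dtype=np.uint8)
cuticle_mask = volume > threshold_otsu(volume)  # boolean label volume
print(cuticle_mask.mean())  # fraction of voxels labelled as "cuticle"
```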

While it is feasible to manually segment e.g. the cuticle and major internal organs of one or several CT-scanned specimens (56, 57), the cost in terms of time quickly becomes prohibitive when one wants to investigate species on a population level (24, 44, 45) or when attempting to segment neuronal networks (58, 59). For the latter cases, AI techniques like deep learning using artificial neural networks (ANN) can provide solutions to enable the analysis of large-scale CT datasets within sensible timeframes.

There are a multitude of introductory textbooks (e.g., 60–63) and excellent reviews about deep learning, its history (64 recapitulates 70 years of ANN development) and implementation in both biological and medical contexts (e.g., 65–70). Here, I will only give a brief overview before concentrating on examples of specific network architectures applied to (mostly) biological and biomedical imaging, and potential applications of deep learning specific to µCT data in insect science.

A brief introduction to deep learning

Deep learning, a subset of AI machine-learning techniques, has experienced considerable breakthroughs in the last decade that make it a valuable tool for practically anyone who wishes to analyse very large and complex datasets (“big data”); from engineers constructing self-driving cars to economists predicting stock-market developments to linguists performing natural language processing and speech recognition tasks to molecular biologists investigating protein folding structures (71, 72).

The advantage of deep learning over more general machine-learning approaches lies in the organisation of the specific ANNs employed to solve particular tasks. In conventional machine learning, an algorithm learns to perform a certain task on the basis of manually designed (“hand-crafted”) features that have to be extracted from the raw data (67, 68). This approach thus requires a good a priori understanding of which feature representations of a given dataset can be useful for solving the task at hand. In contrast, ANNs do not rely on a manual definition of features but can learn abstract hierarchical features directly from the raw data (65, 69). In supervised learning, the ANN is given pre-labelled training data representing the ground truth for a specific task. Given enough training data, the network then learns to predict the correct label for data it has not encountered in the training phase. In unsupervised learning, the network uses unlabelled data to uncover (often abstract) features or patterns inherent in the data. Such networks are often used in combination with supervised ANNs to enhance the overall performance (67, 73, 74).

ANNs are composed of various layers of artificial neurons or nodes (loosely inspired by the organisation and connectedness of neurons in the brain) that take information from neurons in the previous layer, apply mathematical functions and associated weights and biases to it and pass it on to neurons in the next layer of the network. A basic ANN normally consists of an input layer, a (low) number of hidden neuron layers that are sequentially (often fully) connected and an output layer that contains the result (e.g., a decision or classification). Convolutional neural networks (CNN) are a class of ANN that are especially useful for the analysis of 2D data like images or of sequential data and are extensively used for tasks like image recognition, object detection, motion tracking or speech analysis. Here, the first layers of the network after the input – the convolutional layers – are inspired by the neuronal organisation of the visual cortex, so that neurons are not fully connected to each other but form receptive fields with shared weights and biases (60, 73, 75). Subsequently, the information extracted from the convolutional layers is sent to a number of pooling layers, which simplify the information before sending it to either more convolutional layers or fully connected layers and finally to an output (65, 68). While the overall architecture of CNNs seems to be more complex than that of standard ANNs with only a few hidden layers, the sharing of weights and biases in the convolutional layers and the subsequent pooling reduces the overall number of connections and parameters, making CNNs easier to train (76).
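
As a concrete, deliberately small illustration of this layered organisation, a convolutional classifier could be declared in Keras as in the following sketch; the layer counts, sizes and input shape are arbitrary, and the snippet does not correspond to any network discussed in this review.

```python
# Minimal convolutional classifier: conv -> pool -> conv -> pool -> dense.
# Purely illustrative; not a network from any of the cited studies.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),          # e.g., a 64x64 greyscale image
    layers.Conv2D(16, 3, activation="relu"),  # convolutional layer: local receptive
    layers.MaxPooling2D(2),                   # fields with shared weights; pooling
    layers.Conv2D(32, 3, activation="relu"),  # simplifies/aggregates feature maps
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),      # fully connected layer
    layers.Dense(10, activation="softmax"),   # output layer: class probabilities
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Supervised training would then be: model.fit(images, labels, epochs=...)
```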

While CNNs with various but usually low depths were already in use in the 1990s and 2000s (64, 77), it was the development and implementation of ever faster graphics card processors in conjunction with advances in CNN architecture that led to important breakthroughs for deep learning in computer vision in the early 2010s. In 2012, Krizhevsky et al. presented a 13-layer deep CNN (AlexNet; five convolutional layers interspersed with five pooling and normalisation layers plus three fully connected layers) which classified the 1.2 million images of the ImageNet dataset into its 1000 individual classes with nearly half the error rate of the then state-of-the-art models (70, 76). Later research on transfer learning (the idea that networks pre-trained on bigger, more general datasets can be used as the basis for fine-tuned networks performing different but related tasks; 78, 79) and deep residual learning networks resulted in the construction of very deep ResNets (up to 152 layers) that further significantly reduced error rates in image classification, detection, localisation and segmentation tasks (80). ResNets are now often used as the backbone architecture for the development of other networks for computer vision tasks (81). Prominent examples are the fully convolutional networks U-Net and V-Net, both building on the ideas of ResNets to increase performance specifically in 2D and 3D biomedical image segmentation (67, 82, 83). The next section will mainly concentrate on these more recent deep learning networks and on their use in the context of image segmentation. Since the (bio)medical sciences have not only played a considerable role in the development of CT and µCT technology but have also partly driven the development of ANNs able to analyse the resulting data, the examples given will include implementations from biological and medical fields alike.

AI-supported image analysis

In the medical sciences, mainly driven by the early and ubiquitous application of imaging devices like MRI and CT as diagnostic tools, AI has been used since the 1990s on an ever-growing scale (see e.g., 77, 84). Here, learning algorithms are used in image analysis tasks ranging from object detection and classification (e.g., nuclei, cell types, tumours) to automated segmentation in 2D and 3D (e.g., brain, liver, prostate, cardiac vessels; 65, 70, 82, 83). The applications for AI in medical imaging are numerous, as are the data sources used to train the networks on, which leads to a large number of very specialised ANN implementations (and associated issues concerning training data availability, label noise, non-standard data acquisition with different modalities, biased datasets, etc.). Some examples of these implementations for the analysis of medical CT data are given below:

Hamwood et al. (85) use two modified U-Nets to automatically segment the boundary of the bony parts of the human orbit from MRI or CT scans with results similar to segmentation by human specialists, although in a fraction of the time (1.5-3 min instead of ~4 h). However, the training and test data only includes MRI and CT scans from 11 test subjects, suggesting that the algorithm could perform better after being trained with more data and also highlighting the problem of data availability in some clinical contexts (85). In other circumstances, however, data availability is less of an issue: Xie et al. (86), for example, use publicly available, high-resolution (sub-mm in this context) lung CT scans from 5000 subjects with chronic obstructive pulmonary disease (COPD) to train a relational 2-stage U-Net (RTSU-Net) to automatically segment the five pulmonary lobes. Their approach includes one stage that extracts global features from the whole 3D scan and a second (simultaneous) stage that captures local, high-resolution details. The authors then apply a transfer learning approach and retrain the network with CT scans of the lungs of 370 patients with suspicion of COVID-19. In both cases (COPD and COVID-19), RTSU-Net significantly outperforms three standard networks and reaches human segmentation accuracy, even for the COVID-19 dataset with its much smaller training set and different pathologies (86).
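
To make the U-Net architecture referenced in these (and the following) studies a little more concrete, the sketch below shows a heavily stripped-down encoder-decoder with a single skip connection in Keras; real U-Nets are much deeper and more elaborate, and this toy version is not the implementation used in any of the cited papers.

```python
# Toy U-Net-style segmentation network: downsampling path, upsampling path,
# and a skip connection that re-injects high-resolution features.
# Illustrative sketch only.
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(128, 128, 1))

# Encoder (contracting path)
c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
p1 = layers.MaxPooling2D(2)(c1)
c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)

# Decoder (expanding path) with a skip connection from c1
u1 = layers.UpSampling2D(2)(c2)
u1 = layers.Concatenate()([u1, c1])        # skip connection
c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)

# One output channel per pixel/voxel: probability of belonging to the label
outputs = layers.Conv2D(1, 1, activation="sigmoid")(c3)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```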

For a different segmentation task, Lindgren Belal et al. (87) construct and train a fully convolutional network to automatically identify and measure the volumes of 49 bones of the human skeleton (mainly vertebrae and ribs) on the basis of whole-body CT scans. The CNN is trained on 100 manually segmented CT scans, tested on scans of 46 different patients and its performance is compared against the segmentation results of a trained radiologist. This specialist manually segmented a set of bones twice for five patients, with data from two different CT scans per patient. Dice coefficients (a measure of accuracy that considers true positive, false positive and false negative labels; with a value of 1 corresponding to full and 0 to no overlap of automated segmentation and ground truth) are in the range of 0.85 but, interestingly, the intraindividual volume differences between bones of the five test-patient scans are much higher for the manual observer (3-14%) than for the CNN (1-7%), suggesting that the CNN produced results with a much higher reproducibility than the human specialist (87).
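
Since the Dice coefficient recurs as the accuracy measure in most of the segmentation studies discussed here, its formula and a minimal implementation for binary masks may be useful (my own sketch, not code from the cited work):

```python
# Dice similarity coefficient for two binary segmentation masks:
# Dice = 2*TP / (2*TP + FP + FN) = 2*|A ∩ B| / (|A| + |B|)
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """pred, truth: boolean arrays of identical shape (2D or 3D masks)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0  # both empty -> perfect

# Toy example: two partially overlapping square masks
a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
print(dice(a, b))  # 0.64 for this example
```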

More in the context of preclinical µCT using small mammals, Malimban et al. (88) apply the nnU-Net pipeline (an “out-of-the-box”, hyper-versatile, self-configuring segmentation method based on the U-Net architecture; 89) to an auto-contouring and -segmentation task of internal organs of mice on the basis of in vivo µCT images. The nnU-Net is applied to the 3D data and shown to perform better and more robustly than a comparable 2D network (Dice coefficients of 0.91-0.97, depending on segmented organ). Additionally, the authors show that the nnU-Net’s performance on data taken from a different distribution than the original training data (contrast-enhanced instead of normal CT scans) is far superior to the 2D network, demonstrating a high generalisability of the trained network (88).

In biology, the use of AI to analyse image data has been on the rise as well, especially since the advent of deep CNNs and the accompanying frameworks (e.g., Keras, TensorFlow), which dramatically decreased the error rates of ANNs and made it easier to implement specific model architectures (i.e., the structure of the neural network). Widespread uses for AI in biological image analysis are the automated identification of species from still images (e.g., iNaturalist, Merlin Bird ID or Flora Incognita; 90) or markerless pose estimation and motion tracking from videos of behaving animals (e.g., DeepLabCut, 91, 92), to name but a few examples. There are certainly many more uses of deep learning in ecology, bioacoustics or the behavioural sciences, and the interested reader is referred to e.g., 66, 93–97.

ANNs and insect neuroscience

While deep learning networks in species recognition or motion tracking mainly perform object recognition tasks, ANNs in neurobiology are now increasingly used to perform segmentation of neurons and biological neural networks, usually based on 3D microscopy data. Here, the data stem from CLSM, scanning or transmission electron microscopy (EM) or similar high-resolution imaging methods. Li et al., for example, apply a complex, 200+ layer, ResNet-like CNN to automatically trace neurons from 3D optical microscopy datasets of single (stained) neurons of various vertebrate and invertebrate species. The authors demonstrate that their neural segmentation approach is robust against image noise and that it significantly outperforms prior state-of-the-art tracing methods (98).

In another approach, Januszewski et al. (59) construct a new type of flood-filling network (FFN) to automatically trace and reconstruct 3D neurons in serial block-face EM images of a section of zebra finch brain (~100³ µm³ at an xy-resolution of 9 nm and a z-resolution of 20 nm; containing ~450 somata). While the network itself is a 19-layer CNN building on AlexNet and ResNet characteristics, it differs from these in that it only segments one object at any given time and that it implements a new type of feedback pathway. This feedback enables the network to use information about previously labelled voxels, increasing its performance by an order of magnitude when compared to two baseline CNNs (1.1 mm mean error-free neurite length and only four wrongly merged neuronal processes within 97 mm of traced neurons). Applied to different EM datasets of the Drosophila optic lobe and mouse somatosensory cortex, the FFN outperformed all previously applied methods, including a U-Net approach (59). With the help of this FFN and using focussed ion beam SEM, Scheffer et al. (99) built the complete connectome of a Drosophila hemibrain (one half of the central region of the brain, including the whole central complex; ~250³ µm³ in volume with a resolution of 8 nm/voxel), identifying, tracing and characterising ~25,000 individual neurons, their neurites and synapses (~20 million). This connectome represents approximately a quarter of all neurons in the fly’s brain and substantially adds to information previously collected in smaller sections of the Drosophila brain using similar methods (e.g., the connectome of the ~1000 neurons of the mushroom body α lobe; 100). However, even with the extensive use of ANNs to analyse and segment the data, the authors estimate that an additional 50-100 person-years of proofreading were invested to generate the final outcome (99).

Despite these advances in using AI and deep learning tools for automated analysis and segmentation tasks, there seem to be only two studies so far that investigate fully automated tissue segmentation from µCT data in a non-medical context: Toulkeridou et al. (101) produce complete µCT scans of 76 species of ants (iodine-stained; resolutions between 0.5 and 3.5 µm/voxel) and subsequently segment their brains in a semi-automated manner (using a seed-based watershed method followed by manual post-processing; ~5 hours of work per brain). This provides the authors with a decent training (60%) and testing (40%) dataset of 76 brains with an average of 1000 labelled images (cross-sections in all three dimensions) each. They then train a 15-layer U-Net on the 2D data (using data from all three dimensions) to segment the brain from other tissue (fat, muscles, etc.) and the cuticle. The trained ANN segments whole brains in 1-2 minutes, achieves Dice coefficients of 0.9 (after combining data into 3D and post-processing) and is shown to also segment the brains of other insects (wasp and praying mantis) with similar accuracy (101). However, a significant improvement of its performance could potentially be accomplished if the network was adapted and trained on 3D data, thus taking spatial or structural information like position and size into account (like the ANNs in 59, 88).

To this effect, in a recent study by Lösel et al. (102), a 3D U-Net is trained to automatically segment µCT data of the brains of both honey bees and bumblebees (PTA-stained; 5.4 µm/voxel resolution) into six major areas (antennal lobes, mushroom bodies, central complex, medulla, lobula and ‘other neuropils’). Here, the authors make use of the free, open-source, online platform Biomedisa (biomedisa.org; 103) for semi-automated and deep learning-supported image segmentation. In its basic form, Biomedisa allows the user to upload a sparsely annotated 3D dataset (i.e., structures of interest are not completely labelled but labels are supplied in intervals of tens of slices) in a variety of common file formats. The pre-segmented images are then used as starting points for adaptive weighted random walk algorithms that perform ‘smart interpolation’ whilst taking into account the underlying 3D data (103). In this semi-automated way, one can generate complete sets of segmentation labels for 3D datasets in a fraction of the time usually needed and with better accuracy when compared to other interpolation methods. Lösel et al. (102) thus create a training dataset of 26 fully labelled honey bee brains with an average Dice score of 0.97 when compared to manual labelling, in contrast to a Dice score of 0.93 when using standard interpolation methods (see also 103 for a table comparing various segmentation methods). The authors proceed to use Biomedisa’s inbuilt 3D U-Net architecture to train a network that subsequently segmented a further 84 honey bee and 64 bumblebee brains with Dice scores of 0.99 and 0.98, respectively. This particular approach allowed the authors to perform quantitative neuroanatomical comparisons of brain areas not for a select few, but for a large number of individual animals in two species, whilst reducing manual segmentation and analysis effort by up to 98% (102).

Conclusion

Overall, the studies by Toulkeridou et al. (101) and Lösel et al. (102) and, likewise, the lack of other research in this direction highlight the following: While using deep learning approaches for automated segmentation tasks in insects (or invertebrates in general) is certainly possible, one of the bottlenecks is still the availability of large sets of standardised and sufficiently labelled training data. Projects like Biomedisa and the resulting (openly available) data present ways to overcome this issue (102, 103). However, while more and more researchers are implementing µCT in their studies, datasets like those presented above (24, 44, 45, 101, 102), where either many closely related species or many individuals of one species are imaged with the same modality, are few and far between. While medical researchers can often access more or less standardised image data from big, centralised databases (86), similar options are usually not (yet) available for biologists. Nevertheless, the increasing ubiquity of µCT in biology and the efforts of the community to share raw and annotated data on open-access repositories and websites (e.g. Dryad, Zenodo, Biomedisa) and pre-trained networks on developer platforms (e.g., GitHub) provide interesting possibilities: Researchers could make use of transfer learning methods and use those pre-trained networks in conjunction with their own training data, augmented datasets and parameter fine-tuning to derive ANNs able to better fulfil a desired task (78, 79). Conceivably, one could also collect more diverse datasets and train deep ANNs with advanced architectures (like the promising nnU-Net used in 88 or the FFNs in 99) to extract features that will allow them to perform tasks independent of variables like imaging modality, resolution, staining protocol or species. Hypothetically, such a network could then be trained to, for example, automatically segment and quantify not only whole insect brains, but also individual neuropiles, across taxa or during various developmental stages.
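
What such a transfer-learning step could look like in practice is sketched below; the weight file, data shapes and hyperparameters are placeholders, and the point is the general pattern (load a shared pre-trained model, freeze its feature-extracting layers, fine-tune the rest on a small in-house dataset) rather than any specific published pipeline.

```python
# Transfer-learning sketch: start from a previously trained segmentation
# network and fine-tune it on a small, newly labelled µCT dataset.
# File names, shapes and hyperparameters are placeholders/assumptions.
import tensorflow as tf

# 1) Load a pre-trained model shared by others (hypothetical file name)
base = tf.keras.models.load_model("pretrained_brain_segmenter.h5")

# 2) Freeze the early (feature-extracting) layers and keep only the last
#    few layers trainable, so the small new dataset fine-tunes rather than
#    overwrites the learned representations
for layer in base.layers[:-3]:
    layer.trainable = False

base.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
             loss="binary_crossentropy")

# 3) Fine-tune on a handful of newly annotated volumes (placeholder arrays):
#    new_scans:  (n, depth, height, width, 1) image data
#    new_labels: matching binary label volumes
# base.fit(new_scans, new_labels, epochs=20, batch_size=1)
```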

Also, while image resolutions like those attained with EM methods (59, 99) are far higher than those of state-of-the-art lab-based µCT scanners, they are, in practice, too high to allow imaging of even whole invertebrate brains in a realistic timeframe. For example, Scheffer et al. (99) report the total imaging time for one (!) Drosophila hemibrain as “roughly four [ … ] years”. However, using either synchrotron or lab-based CT methods in combination with appropriate tissue fixation and staining protocols to create sufficient tissue contrast, imaging the neural tissue of complete insect brains or even whole bodies at sub-µm resolution (0.3 µm/voxel, as suggested by 24) has become, if not a trivial, then at least not an unreasonable endeavour. Such datasets would undoubtedly be quite large: a whole-body Drosophila (which very roughly fits a bounding box of 3.5 x 1.5 x 1.5 mm³), scanned at 0.5 µm/voxel resolution, would result in ~63 GB of uncompressed image data. However, with an ANN that analyses data in 3D and has been trained to, e.g., separate neuronal tissue from internal organs and muscles, a dataset like this could be automatically segmented into broad anatomical components in a matter of minutes. If one could also conjugate contrast-enhancing nanoparticles to antibodies specifically targeting insect neurons (e.g., anti-horseradish peroxidase antibodies; 104, 105), as has been shown in mice (18), an additional ANN could subsequently identify and trace those targeted neurons. While it can be argued that even a resolution of 0.3 µm/voxel combined with targeted contrast enhancers is still not sufficient to trace the finest dendritic arborizations and thus the full connectivity of neurons (as done in 99, 100), such an approach would still significantly facilitate the exploration of neuronal circuits in the insect CNS.

In summary, combining state-of-the-art µCT scanning methods with subsequent, AI-supported, 3D data analysis to study insect morphology has the potential to allow researchers to approach certain questions or problems from a different methodological angle: µCT can be used to quickly, cheaply, non-invasively and non-destructively image either whole insects or specific regions of interest with relatively high spatial resolution. The resulting datasets can be used to produce highly detailed virtual 3D models of the animal’s exterior and interior that can be virtually explored and dissected. These models already allow for the qualitative and quantitative analysis of major anatomical structures that would be technically highly complicated and very time-consuming to carry out with more traditional methods: One can, for example, easily compare the relative densities of various parts of the chitinous cuticle (intra-, inter-individually or between species), set morphological landmarks and perform morphometric measurements in 3D with high accuracy, or measure the volume of internal organs, tracheae, muscles or parts of the CNS. As an aside, these models can additionally serve as detailed geometrical blueprints for the construction of virtual sensory receptors like, e.g., mechano- or electroreceptive hairs (106, 107), the different antennal structures linked to the mechanoreceptive Johnston’s organ (108) or the tympanic auditory receptors of bush-crickets (109). On the basis of these 3D geometries, advanced finite element models (another quickly advancing field with great potential that could only be mentioned briefly here) can then be constructed to simulate the structural mechanical behaviour in response to physical stimuli, be it mechanical motion, acoustic waves or electric fields, thus also complementing our understanding of insect sensory systems. Finally, there is the possibility of implementing properly trained, deep ANNs to automatically classify the vast number of voxels resulting from high-resolution µCT scans into categories like cuticle, muscle or neuronal tissue, leading to the automated, fast and accurate segmentation and tracing of the internal insect anatomy, including at the level of individual neurons. Although some problems do still exist – like standardised training data availability or neuron-specific contrast enhancers – these will most likely be solved in the near future, hopefully leading to methodological frameworks and streamlined pipelines that will allow researchers to, for example, map whole sensory systems, from the receptor, via its neuronal pathways, to the brain. Such projects could be completed in weeks or a few months, instead of taking years, thus helping to extend our knowledge and understanding of the neuronal systems guiding the behaviour of insects.

Author contributions

TJ devised and wrote the manuscript. The author confirms being the sole contributor of this work and has approved it for publication.

Funding

TJ has received funding from the Austrian Science Fund (FWF): [P 35792-B].

Acknowledgments

TJ thanks the reviewers for their very helpful comments on the original manuscript.

Conflict of interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Leiserson CE, Thompson NC, Emer JS, Kuszmaul BC, Lampson BW, Sanchez D, et al. There’s plenty of room at the top: What will drive computer performance after moore’s law? Sci (New York N.Y.) (2020) 368:1–7. doi: 10.1126/science.aam9744

2. Petrik V, Apok V, Britton JA, Bell BA, Papadopoulos MC. Godfrey Hounsfield and the dawn of computed tomography. Neurosurgery (2006) 58:780–7. doi: 10.1227/01.neu.0000204309.91666.06

3. Holdsworth DW, Thornton MM. Micro-CT in small animal and specimen imaging. Trends Biotechnol (2002) 20:S34–9. doi: 10.1016/S0167-7799(02)02004-8

4. Kalender WA. X-Ray computed tomography. Phys Med Biol (2006) 51:R29–43. doi: 10.1088/0031-9155/51/13/R03

5. Socha JJ, Westneat MW, Harrison JF, Waters JS, Lee W-K. Real-time phase-contrast x-ray imaging: A new technique for the study of animal form and function. BMC Biol (2007) 5:1–15. doi: 10.1186/1741-7007-5-6

6. Schambach SJ, Bag S, Schilling L, Groden C, Brockmann MA. Application of micro-CT in small animal imaging. Methods (2010) 50:2–13. doi: 10.1016/j.ymeth.2009.08.007

7. Sutton MD, Rahman IA, Garwood RJ. Techniques for virtual palaeontology. Chichester: Wiley Blackwell (2014).

8. Clark DP, Badea CT. Advances in micro-CT imaging of small animals. Phys Med (2021) 88:175–92. doi: 10.1016/j.ejmp.2021.07.005

9. Rodrigues PV, Tostes K, Bosque BP, Godoy JVP, Amorim Neto DP, Dias CSB, et al. Illuminating the brain with x-rays: Contributions and future perspectives of high-resolution microtomography to neuroscience. Front Neurosci (2021) 15:627994. doi: 10.3389/fnins.2021.627994

10. Mizutani R, Suzuki Y. X-Ray microtomography in biology. Micron (2012) 43:104–15. doi: 10.1016/j.micron.2011.10.002

11. Du Plessis A, Broeckhoven C, Guelpa A, Le Roux SG. Laboratory x-ray micro-computed tomography: A user guideline for biological samples. GigaScience (2017) 6:1–11. doi: 10.1093/gigascience/gix027

12. Bournonville S, Vangrunderbeeck S, Kerckhofs G. Contrast-enhanced microCT for virtual 3D anatomical pathology of biological tissues: A literature review. Contrast media Mol Imaging (2019) 2019:8617406. doi: 10.1155/2019/8617406

13. Hell SW, Schmidt R, Egner A. Diffraction-unlimited three-dimensional optical nanoscopy with opposing lenses. Nat Photon (2009) 3:381–7. doi: 10.1038/nphoton.2009.112

14. Westneat MW, Socha JJ, Lee W-K. Advances in biological structure, function, and physiology using synchrotron X-ray imaging. Annu Rev Physiol (2008) 70:119–42. doi: 10.1146/annurev.physiol.70.113006.100434

15. Rawson SD, Maksimcuka J, Withers PJ, Cartmell SH. X-Ray computed tomography in life sciences. BMC Biol (2020) 18:21. doi: 10.1186/s12915-020-0753-2

16. Metscher BD. microCT for comparative morphology: simple staining methods allow high-contrast 3D imaging of diverse non-mineralized animal tissues. BMC Physiol (2009) 9:1–14. doi: 10.1186/1472-6793-9-11

17. Pauwels E, van Loo D, Cornillie P, Brabant L, van Hoorebeke L. An exploratory study of contrast agents for soft tissue visualization by means of high resolution X-ray computed tomography imaging. J Microscopy (2013) 250:21–31. doi: 10.1111/jmi.12013

18. Depannemaecker D, Santos LEC, Almeida A-CG, Ferreira GBS, Baraldi GL, Miqueles EX, et al. Gold nanoparticles for X-ray microtomography of neurons. ACS Chem Neurosci (2019) 10:3404–8. doi: 10.1021/acschemneuro.9b00290

19. Betz O, Wegst U, Weide D, Heethoff M, Helfen L, Lee W-K, et al. Imaging applications of synchrotron X-ray phase-contrast microtomography in biological morphology and biomaterials science. i. general aspects of the technique and its advantages in the analysis of millimetre-sized arthropod structure. J microscopy (2007) 227:51–71. doi: 10.1111/j.1365-2818.2007.01785.x

20. Fonseca MdC, Araujo BHS, Dias CSB, Archilha NL, Neto DPA, et al. High-resolution synchrotron-based X-ray microtomography as a tool to unveil the three-dimensional neuronal architecture of the brain. Sci Rep (2018) 8:12074. doi: 10.1038/s41598-018-30501-x

21. Müller M, Sena Oliveira I, Allner S, Ferstl S, Bidola P, Mechlem K, et al. Myoanatomy of the velvet worm leg revealed by laboratory-based nanofocus X-ray source tomography. Proc Natl Acad Sci U S A (2017) 114:12378–83. doi: 10.1073/pnas.1710742114

22. Töpperwien M, Krenkel M, Vincenz D, Stöber F, Oelschlegel AM, Goldschmidt J, et al. Three-dimensional mouse brain cytoarchitecture revealed by laboratory-based x-ray phase-contrast tomography. Sci Rep (2017) 7:42847. doi: 10.1038/srep42847

23. Romell J, Jie VW, Miettinen A, Baird E, Hertz HM. Laboratory phase-contrast nanotomography of unstained Bombus terrestris compound eyes. J microscopy (2021) 283:29–40. doi: 10.1111/jmi.13005

24. Chin A-L, Yang S-M, Chen H-H, Li M-T, Lee T-T, Chen Y-J, et al. A synchrotron X-ray imaging strategy to map large animal brains. Chin J Phys (2020) 65:24–32. doi: 10.1016/j.cjph.2020.01.010

25. dos Santos Rolo T, Ershov A, van de Kamp T, Baumbach T. In vivo X-ray cine-tomography for tracking morphological dynamics. Proc Natl Acad Sci U S A (2014) 111:3921–6. doi: 10.1073/pnas.1308650111

26. Walker SM, Schwyn DA, Mokso R, Wicklein M, Müller T, Doube M, et al. In vivo time-resolved microtomography reveals the mechanics of the blowfly flight motor. PloS Biol (2014) 12:e1001823. doi: 10.1371/journal.pbio.1001823

27. Hörnschemeyer T, Beutel RG, Pasop F. Head structures of Priacma serrata LeConte (Coleoptera, Archostemata) inferred from X-ray tomography. J Morphology (2002) 252:298–314. doi: 10.1002/jmor.1107

28. Friedrich F, Beutel RG. Micro-computer tomography and a renaissance of insect morphology. In: Stock SR, editor. Developments in X-ray tomography VI, vol. 70781U . Bellingham, WA, USA: SPIE (2008).

29. Busse M, Müller M, Kimm MA, Ferstl S, Allner S, Achterhold K, et al. Three-dimensional virtual histology enabled through cytoplasm-specific X-ray stain for microscopic and nanoscopic computed tomography. Proc Natl Acad Sci U S A (2018) 115:2293–8. doi: 10.1073/pnas.1720862115

30. Hipsley CA, Aguilar R, Black JR, Hocknull SA. High-throughput microCT scanning of small specimens: Preparation, packing, parameters and post-processing. Sci Rep (2020) 10:13863. doi: 10.1038/s41598-020-70970-7

31. Lowe T, Garwood RJ, Simonsen TJ, Bradley RS, Withers PJ. Metamorphosis revealed: Time-lapse three-dimensional imaging inside a living chrysalis. J R Society Interface (2013) 10:20130304. doi: 10.1098/rsif.2013.0304

32. Schoborg TA, Smith SL, Smith LN, Morris HD, Rusan NM. Micro-computed tomography as a platform for exploring Drosophila development. Dev (Cambridge England) (2019) 146:1–15. doi: 10.1242/dev.176685

33. Simon MA, Woods WA, Serebrenik YV, Simon SM, van Griethuijsen LI, Socha JJ, et al. Visceral-locomotory pistoning in crawling caterpillars. Curr Biol CB (2010) 20:1458–63. doi: 10.1016/j.cub.2010.06.059

34. Lieberman ZE, Billen J, van de Kamp T, Boudinot BE. The ant abdomen: The skeletomuscular and soft tissue anatomy of Amblyopone australis workers (Hymenoptera: Formicidae). J Morphol (2022) 283:693–770. doi: 10.1002/jmor.21471

35. Lee W-K, Socha JJ. Direct visualization of hemolymph flow in the heart of a grasshopper (Schistocerca americana). BMC Physiol (2009) 9:2. doi: 10.1186/1472-6793-9-2

36. Socha JJ, Förster TD, Greenlee KJ. Issues of convection in insect respiration: Insights from synchrotron X-ray imaging and beyond. Respir Physiol Neurobiol (2010) 173 Suppl:S65–73. doi: 10.1016/j.resp.2010.03.013

37. Taylor GJ, Ribi W, Bech M, Bodey AJ, Rau C, Steuwer A, et al. The dual function of orchid bee ocelli as revealed by X-ray microtomography. Curr Biol CB (2016) 26:1319–24. doi: 10.1016/j.cub.2016.03.038

38. Taylor GJ, Tichit P, Schmidt MD, Bodey AJ, Rau C, Baird E. Bumblebee visual allometry results in locally improved resolution and globally improved sensitivity. eLife (2019) 8:1–32. doi: 10.7554/eLife.40613

39. Jongerius SR, Lentink D. Structural analysis of a dragonfly wing. Exp Mech (2010) 50:1323–34. doi: 10.1007/s11340-010-9411-x

40. Salcedo MK, Socha JJ. Circulation in insect wings. Integr Comp Biol (2020) 60:1208–20. doi: 10.1093/icb/icaa124

41. Mizutani R, Takeuchi A, Hara T, Uesugi K, Suzuki Y. Computed tomography imaging of the neuronal structure of Drosophila brain. J Synchrotron Radiat (2007) 14:282–7. doi: 10.1107/S0909049507009004

42. Ribi W, Senden TJ, Sakellariou A, Limaye A, Zhang S. Imaging honey bee brain anatomy with micro-x-ray-computed tomography. J Neurosci Methods (2008) 171:93–7. doi: 10.1016/j.jneumeth.2008.02.010

43. Smith DB, Bernhardt G, Raine NE, Abel RL, Sykes D, Ahmed F, et al. Exploring miniature insect brains using micro-CT scanning techniques. Sci Rep (2016) 6:21768. doi: 10.1038/srep21768

44. Smith DB, Arce AN, Ramos Rodrigues A, Bischoff PH, Burris D, Ahmed F, et al. Insecticide exposure during brood or early-adult development reduces brain growth and impairs adult learning in bumblebees. Proc Biol Sci (2020) 287:1–10. doi: 10.1098/rspb.2019.2442

45. Rother L, Kraft N, Smith DB, el Jundi B, Gill RJ, Pfeiffer K. A micro-CT-based standard brain atlas of the bumblebee. Cell Tissue Res (2021) 386:29–45. doi: 10.1007/s00441-021-03482-z

46. Ribi W, Zeil J. Three-dimensional visualization of ocellar interneurons of the orchid bee Euglossa imperialis using micro X-ray computed tomography. J Comp Neurol (2017) 525:3581–95. doi: 10.1002/cne.24260

47. Meinertzhagen IA. The organisation of invertebrate brains: Cells, synapses and circuits. Acta Zoologica (2010) 91:64–71. doi: 10.1111/j.1463-6395.2009.00425.x

48. Polilov AA, Makarova AA, Kolesnikova UK. Cognitive abilities with a tiny brain: Neuronal structures and associative learning in the minute Nephanes titan (coleoptera: Ptiliidae). Arthropod Structure Dev (2019) 48:98–102. doi: 10.1016/j.asd.2018.11.008

49. Quesada R, Triana E, Vargas G, Douglass JK, Seid MA, Niven JE, et al. The allometry of CNS size and consequences of miniaturization in orb-weaving and cleptoparasitic spiders. Arthropod Structure Dev (2011) 40:521–9. doi: 10.1016/j.asd.2011.07.002

50. Polilov AA, Makarova AA. Constant neuropilar ratio in the insect brain. Sci Rep (2020) 10:21426. doi: 10.1038/s41598-020-78599-2

51. Faisal AA, White JA, Laughlin SB. Ion-channel noise places limits on the miniaturization of the brain’s wiring. Curr Biol (2005) 15:1143–9. doi: 10.1016/j.cub.2005.05.056

52. Eberhard WG, Wcislo WT. Grade changes in brain–body allometry. In: Casas J, editor. Spider physiology and behaviour: Physiology. Amsterdam: Academic Press (2011). p. 155–214.

53. Celiker E, Jonsson T, Montealegre-Z F. On the tympanic membrane impedance of the katydid Copiphora gorgonensis (Insecta: Orthoptera: Tettigoniidae). J Acoust Soc Am (2020) 148:1952. doi: 10.1121/10.0002119

54. Abel RL, Laurini CR, Richter M. A palaeobiologist’s guide to ‘virtual’ micro-CT preparation. Palaeontologia Electronica (2012) 15(2.6T):1–16. doi: 10.26879/284

55. Lautenschlager S. Reconstructing the past: Methods and techniques for the digital restoration of fossils. R Soc Open Sci (2016) 3:160342. doi: 10.1098/rsos.160342

56. Faulwetter S, Vasileiadou A, Kouratoras M, Thanos D, Arvanitidis C. Micro-computed tomography: Introducing new dimensions to taxonomy. ZooKeys (2013) (263):1–45. doi: 10.3897/zookeys.263.4261

57. Alba-Alejandre I, Alba-Tercedor J, Vega FE. Anatomical study of the coffee berry borer (Hypothenemus hampei) using micro-computed tomography. Sci Rep (2019) 9:17150. doi: 10.1038/s41598-019-53537-z

58. Helmstaedter M, Briggman KL, Turaga SC, Jain V, Seung HS, Denk W. Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature (2013) 500:168–74. doi: 10.1038/nature12346

59. Januszewski M, Kornfeld J, Li PH, Pope A, Blakely T, Lindsey L, et al. High-precision automated reconstruction of neurons with flood-filling networks. Nat Methods (2018) 15:605–10. doi: 10.1038/s41592-018-0049-4

60. Nielsen MA. Neural networks and deep learning. Determination Press (2015). Available at: http://neuralnetworksanddeeplearning.com (Accessed January 12, 2023).

61. Goodfellow I, Bengio Y, Courville A. Deep learning. Cambridge, MA: MIT Press (2016).

62. Aggarwal CC. Neural networks and deep learning: A textbook. Cham, Switzerland: Springer (2018).

63. Chollet F. Deep learning with Python. Boston, MA: Manning Publications (2021).

64. Schmidhuber J. Deep learning in neural networks: An overview. Neural Netw (2015) 61:85–117. doi: 10.1016/j.neunet.2014.09.003

65. Xing F, Xie Y, Su H, Liu F, Yang L, Fuyong X, et al. Deep learning in microscopy image analysis: A survey. IEEE Trans Neural Networks Learn Syst (2018) 29:4550–68. doi: 10.1109/TNNLS.2017.2766168

66. Christin S, Hervet É, Lecomte N. Applications for deep learning in ecology. Methods Ecol Evol (2019) 10:1632–44. doi: 10.1111/2041-210X.13256

67. Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys (2019) 29:102–27. doi: 10.1016/j.zemedi.2018.11.002

68. Maier A, Syben C, Lasser T, Riess C. A gentle introduction to deep learning in medical image processing. Z Med Phys (2019) 29:86–101. doi: 10.1016/j.zemedi.2018.12.003

69. Høye TT, Ärje J, Bjerge K, Hansen OLP, Iosifidis A, Leese F, et al. Deep learning and computer vision will transform entomology. Proc Natl Acad Sci USA (2021) 118:1–10. doi: 10.1073/pnas.2002545117

70. Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, et al. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proc IEEE (2021) 109:820–38. doi: 10.1109/JPROC.2021.3054390

71. Hatcher WG, Yu W. A survey of deep learning: Platforms, applications and emerging research trends. IEEE Access (2018) 6:24411–32. doi: 10.1109/ACCESS.2018.2830661

72. Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, et al. Highly accurate protein structure prediction with AlphaFold. Nature (2021) 596:583–9. doi: 10.1038/s41586-021-03819-2

73. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature (2015) 521:436–44. doi: 10.1038/nature14539

74. Lenchik L, Heacock L, Weaver AA, Boutin RD, Cook TS, Itri J, et al. Automated segmentation of tissues using CT and MRI: A systematic review. Acad Radiol (2019) 26:1695–706. doi: 10.1016/j.acra.2019.07.006

75. Fukushima K. Artificial vision by multi-layered neural networks: Neocognitron and its advances. Neural Netw (2013) 37:103–19. doi: 10.1016/j.neunet.2012.09.016

76. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Bartlett P, Pereira F, Burges CJ, Bottou L, Weinberger KQ, editors. Advances in neural information processing systems: 26th annual conference on neural information processing systems 2012. Red Hook, NY: Curran (2013).

77. Sahiner B, Chan HP, Petrick N, Wei D, Helvie MA, Adler DD, et al. Classification of mass and normal breast tissue: A convolution neural network classifier with spatial domain and texture images. IEEE Trans Med Imaging (1996) 15:598–610. doi: 10.1109/42.538937

78. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). Los Alamitos, CA: IEEE (2014). p. 580–7.

79. Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J Big Data (2019) 6:1–48. doi: 10.1186/s40537-019-0197-0

80. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. arXiv.org (2015) 1–12. doi: 10.48550/arXiv.1512.03385 (Accessed January 2, 2023).

81. Wu X, Sahoo D, Hoi SC. Recent advances in deep learning for object detection. Neurocomputing (2020) 396:39–64. doi: 10.1016/j.neucom.2020.01.085

82. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, editors. Medical image computing and computer-assisted intervention – MICCAI 2015. Cham: Springer International Publishing (2015). p. 234–41.

83. Milletari F, Navab N, Ahmadi S-A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 fourth international conference on 3D vision: 3DV 2016. Piscataway, NJ: IEEE (2016). p. 565–71.

84. Greenspan H, van Ginneken B, Summers RM. Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique. IEEE Trans Med Imaging (2016) 35:1153–9. doi: 10.1109/TMI.2016.2553401

85. Hamwood J, Schmutz B, Collins MJ, Allenby MC, Alonso-Caneiro D. A deep learning method for automatic segmentation of the bony orbit in MRI and CT images. Sci Rep (2021) 11:13693. doi: 10.1038/s41598-021-93227-3

86. Xie W, Jacobs C, Charbonnier J-P, van Ginneken B. Relational modeling for robust and efficient pulmonary lobe segmentation in CT scans. IEEE Trans Med Imaging (2020) 39:2664–75. doi: 10.1109/TMI.2020.2995108

87. Lindgren Belal S, Sadik M, Kaboteh R, Enqvist O, Ulén J, Poulsen MH, et al. Deep learning for segmentation of 49 selected bones in CT scans: First step in automated PET/CT-based 3D quantification of skeletal metastases. Eur J Radiol (2019) 113:89–95. doi: 10.1016/j.ejrad.2019.01.028

88. Malimban J, Lathouwers D, Qian H, Verhaegen F, Wiedemann J, Brandenburg S, et al. Deep learning-based segmentation of the thorax in mouse micro-CT scans. Sci Rep (2022) 12:1–12. doi: 10.1038/s41598-022-05868-7

89. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat Methods (2021) 18:203–11. doi: 10.1038/s41592-020-01008-z

90. Mäder P, Boho D, Rzanny M, Seeland M, Wittich HC, Deggelmann A, et al. The Flora Incognita app – interactive plant species identification. Methods Ecol Evol (2021) 12:1335–42. doi: 10.1111/2041-210X.13611

91. Mathis A, Mamidanna P, Cury KM, Abe T, Murthy VN, Mathis MW, et al. DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci (2018) 21:1281–9. doi: 10.1038/s41593-018-0209-y

92. Nath T, Mathis A, Chen AC, Patel A, Bethge M, Mathis MW. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nat Protoc (2019) 14:2152–76. doi: 10.1038/s41596-019-0176-0

93. Wäldchen J, Mäder P. Machine learning for image based species identification. Methods Ecol Evol (2018) 9:2216–25. doi: 10.1111/2041-210X.13075

94. Mathis A, Schneider S, Lauer J, Mathis MW. A primer on motion capture with deep learning: Principles, pitfalls, and perspectives. Neuron (2020) 108:44–65. doi: 10.1016/j.neuron.2020.09.017

95. Mathis MW, Mathis A. Deep learning tools for the measurement of animal behavior in neuroscience. Curr Opin Neurobiol (2020) 60:1–11. doi: 10.1016/j.conb.2019.10.008

96. Kahl S, Wood CM, Eibl M, Klinck H. BirdNET: A deep learning solution for avian diversity monitoring. Ecol Inform (2021) 61:101236. doi: 10.1016/j.ecoinf.2021.101236

97. Stowell D. Computational bioacoustics with deep learning: A review and roadmap. PeerJ (2022) 10:e13152. doi: 10.7717/peerj.13152

98. Li R, Zeng T, Peng H, Ji S. Deep learning segmentation of optical microscopy images improves 3-d neuron reconstruction. IEEE Trans Med Imaging (2017) 36:1533–41. doi: 10.1109/TMI.2017.2679713

99. Scheffer LK, Xu CS, Januszewski M, Lu Z, Takemura S-Y, Hayworth KJ, et al. A connectome and analysis of the adult Drosophila central brain. eLife (2020) 9:1–83. doi: 10.7554/eLife.57443

100. Takemura S-Y, Aso Y, Hige T, Wong A, Lu Z, Xu CS, et al. A connectome of a learning and memory center in the adult Drosophila brain. eLife (2017) 6:1–43. doi: 10.7554/eLife.26975

101. Toulkeridou E, Gutierrez CE, Baum D, Doya K, Economo EP. Automated segmentation of insect anatomy from micro-CT images using deep learning. bioRxiv.org (2021) 1–16. doi: 10.1101/2021.05.29.446283

102. Lösel PD, Monchanin C, Lebrun R, Jayme A, Relle J, Devaud J-M, et al. Natural variability in bee brain size and symmetry revealed by micro-CT imaging and deep learning. bioRxiv.org (2022) 1–16. doi: 10.1101/2022.10.12.511944

103. Lösel PD, van de Kamp T, Jayme A, Ershov A, Faragó T, Pichler O, et al. Introducing Biomedisa as an open-source online platform for biomedical image segmentation. Nat Commun (2020) 11:1–14. doi: 10.1038/s41467-020-19303-w

104. Wang X, Sun B, Yasuyama K, Salvaterra PM. Biochemical analysis of proteins recognized by anti-HRP antibodies in Drosophila melanogaster: Identification and characterization of neuron specific and male specific glycoproteins. Insect Biochem Mol Biol (1994) 24:233–42. doi: 10.1016/0965-1748(94)90002-7

105. Loesel R, Weigel S, Bräunig P. A simple fluorescent double staining method for distinguishing neuronal from non-neuronal cells in the insect central nervous system. J Neurosci Methods (2006) 155:202–6. doi: 10.1016/j.jneumeth.2006.01.006

106. Yack JE. The structure and function of auditory chordotonal organs in insects. Microsc Res Tech (2004) 63:315–37. doi: 10.1002/jemt.20051

107. Koh K, Robert D. Bumblebee hairs as electric and air motion sensors: Theoretical analysis of an isolated hair. J R Soc Interface (2020) 17:20200146. doi: 10.1098/rsif.2020.0146

108. Robert D, Hoy RR. Auditory systems in insects. In: North G, Greenspan RJ, editors. Invertebrate neurobiology. Cold Spring Harbor, N.Y: Cold Spring Harbor Laboratory Press (2007). p. 155–84.

109. Montealegre-Z F, Robert D. Biomechanics of hearing in katydids. J Comp Physiol (2015) 201:5–18. doi: 10.1007/s00359-014-0976-1

Keywords: micro-CT (computed tomography), deep learning, artificial neural networks (ANN), deep learning-based image segmentation, 3D modelling

Citation: Jonsson T (2023) Micro-CT and deep learning: Modern techniques and applications in insect morphology and neuroscience. Front. Insect Sci. 3:1016277. doi: 10.3389/finsc.2023.1016277

Received: 10 August 2022; Accepted: 06 January 2023;
Published: 23 January 2023.

Edited by:

Daniel Robert, University of Bristol, United Kingdom

Reviewed by:

Natasha Mhatre, Western University, Canada
Emily Baird, Stockholm University, Sweden
Eirik Søvik, Volda University College, Norway

Copyright © 2023 Jonsson. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Thorin Jonsson, thorin.jonsson@uni-graz.at
