METHODS article

Front. Plant Sci., 03 May 2022

Sec. Plant Symbiotic Interactions

Volume 13 - 2022 | https://doi.org/10.3389/fpls.2022.881382

TAIM: Tool for Analyzing Root Images to Calculate the Infection Rate of Arbuscular Mycorrhizal Fungi

  • 1. Graduate School of Engineering, Osaka Prefecture University, Osaka, Japan

  • 2. Graduate School of Life and Environmental Sciences, Osaka Prefecture University, Osaka, Japan


Abstract

Arbuscular mycorrhizal fungi (AMF) infect plant roots and are hypothesized to improve plant growth. Axenic culture of AMF has recently become possible, and AMF are therefore expected to be used as a microbial fertilizer. To evaluate the usefulness of AMF as a microbial fertilizer, we need to investigate the relationship between the degree of root colonization by AMF and plant growth. The method popularly used to calculate the degree of root colonization, termed the magnified intersections method, is performed manually and is too labor-intensive to enable an extensive survey to be undertaken. Therefore, we automated the magnified intersections method by developing an application named “Tool for Analyzing root images to calculate the Infection rate of arbuscular Mycorrhizal fungi” (TAIM). TAIM is a web-based application that calculates the degree of AMF colonization from images using automated computer vision and pattern recognition techniques. Experimental results showed that TAIM correctly detected sampling areas for calculation of the degree of infection and classified the sampling areas with 87.7% accuracy. TAIM is publicly accessible at http://taim.imlab.jp/.

1. Introduction

Arbuscular mycorrhizal fungi (AMF) infect plant roots and are considered to improve plant growth (Treseder, 2013). Recent research (Kameoka et al., 2019) has succeeded in the axenic culture of AMF. Therefore, AMF may be mass-produced in the future and used as a microbial fertilizer. To evaluate the usefulness of AMF as a microbial fertilizer, we need to investigate the relationship between the degree of root colonization by AMF and plant growth.

The most commonly studied effect of AMF infection on plant roots is the absorption of phosphorus. However, while some studies have shown that inoculation with mycorrhizal fungi promotes phosphorus absorption (Van Der Heijden et al., 1998; Smith and Read, 2010; Richardson et al., 2011; Yang et al., 2012), others have ruled it out (Smith et al., 2004). As the results are still controversial, further research on the relationship between phosphorus and AMF is needed. To promote this research, objective evaluation of experiments conducted by different observers under different conditions is indispensable. Therefore, calculating a reliable AMF colonization degree is essential.

In general, the degree of AMF colonization is calculated using the magnified intersections (MI) method (McGonigle et al., 1990). All steps of the MI method are performed manually, and the method is thus extremely labor-intensive. Moreover, given that the colonization degree is assessed manually, the decision criteria and results vary among observers. For these reasons, conducting a comprehensive survey with this method is difficult, and the relationship between AMF colonization and plant growth remains unclear. Clarifying this relationship requires a fixed criterion for, and automation of, the estimation of AMF colonization degree.

In this article, we propose a method for automating the MI method for estimation of the AMF infection degree. Based on the proposed method, we developed an application system named “Tool for Analyzing root images to calculate the Infection rate of arbuscular Mycorrhizal fungi” (TAIM) (Muta et al., 2020). TAIM is a web-based application that automatically calculates the AMF colonization degree from 40x microscopic images prepared for an AMF infection rate measurement method. Using a machine-learning-based classifier, TAIM calculates the AMF infection rate objectively, unlike manual calculation. Moreover, TAIM has two functions that incorporate user input to boost its estimation accuracy. One allows users to upload their own data, which increases the training data for TAIM. The other allows users to correct erroneous estimation results, which improves the quality of the training data. By retraining TAIM on the updated training data, its estimation accuracy can be boosted. Experiments to evaluate the performance of the proposed method demonstrated that the sampling areas for calculation of the infection rate were detected correctly and the degree of infection was determined with 87.7% accuracy.

2. Related Work

As microscopic images are captured under stable lighting, the images are especially suitable for image processing. Therefore, many processing methods have been proposed.

The most popular target for processing of microscopic images is the cell. In many cases, images include too many cells for manual observation, so image processing methods for cell image analysis have been proposed as an alternative to human observation. To date, procedures for detection (Al-Kofahi et al., 2010; Buggenthin et al., 2013; Schmidt et al., 2018; Weigert et al., 2020), tracking (Debeir et al., 2005; Chen et al., 2006; Dzyubachyk et al., 2010), and cell counting (Lempitsky and Zisserman, 2010) have been proposed. Given the stable lighting used for microscopic images, many methods proved relatively accurate even before the emergence of deep neural networks (DNNs). Since the advent of DNNs, the accuracy of detection and tracking has drastically improved (Xie et al., 2018; Korfhage et al., 2020; Kushwaha et al., 2020; Nishimura et al., 2020; Liu et al., 2021). In addition, methods for more challenging tasks, such as mitosis detection (Su et al., 2017), three-dimensional cell segmentation (Weigert et al., 2020), and segmentation of nuclei (Xing et al., 2019) and chromosomes (Sharma et al., 2017), have been published.

Image processing is also used for microscopic medical images. It is practical to use microscopic images to diagnose a disease caused by abnormal cell growth, such as cancer. Many methods have been developed to detect cancer (Yu et al., 2016; Vu et al., 2019) and diagnose cancer from microscopic images (Song et al., 2017; Huttunen et al., 2018; Kurmi et al., 2020). Microscopic images are also helpful to detect infectious diseases. Malaria is an infectious disease for which image processing is the most widely used detection method, and various methods have been proposed for its detection and diagnosis (Ave et al., 2017; Muthu and Angeline Kirubha, 2020). In addition, virus detection methods (Devan et al., 2019; Xiao et al., 2021) have been proposed. For medical applications other than disease diagnosis, methods such as blood cell identification have been developed (Razzak and Naz, 2017).

A typical example of microscopic image analysis in plants is the analysis of pollen. As pollen grains are small, microscopic observation is essential. For example, pollen detection and recognition methods from air samples have been proposed (Rodríguez-Damián et al., 2006; Landsmeer et al., 2009). Recently, a method applying DNNs has been developed (Gallardo-Caballero et al., 2019). Pollen is also an object of study in paleontology as well as in botany. Pollen analysis, or palynology, involves the study of pollen grains in fossil-bearing matrices or sediments for consideration of the history of plants and climatic changes, for example. As pollen classification requires a broad range of knowledge and is labor-intensive, methods for automated pollen classification (Battiato et al., 2020; Bourel et al., 2020; Romero et al., 2020) have been proposed to replace manual observation.

Recently, Evangelisti et al. (2021) developed AMFinder to analyze plant roots using deep learning-based image processing. This tool can detect AMF and visualize the degree of AMF colonization. These authors' motivation and methodology were similar to our own. The main differences between AMFinder and TAIM are as follows. TAIM is based on the MI method (McGonigle et al., 1990), which uses the intersections of grid lines to quantify AMF colonization of roots, whereas AMFinder divides an image into squares of a user-defined size. TAIM is designed to be a web-based application accessible to all users who can use a web browser, whereas AMFinder is a standalone application consisting of a command-line tool and a graphical interface that requires installation on a computer equipped with graphics processing units (GPUs) and users must set up the environment themselves.

3. Materials and Methods

3.1. Materials

The roots used for the dataset were from soybean. The soybean plants were grown in a glasshouse at an average temperature of 30.9°C in the experimental field of Osaka Prefecture University. Each plant was grown in a pot of sterilized soil and was subsequently inoculated with AMF (Rhizophagus irregularis MAFF520059); therefore, AMF were the only fungi present in the soil of the pots. Table 1 lists the cultivar, place of origin, and number of days of growth for the soybeans.

Table 1

Genotypes      Place of origin    Growing days
M 581          India              48
OUDU           Korea              36
JAVA 7         Indonesia          52
U 1290-1       Nepal              54
KARASUMAME     Taiwan             44
KADI BHATTO    Nepal              59

Details of the soybean root dataset used in the experiment.

The plants were removed from the pots and the roots were washed with water. The roots were softened using 10% potassium hydroxide to aid decolorization. We used 0.5% hydrogen peroxide aqueous solution to clear the softened roots and hydrochloric acid to neutralize them. The roots were then stained with 0.05% trypan blue, which stains the AMF blue. Each stained root sample was placed on a glass microscope slide with 0.25-mm-wide grid lines at 1.00-mm intervals; the gridded slide makes observation more efficient.

The prepared slides were observed under a 40x objective with an optical microscope (OLYMPUS BH-2) and images were captured with a digital camera (OLYMPUS PEN E-PL1). The resolution of the images was 4,032 × 3,024 pixels.

3.2. AMF Infection Rate Measurement Method

We automate a method to measure the infection rate of AMF based on the MI method (McGonigle et al., 1990), which is a popular and accurate method for estimation of AMF colonization degree (Sun and Tang, 2012). The MI method is an improved version of the grid-line intersect (GLI) method (Giovannetti and Mosse, 1980). The GLI method uses a dissecting microscope and measures the colonization degree as the ratio of colonized sampling areas. The MI method uses a light microscope, which provides higher resolution and therefore MI can measure the colonization degree more accurately.

The method we automated uses roots prepared as described in Section 3.1. An observer categorizes the roots at the intersections of the grid lines. The MI method categorizes the sampling areas into four categories: negative, arbuscules, vesicles, and hyphae only. While the proposed method and the MI method essentially perform the same procedure, there is one difference in the categorization of sampling areas: the MI method excludes sampling areas that do not include roots in advance, whereas the proposed method does not. Therefore, we added a new class, “no root,” for intersections that do not include roots, enabling the proposed method to classify the root-less sampling areas.

In addition, in our experiments, we integrated the “hyphae only” class into the “arbuscules” class to reflect the experimental environment. In our experiments, we sterilized the soil and then inoculated it with AMF. To avoid introducing other microorganisms from unsterilized soil into the pots, the plants were placed in a separate area from unsterilized potted plants, and care was taken to prevent soil contamination during watering. Hence, all the hyphae in the soil originate from AMF. In summary, we added a new class named “no root” and treated “hyphae only” as “arbuscules”; therefore, in our experiments, we classified the sampling areas into four categories: “vesicles,” “arbuscules,” “no root,” and “negative.” The original MI method uses a magnification of 200x, but this paper uses 40x images because 40x was sufficient for classification in this study. Figure 1 shows a sample of each class.

Figure 1

After classifying the intersections, the proportions of arbuscular colonization (AC) and vesicular colonization (VC) are calculated as

AC = Na / (Ns − Nnr),  VC = Nv / (Ns − Nnr),    (1)

where Na, Nv, Nn, and Nnr are the numbers of intersections categorized into arbuscules, vesicles, negative, and no root, respectively, and Ns is the total number of intersections.
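As a concrete illustration of Equation (1), the calculation can be sketched in a few lines of Python; the function name and percentage scaling are our own choices for illustration, not part of the original tool.

```python
def colonization_degrees(n_arb, n_ves, n_neg, n_noroot):
    """Arbuscular (AC) and vesicular (VC) colonization as percentages.

    Following Equation (1): colonized intersections are divided by the
    number of intersections that actually contain roots (Ns - Nnr).
    """
    n_total = n_arb + n_ves + n_neg + n_noroot      # Ns
    n_with_root = n_total - n_noroot                # exclude "no root" areas
    ac = 100.0 * n_arb / n_with_root
    vc = 100.0 * n_ves / n_with_root
    return ac, vc
```

For example, with 20 arbuscule, 10 vesicle, 60 negative, and 10 no-root intersections, AC is roughly 22.2% and VC roughly 11.1%.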

3.3. Software Design

We propose TAIM, a web-based application system to automate calculation of the degree of AMF colonization. In this section, we explain the system architecture of TAIM (Section 3.3.1) and the method by which TAIM automatically calculates the AMF infection rate (Section 3.3.2). We also explain the dataset used for constructing TAIM (Section 3.3.3) and the functions of TAIM (Section 3.3.4).

3.3.1. Overview of TAIM

Figure 2 presents an overview of TAIM. TAIM consists of client and server systems. The client system runs on a web platform, receives images from users, and shows the calculation results. We use a web platform because it is independent of the OS environment, so users can run TAIM on any device with a web browser. The server receives images from the client, calculates the AMF colonization degree, and transmits the results to the user. The server calculates the AMF colonization degree by detecting the sampling areas using computer vision techniques and categorizing them as colonized or not using machine-learning techniques. We use HTML5, CSS3, and JavaScript for client development and Django, a web application framework implemented in Python, for server development.

Figure 2

3.3.2. Calculation of the AMF Colonization Degree

In this section, we explain the method for calculating the AMF colonization degree in detail. TAIM automates the method described in Section 3.2. The server part of Figure 2 shows an overview of the procedure for calculating the AMF colonization degree. The inputs are images captured from a microscope slide prepared for the MI method (Figure 2A). TAIM detects intersections of the grid lines as sampling areas for calculating the colonization degree; examples of the intersections are shown as green rectangles in Figure 2B. TAIM then categorizes the sampling areas into four categories (Figure 1). Finally, TAIM calculates the AMF colonization degrees using Equation (1). Note that the four categories recognized by TAIM differ from the four categories of the MI method; details are provided in Section 3.3.2.2.

3.3.2.1. Intersection Detection

For intersection detection, we use a simple computer vision technique, namely edge detection using projection profiles. The overall intersection detection process is shown in Figure 4. The orange grid lines in the image are almost orthogonal (Figure 4A). If the lines are oriented in the horizontal and vertical directions of the image, their edges appear as horizontal and vertical lines and are easily detected. Therefore, before intersection detection, we rotate the image so that the orange lines are oriented in the horizontal and vertical directions of the image. We term this process image rotation normalization.

Figure 3 shows the process of image rotation normalization. To execute the normalization, we estimate the angle of rotation using a histogram of the gradient directions of the input image. First, we apply a derivative filter, i.e., the Sobel filter, horizontally and vertically to the input image to calculate the image gradients. We then calculate the gradient direction at each pixel and construct a histogram of the gradient directions. As the orange lines are orthogonal, the histogram ideally has four peaks spaced 90° apart. Therefore, the image is rotated about its center by the minimum angle that aligns the peaks with the horizontal and vertical directions. The image rotation normalization is applied to all input images.
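The rotation normalization step can be sketched as follows. This is a simplified stand-in, not the TAIM implementation: it uses NumPy's `np.gradient` in place of the Sobel filter and folds the direction histogram into [0°, 90°) to exploit the grid's orthogonality.

```python
import numpy as np

def estimate_rotation_angle(img, bins=180):
    """Estimate the angle (in degrees) by which the image should be
    rotated so that the grid lines become axis-aligned.

    Sketch of the normalization step: build a gradient-direction
    histogram weighted by gradient magnitude; because the grid lines
    are orthogonal, directions are folded into [0, 90) so the four
    ideal peaks collapse into one.
    """
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    direction = np.degrees(np.arctan2(gy, gx)) % 90.0
    hist, edges = np.histogram(direction, bins=bins, range=(0.0, 90.0),
                               weights=magnitude)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peak = centers[np.argmax(hist)]
    # rotate by the minimum angle that aligns the peak with an image axis
    return -peak if peak <= 45.0 else 90.0 - peak
```

The returned angle would then be passed to an image-rotation routine (e.g., `scipy.ndimage.rotate`) to rotate the image about its center.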

Figure 3

As shown in Figure 4, the intersections of the grid lines are detected on the normalized images in the following manner. We begin by applying the Sobel filter horizontally and vertically to the normalized image to calculate the gradient of each pixel (Figure 4B). We then detect the edges of the grid lines by detecting the peaks of the projection profiles of the gradients, shown in Figure 4C. A projection profile is the sum of pixel values along an axis. Given that the grid lines in the normalized image are oriented in the horizontal and vertical directions, the edges of the lines appear as peaks of the horizontal and vertical projection profiles. Peak detection on the projection profiles yields the edges of the grid lines, denoted in green in Figure 4D. As the width of the grid lines is narrower than the distance between the lines (Figure 4A), we regard a pair of green lines as a single grid line if the two lines are close together. Finally, the crossings of the grid lines are detected as intersections (Figure 4E).
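Under the same simplifying assumptions (NumPy gradients standing in for the Sobel filter), the projection-profile step can be sketched as below; `scipy.signal.find_peaks` plays the role of the peak-detection code, and the `distance=50` spacing matches the setting reported in Section 4.1.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_grid_intersections(norm_img, distance=50):
    """Detect grid-line crossings in a rotation-normalized image.

    Sketch: line edges appear as peaks in the horizontal and vertical
    projection profiles of the gradient magnitude; every crossing of a
    detected vertical and horizontal edge is reported as an intersection.
    """
    gy, gx = np.gradient(norm_img.astype(float))
    col_profile = np.abs(gx).sum(axis=0)   # vertical line edges
    row_profile = np.abs(gy).sum(axis=1)   # horizontal line edges
    xs, _ = find_peaks(col_profile, distance=distance)
    ys, _ = find_peaks(row_profile, distance=distance)
    return [(x, y) for x in xs for y in ys]
```

A fuller implementation would additionally pair nearby edge peaks into single grid lines, as described above, since each line contributes an edge on both of its sides.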

Figure 4

3.3.2.2. Categorization of the Sampling Areas

We categorize the sampling areas (i.e., the intersections of grid lines) according to the degree of colonization using a pattern recognition technique. An overview of the categorization process is shown in Figure 5. We extract a feature vector from the input image (i.e., the sampling area) and categorize it into one of four classes. For feature extraction, we employ convolutional neural networks (CNNs), because CNN-based classifiers have previously performed well in image classification tasks (Russakovsky et al., 2015; Krizhevsky et al., 2017) and good classification accuracy is therefore expected from CNN features. We used CNN models pretrained on ImageNet (Deng et al., 2009), which consists of more than a million images in 1,000 categories. After connecting a fully connected (FC) layer to the end of the pretrained CNN model, we fine-tuned the model on the task of categorizing the training images into the four classes.

Figure 5

We used two kinds of classifiers in the categorization process: the FC layer and a support vector machine (SVM). In the former case, we used the FC layer attached for fine-tuning. In the latter case, we used the CNN model (without the FC layer) for feature extraction and the SVM for classification; during training, the SVM was trained on the features extracted by the CNN model. Note that the SVM is a supervised learning model that shows good classification accuracy in biological image recognition (Noble, 2006). As a maximum-margin classifier, the SVM achieves high recognition accuracy, and it can cope with non-linear classification problems by introducing non-linear kernels.
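The SVM branch of the pipeline can be sketched with scikit-learn, assuming the CNN features have already been extracted (random vectors stand in for them here). The RBF-kernel parameters C = 1.0 and gamma = 0.25 follow the values reported in Section 4.2; everything else is illustrative.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for CNN features: in TAIM these would come from the
# pretrained backbone with the FC layer removed.
features = rng.normal(size=(200, 512))
labels = rng.integers(0, 4, size=200)      # four colonization classes

# RBF-kernel SVM with the cost/kernel parameters from Section 4.2
clf = SVC(kernel="rbf", C=1.0, gamma=0.25)
clf.fit(features, labels)
predictions = clf.predict(features)
```

In practice the feature matrix would have one row per detected sampling area, and `predictions` would feed directly into the counts of Equation (1).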

The colonization degree is calculated as AC and VC in Equation (1) with at least 200 sampling areas.

3.3.3. Dataset

We constructed an original dataset to build TAIM and conduct experiments. The dataset consisted of 896 microscope slide images prepared as described in Section 3.1 (Figure 2A). We reduced the images to 1,008 × 756 pixels for efficient detection and categorization of intersections. One of the authors manually classified 5,002 intersection areas into the four classes described in Section 3.2. We regarded the classification results as the ground truth (correct answers) and used them to train and evaluate the deep neural networks and SVM classifiers. Each intersection area was approximately 150 × 150 pixels.

3.3.4. Functions

In addition to calculating the AMF colonization degree, TAIM has two other functions: modification of classification results by users and registration of new data. These two functions are implemented to improve the classification accuracy and the objectivity of the AMF colonization estimation.

When viewing the classification results (Figure 2C), users can correct erroneous classification results with the modification function. In pattern recognition, classifiers sometimes make mistakes depending on the lighting and on changes in appearance, because of the limitations of the training data; this is also true for the classifiers of TAIM. Therefore, we implemented the modification function to fix incorrect classification results.

TAIM also has a function to register new data, which is expected to improve the classification accuracy. Currently, the classifiers of TAIM are trained only on our dataset, which consists of soybean roots grown in the field and labeled by us. Therefore, the classifiers are expected to perform well on our data and on data with similar properties, but are under-trained for data with different properties. For the classifiers to be robust, a diverse dataset is essential, because root data from diverse locations are expected to help the classifiers of TAIM acquire a strongly robust recognition capability. Hence, we implemented a function that allows users to register new data, which are then used for additional training. Adding new data through the functions of TAIM, which store the data and modify the categorization results, is also expected to improve the objectivity of the categorization of colonization degree. The initial TAIM classifiers for colonization categorization are trained on data labeled by one of the authors; that is, the trained classifiers reflect the criteria of a single observer. Modified categorization results can reflect the criteria of other observers. Therefore, as the number of users involved in the labeling process increases, the classifiers come to reflect the criteria of multiple observers and thus become more objective.

4. Results

We conducted experiments to evaluate the detection and classification performance of TAIM using the dataset we created. This section presents details on the dataset as well as the detection and classification results.

4.1. Evaluation of Intersection Detection

We evaluated the intersection detection performance using the soybean root dataset. An overview of the proposed intersection detection method is presented in Section 3.3.2.1; here, we describe its implementation in more detail. We used a 3 × 3 Sobel filter as the derivative filter to calculate the gradient magnitude and direction of an image. From the image gradient directions, we estimated the rotation angle for normalization and normalized the image as described in Section 3.3.2.1. The projection profiles used for line edge detection were calculated from the gradient images. For peak detection, we adopted publicly available code written in Python. We set the distance parameter to 50 and used the default values for the other parameters.

For evaluation of detection performance, we adopted the intersection over union (IoU) score. The IoU is a measure of how accurately a method detects the areas of target objects: it is calculated as the ratio of the overlapping area (intersection) between the ground truth and the predicted area to their union (Figure 6A). Therefore, the larger the IoU score, the more accurately the sampling areas are detected. We used the 896 images of the soybean root dataset to calculate the IoU scores. The mean IoU score of TAIM was 0.86. If we regard an IoU score above 0.75 as a successful detection, the detection precision of TAIM was 0.95; TAIM thus achieved a satisfactory detection performance. Based on the distribution of the IoU scores (Figure 6B), most sampling areas were detected correctly, with IoU scores >0.75. However, detection of a few sampling areas failed, as indicated by IoU scores close to zero.
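The IoU criterion itself is straightforward; a minimal sketch for axis-aligned boxes (our own helper for illustration, not TAIM code) is:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap extents are clamped at zero when the boxes are disjoint
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0
```

A detection then counts as successful when its IoU against the ground-truth box exceeds the 0.75 threshold.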

Figure 6

To clarify the reason for the detection failures, we compared examples of successful and failed detection. Supplementary Figure 1 shows examples from the detection experiments: the left column is an example of successful detection, and the central and right columns are examples of failed detection. In the detection results (the second row of Supplementary Figure 1), blue rectangles represent the ground truth and green rectangles the detection results; the IoU score of each detected area is shown in white text. In the peak detection results (the fifth and sixth rows of Supplementary Figure 1), detected peaks are indicated by red and green circles. In the left column, the edges of the orange lines were clear and thus easily detected by peak detection. In contrast, in the central column, the edges of the orange lines were jagged, and as a result some sampling areas failed to be detected. In the right column, the input image contained bubbles, although the line edges were clear; the bubble edges were incorrectly detected as peaks because the intensity gradient at the bubble edges was large.

4.2. Evaluation of Classification Accuracy

We conducted experiments to evaluate the classification accuracy of the proposed method. We used the cropped sampling areas from the soybean root dataset. We applied intersection detection, as described in Section 3.3.2.1, to the dataset and used the areas for which the IoU score was >0.5. The 5,002 cropped sampling images were annotated manually. The images were cropped to squares and normalized to 224 × 224 pixels using bilinear interpolation. The CNN models used for feature extraction were AlexNet (Krizhevsky et al., 2012), VGG-19 (Simonyan and Zisserman, 2015), and ResNet-18 (He et al., 2016). These models were pretrained on ImageNet and fine-tuned on the cropped sampling areas. The number of epochs, batch size, and initial learning rate were set to 20, 32, and 10e-4, respectively, and the learning rate was halved every five epochs. For optimization, we used Adam with a weight decay of 1e-5. For classification, we used the SVM with a radial basis function (RBF) kernel. The cost parameter of the SVM and the parameter of the RBF kernel were set to 1.0 and 0.25, respectively. The output from the network was compared with the annotation; if they matched, the output was considered correct, and otherwise incorrect. The percentage of correct outputs was taken as the classification accuracy. We used five-fold cross-validation to evaluate the classification accuracy: we divided the cropped image samples into five subsamples, fine-tuned the CNN models and trained the classifiers with four subsamples, and evaluated the remaining subsample. We repeated the procedure five times so that every subsample was evaluated once, and the accuracies were averaged. Regardless of the combination of feature extractor and classifier, the classification accuracy was >84% (Table 2). The highest classification accuracy (87.7%) was achieved using VGG-19 as the feature extractor and the FC layer as the classifier.
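The five-fold protocol can be sketched with scikit-learn's `cross_val_score`, again on stand-in features; note that in the actual experiments the CNN is fine-tuned separately for each fold, which this sketch omits.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(250, 64))            # stand-in CNN features
y = rng.integers(0, 4, size=250)          # four-class labels

# Five-fold cross-validation with the SVM settings from the experiment
scores = cross_val_score(SVC(kernel="rbf", C=1.0, gamma=0.25), X, y, cv=5)
mean_accuracy = scores.mean()
```

Each of the five scores is the accuracy on one held-out subsample, and their mean corresponds to the averaged accuracy reported in Table 2.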

Table 2

Feature extractor (CNN)    Classifier    Accuracy (%)
AlexNet                    FC            84.1
VGG-19                     FC            87.7
ResNet-18                  FC            84.6
AlexNet                    SVM           84.0
VGG-19                     SVM           86.9
ResNet-18                  SVM           84.9

Results of the classification experiment.

To clarify which classes were misrecognized, we generated a confusion matrix for the case in which VGG-19 and the FC layer were used for feature extraction and classification. TAIM tended to confuse class 0 (vesicles) with class 1 (arbuscules) and class 2 (negative) with class 1 (Figure 7). There are two possible reasons for the misclassification. First, the appearance of the respective classes is similar: classes 0 (vesicles) and 1 (arbuscules) are similar in appearance, as are classes 2 (negative) and 1 (Figure 1). Such similarity in appearance may have caused misclassification. The second possible reason is the imbalance of the training data. The numbers of training samples for the four classes were 423, 1,351, 628, and 2,600, respectively. As classes with fewer training samples tend to be treated as less important, samples belonging to classes 0 and 2 were misclassified more frequently than those of the other classes.

Figure 7

We generated a class activation map (CAM) (Zhou et al., 2016) of the fine-tuned ResNet-18 to visualize how the CNN classified the samples. The CAM visualizes, as a heat map, the importance of image regions for classification; in other words, regions important for classification into a class are considered to contain class-specific features. Figure 8 shows three original images and the corresponding CAMs; blue regions are more important and red regions less important. Figure 8A is an example of an image that was correctly classified as class 1. The CAM of this image shows that classification was based on the hyphae in the lower right corner. Figures 8B,C are examples of misclassified images: B was class 1 but classified as class 0, and C was class 1 but classified as class 2. The CAM of Figure 8B reveals that only hyphae were located in the class-specific region; the appearance of the original image in that region was similar to that of class 0, and this similarity led to the misclassification as the vesicles class. In the original image of Figure 8C, hyphae were located at the top but only close to the edge of the image; hence, this sample is difficult to classify correctly. In the CAM, a class-specific area exists in the top right corner, which may have contributed to the misclassification.

Figure 8

As TAIM has a function that collects data from users, the amount of training data can increase while TAIM is running. To clarify the effect of increasing the training data, we increased the number of images using augmentation techniques and observed the change in accuracy. We reflected images horizontally and vertically, cropped them randomly, and resized them to 224 × 224 pixels. The total dataset was thereby increased to 300,012 images. The images were divided into training, validation, and test samples in the ratio of 3:1:1. We used the same feature extractors and classifiers as in Table 2. We trained the networks while varying the amount of training data from 1% to 100% of the total, using the same hyperparameter settings as in the previous classification experiment. We used the validation data to monitor training accuracy and to choose the parameters of the SVM, and the test data to evaluate the classification accuracy. Figure 9 shows the relationship between the amount of training data and the classification accuracy. For all combinations of networks and classifiers, the accuracy tended to increase with more training data. Moreover, because the accuracy was still increasing when 100% of the data were used, further improvement is expected with more training data. Therefore, the classification accuracy of TAIM is expected to improve as the amount of data used for training increases.
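The augmentation described above (horizontal and vertical reflection, random crop, resize) can be sketched as follows; nearest-neighbour resizing is used here for brevity, whereas the experiments used bilinear interpolation, and the 90% crop ratio is an illustrative choice.

```python
import numpy as np

def augment(img, rng, out_size=224):
    """One random augmentation pass: flips, random crop, resize."""
    if rng.random() < 0.5:                 # horizontal reflection
        img = img[:, ::-1]
    if rng.random() < 0.5:                 # vertical reflection
        img = img[::-1, :]
    h, w = img.shape[:2]
    crop_h, crop_w = int(h * 0.9), int(w * 0.9)   # illustrative crop ratio
    top = rng.integers(0, h - crop_h + 1)
    left = rng.integers(0, w - crop_w + 1)
    img = img[top:top + crop_h, left:left + crop_w]
    # nearest-neighbour resize via index mapping
    rows = np.arange(out_size) * crop_h // out_size
    cols = np.arange(out_size) * crop_w // out_size
    return img[np.ix_(rows, cols)]
```

Applying several such passes to each source image multiplies the dataset size, which is how the augmented set of 300,012 images was obtained.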

Figure 9

5. Discussion

In the experiments, we evaluated TAIM on soybean and one AMF. Further evaluation with other plants and AMFs should be conducted to demonstrate the usefulness of the proposed system.

AMFinder (Evangelisti et al., 2021), introduced in Section 2, is a method that shares a similar motivation with TAIM. The most significant difference between AMFinder and TAIM is the presence or absence of grids in the input microscope images: TAIM uses microscope slide images with grids, whereas AMFinder uses images without them. Regarding this difference, we make two points from a technical perspective. First, the classification algorithms of TAIM and AMFinder are not inseparable from the rest of the systems but can be plugged in; when a better algorithm is developed in the future, the classification performance of either method can be improved by plugging it in. Second, handling slide images without grids is even easier than handling images with grids. Images without grids can be treated as follows: training data can be created by randomly cropping the original slide images and annotating the crops, and a network can then be trained to classify the cropped images into the categories of AMF infection. Moreover, it is possible to identify which part of an image is infected even without grids by using a sliding-window approach, which crops images by sliding a fixed-size rectangle over the image and classifying each crop. In future work, we would like to extend TAIM so that infection rates can be calculated regardless of the presence or absence of grids in slide images.

Although TAIM can potentially distinguish hyphae and arbuscules following the MI method, annotation effort and data limitations prevented us from performing an experiment that distinguishes them. When appropriate data become available, such an experiment will be possible. Similarly, the difference in the number of classes distinguished by TAIM and AMFinder comes from the data used in the experiments: TAIM classified infected roots into two classes (Arbuscules and Vesicles), whereas AMFinder classified them into four (Arbuscules, Vesicles, Hyphopodia, and Intraradical hyphae). This difference reflects the experimental data rather than the superiority or inferiority of the classifiers themselves. Therefore, if we can collect appropriate training data with the same class labels as those of AMFinder, TAIM should be able to identify infected roots in the same detail.

Since annotation is done manually, errors are inevitable, and these errors affect the classification accuracy. For example, apart from plant science, consider the well-known MNIST dataset of handwritten digit images used in the computer vision field. The test set of MNIST, which consists of 10,000 handwritten digit images, is estimated to contain wrong labels at a rate of 0.15% (Northcutt et al., 2021). Our dataset consists of fewer images and fewer classes than MNIST, but its annotation task is harder than that of MNIST, so it is difficult to estimate the number of annotation errors in our dataset. Even so, such errors would not seriously affect the classification accuracy insofar as they occur randomly. In fact, two types of error should be considered. One is the random annotation error due to human mistakes, discussed above. The other is a bias that depends on the annotators, caused by their differing criteria; its effect is more critical when fewer annotators are involved. Since TAIM is currently trained on data annotated by one of the authors, the dataset is expected to be biased. However, as the number of annotators increases, this bias is expected to decrease. TAIM provides a function that reduces bias, because it can be used by multiple users and each user can upload their own data, as described in Section 3.3.4. Therefore, as the number of users increases, TAIM is expected to provide better classification results than annotation by a single person.

6. Conclusion

This article describes a web-based application called TAIM, which automatically calculates the degree of AMF colonization from microscopic images. As AMF is now available for axenic culture, it is expected to be used as a microbial fertilizer. Evaluating the effectiveness of AMF as a microbial fertilizer requires measuring the degree of colonization, and one impediment to such research is that this measurement is still conducted manually. Therefore, we developed TAIM to automate calculation of the extent of AMF colonization. Because TAIM is a web-based application, it can be used via a web browser and does not require users to set up a computing environment. TAIM also has a function to collect new training data from users and retrain the colonization classifier; by combining data annotated by multiple observers, this function will contribute to reducing the variation in decision criteria among observers. We evaluated the detection and classification accuracy of TAIM with an experimental soybean root dataset comprising 5,002 cropped intersection areas. The experimental results showed that TAIM detected the intersection regions with a mean IoU score of 0.86; if an IoU score of 0.75 or higher is considered successful detection, TAIM detected the intersection regions with 95% accuracy. TAIM classified the detected regions into four classes with 87.7% accuracy, and the classification accuracy improved as the number of training samples increased. Therefore, the estimation accuracy of the AMF colonization degree is predicted to improve through the data collection function. TAIM is expected to contribute to an improved understanding of the effect of AMF as a microbial fertilizer.
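The IoU (intersection over union) score used to judge detection success above is a standard overlap measure between a detected region and its ground-truth annotation. A minimal sketch for axis-aligned boxes follows; the example boxes are illustrative, not taken from the evaluation data.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).

    A detection counts as successful in the evaluation above when
    IoU >= 0.75 against the ground-truth region.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))  # overlap width
    ih = max(0, min(ay2, by2) - max(ay1, by1))  # overlap height
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# A detection shifted slightly from the ground truth still clears 0.75:
print(iou((0, 0, 100, 100), (10, 0, 100, 100)))  # 0.9
```

A mean IoU of 0.86 over all detected intersection regions thus indicates that most detections overlap their annotations substantially.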

Funding

This research was funded by the Osaka Prefecture University Female Researcher Support Project.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Statements

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

YU conceptualized the research, managed the project, and sourced funding. KM, YU, and MI developed the methodology. KM and YU developed the software and wrote the original draft of the manuscript. KM performed the validation, formal analysis, and investigation. AM and KK prepared the materials for the research. ST and KM were involved in data curation. MI and AM reviewed the manuscript. All authors have read and agreed to the final version of the manuscript.

Acknowledgments

We thank Robert McKenzie, Ph.D., from Liwen Bianji (Edanz) (www.liwenbianji.cn/) for editing the English text of a draft of this manuscript.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpls.2022.881382/full#supplementary-material

References

Al-Kofahi, Y., Lassoued, W., Lee, W., and Roysam, B. (2010). Improved automatic detection and segmentation of cell nuclei in histopathology images. IEEE Trans. Biomed. Eng. 57, 841–852. doi: 10.1109/TBME.2009.2035102

Hung, J., and Carpenter, A. (2017). "Applying faster R-CNN for object detection on malaria images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (Honolulu, HI), 56–61.

Battiato, S., Ortis, A., Trenta, F., Ascari, L., Politi, M., and Siniscalco, C. (2020). "Detection and classification of pollen grain microscope images," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (Virtual), 4220–4227. doi: 10.1109/CVPRW50498.2020.00498

Bourel, B., Marchant, R., de Garidel-Thoron, T., Tetard, M., Barboni, D., Gally, Y., et al. (2020). Automated recognition by multiple convolutional neural networks of modern, fossil, intact and damaged pollen grains. Comput. Geosci. 140:104498. doi: 10.1016/j.cageo.2020.104498

Buggenthin, F., Marr, C., Schwarzfischer, M., Hoppe, P. S., Hilsenbeck, O., Schroeder, T., et al. (2013). An automatic method for robust and fast cell detection in bright field images from high-throughput microscopy. BMC Bioinformatics 14:297. doi: 10.1186/1471-2105-14-297

Chen, X., Zhou, X., and Wong, S. T. (2006). Automated segmentation, classification, and tracking of cancer cell nuclei in time-lapse microscopy. IEEE Trans. Biomed. Eng. 53, 762–766. doi: 10.1109/TBME.2006.870201

Debeir, O., Van Ham, P., Kiss, R., and Decaestecker, C. (2005). Tracking of migrating cells under phase-contrast video microscopy with combined mean-shift processes. IEEE Trans. Med. Imaging 24, 697–711. doi: 10.1109/TMI.2005.846851

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). "ImageNet: a large-scale hierarchical image database," in Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition (Miami), 248–255. doi: 10.1109/CVPR.2009.5206848

Devan, K. S., Walther, P., von Einem, J., Ropinski, T., Kestler, H. A., and Read, C. (2019). Detection of herpesvirus capsids in transmission electron microscopy images using transfer learning. Histochem. Cell Biol. 151, 101–114. doi: 10.1007/s00418-018-1759-5

Dzyubachyk, O., Van Cappellen, W. A., Essers, J., Niessen, W. J., and Meijering, E. (2010). Advanced level-set-based cell tracking in time-lapse fluorescence microscopy. IEEE Trans. Med. Imaging 29, 852–867. doi: 10.1109/TMI.2009.2038693

Evangelisti, E., Turner, C., McDowell, A., Shenhav, L., Yunusov, T., Gavrin, A., et al. (2021). Deep learning-based quantification of arbuscular mycorrhizal fungi in plant roots. New Phytol. 232, 2207–2219. doi: 10.1111/nph.17697

Gallardo-Caballero, R., García-Orellana, C. J., García-Manso, A., González-Velasco, H. M., Tormo-Molina, R., and Macías-Macías, M. (2019). Precise pollen grain detection in bright field microscopy using deep learning techniques. Sensors 19, 1–19. doi: 10.3390/s19163583

Giovannetti, M., and Mosse, B. (1980). An evaluation of techniques for measuring vesicular arbuscular mycorrhizal infection in roots. New Phytol. 84, 489–500. doi: 10.1111/j.1469-8137.1980.tb04556.x

He, K., Zhang, X., Ren, S., and Sun, J. (2016). "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Las Vegas, NV), 770–778. doi: 10.1109/CVPR.2016.90

Huttunen, M. J., Hassan, A., McCloskey, C. W., Fasih, S., Upham, J., Vanderhyden, B. C., et al. (2018). Automated classification of multiphoton microscopy images of ovarian tissue using deep learning. J. Biomed. Opt. 23:1. doi: 10.1117/1.JBO.23.6.066002

Kameoka, H., Tsutsui, I., Saito, K., Kikuchi, Y., Handa, Y., Ezawa, T., et al. (2019). Stimulation of asymbiotic sporulation in arbuscular mycorrhizal fungi by fatty acids. Nat. Microbiol. 4, 1654–1660. doi: 10.1038/s41564-019-0485-7

Korfhage, N., Mühling, M., Ringshandl, S., Becker, A., Schmeck, B., and Freisleben, B. (2020). Detection and segmentation of morphologically complex eukaryotic cells in fluorescence microscopy images via feature pyramid fusion. PLoS Comput. Biol. 16:e1008179. doi: 10.1371/journal.pcbi.1008179

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). "ImageNet classification with deep convolutional neural networks," in Proceedings of Advances in Neural Information Processing Systems, 1097–1105.

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Commun. ACM 60, 84–90. doi: 10.1145/3065386

Kurmi, Y., Chaurasia, V., Ganesh, N., and Kesharwani, A. (2020). Microscopic images classification for cancer diagnosis. Signal Image Video Process. 14, 665–673. doi: 10.1007/s11760-019-01584-4

Kushwaha, A., Gupta, S., Bhanushali, A., and Dastidar, T. R. (2020). "Rapid training data creation by synthesizing medical images for classification and localization," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (Virtual), 4272–4279. doi: 10.1109/CVPRW50498.2020.00504

Landsmeer, S. H., Hendriks, E. A., De Weger, L., Reiber, J. H., and Stoel, B. C. (2009). Detection of pollen grains in multifocal optical microscopy images of air samples. Microsc. Res. Techn. 72, 424–430. doi: 10.1002/jemt.20688

Lempitsky, V., and Zisserman, A. (2010). "Learning to count objects in images," in Proceedings of Advances in Neural Information Processing Systems (Vancouver, BC), 1324–1332.

Liu, D., Zhang, D., Song, Y., Zhang, F., O'Donnell, L., Huang, H., et al. (2021). PDAM: a panoptic-level feature alignment framework for unsupervised domain adaptive instance segmentation in microscopy images. IEEE Trans. Med. Imaging 40, 154–165. doi: 10.1109/TMI.2020.3023466

McGonigle, T., Miller, M., Evans, D., Fairchild, G., and Swan, J. (1990). A new method which gives an objective measure of colonization of roots by vesicular arbuscular mycorrhizal fungi. New Phytol. 115, 495–501. doi: 10.1111/j.1469-8137.1990.tb00476.x

Muta, K., Takata, S., Utsumi, Y., Matsumura, A., Iwamura, M., and Kise, K. (2020). "Automatic calculation of infection rate of arbuscular mycorrhizal fungi using deep CNN," in Proceedings of ECCV 2020 Workshop on Computer Vision Problems in Plant Phenotyping (CVPPP 2020) (Virtual), 1.

Muthu, P., and Angeline Kirubha, S. P. (2020). Computational intelligence on image classification methods for microscopic image data. J. Ambient Intell. Human. Comput. doi: 10.1007/s12652-020-02406-z

Nishimura, K., Hayashida, J., Wang, C., Ker, D. F. E., and Bise, R. (2020). "Weakly-supervised cell tracking via backward-and-forward propagation," in Proceedings of European Conference on Computer Vision (Virtual), 104–121. doi: 10.1007/978-3-030-58610-2_7

Noble, W. S. (2006). What is a support vector machine? Nat. Biotechnol. 24, 1565–1567. doi: 10.1038/nbt1206-1565

Northcutt, C. G., Athalye, A., and Mueller, J. (2021). "Pervasive label errors in test sets destabilize machine learning benchmarks," in Proceedings of the Neural Information Processing Systems (New Orleans; Virtual).

Razzak, M. I., and Naz, S. (2017). "Microscopic blood smear segmentation and classification using deep contour aware CNN and extreme machine learning," in Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (Honolulu, HI), 801–807. doi: 10.1109/CVPRW.2017.111

Richardson, A. E., Lynch, J. P., Ryan, P. R., Delhaize, E., Smith, F. A., Smith, S. E., et al. (2011). Plant and microbial strategies to improve the phosphorus efficiency of agriculture. Plant Soil 349, 121–156. doi: 10.1007/s11104-011-0950-4

Rodríguez-Damián, M., Cernadas, E., Formella, A., Fernández-Delgado, M., and De Sá-Otero, P. (2006). Automatic detection and classification of grains of pollen based on shape and texture. IEEE Trans. Syst. Man Cybern. Part C 36, 531–542. doi: 10.1109/TSMCC.2005.855426

Romero, I. C., Kong, S., Fowlkes, C. C., Jaramillo, C., Urban, M. A., Oboh-Ikuenobe, F., et al. (2020). Improving the taxonomy of fossil pollen using convolutional neural networks and superresolution microscopy. Proc. Natl. Acad. Sci. U.S.A. 117, 28496–28505. doi: 10.1073/pnas.2007324117

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2015). ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115, 211–252. doi: 10.1007/s11263-015-0816-y

Schmidt, U., Weigert, M., Broaddus, C., and Myers, G. (2018). "Cell detection with star-convex polygons," in Proceedings of 21st International Conference on Medical Image Computing and Computer Assisted Intervention 2018 (Granada), 265–273. doi: 10.1007/978-3-030-00934-2_30

Sharma, M., Saha, O., Sriraman, A., Hebbalaguppe, R., Vig, L., and Karande, S. (2017). "Crowdsourcing for chromosome segmentation and deep classification," in Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (Honolulu, HI), 786–793. doi: 10.1109/CVPRW.2017.109

Simonyan, K., and Zisserman, A. (2015). "Very deep convolutional networks for large-scale image recognition," in Proceedings of International Conference on Learning Representations (San Diego, CA), 1–14.

Smith, S. E., and Read, D. J. (2010). Mycorrhizal Symbiosis. Academic Press. Available online at: https://www.sciencedirect.com/book/9780123705266/mycorrhizal-symbiosis#book-info

Smith, S. E., Smith, F. A., and Jakobsen, I. (2004). Functional diversity in arbuscular mycorrhizal (AM) symbioses: the contribution of the mycorrhizal P uptake pathway is not correlated with mycorrhizal responses in growth or total P uptake. New Phytol. 162, 511–524. doi: 10.1111/j.1469-8137.2004.01039.x

Song, Y., Li, Q., Huang, H., Feng, D., Chen, M., and Cai, W. (2017). Low dimensional representation of Fisher vectors for microscopy image classification. IEEE Trans. Med. Imaging 36, 1636–1649. doi: 10.1109/TMI.2017.2687466

Su, Y. T., Lu, Y., Chen, M., and Liu, A. A. (2017). Spatiotemporal joint mitosis detection using CNN-LSTM network in time-lapse phase contrast microscopy images. IEEE Access 5, 18033–18041. doi: 10.1109/ACCESS.2017.2745544

Sun, X. G., and Tang, M. (2012). Comparison of four routinely used methods for assessing root colonization by arbuscular mycorrhizal fungi. Botany 90, 1073–1083. doi: 10.1139/b2012-084

Treseder, K. K. (2013). The extent of mycorrhizal colonization of roots and its influence on plant growth and phosphorus content. Plant Soil 371, 1–13. doi: 10.1007/s11104-013-1681-5

Van Der Heijden, M. G., Klironomos, J. N., Ursic, M., Moutoglis, P., Streitwolf-Engel, R., Boller, T., et al. (1998). Mycorrhizal fungal diversity determines plant biodiversity, ecosystem variability and productivity. Nature 396, 69–72. doi: 10.1038/23932

Vu, Q. D., Graham, S., Kurc, T., To, M. N. N., Shaban, M., Qaiser, T., et al. (2019). Methods for segmentation and classification of digital microscopy tissue images. Front. Bioeng. Biotechnol. 7:53. doi: 10.3389/fbioe.2019.00053

Weigert, M., Schmidt, U., Haase, R., Sugawara, K., and Myers, G. (2020). "Star-convex polyhedra for 3D object detection and segmentation in microscopy," in Proceedings of 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) (Snowmass, CO), 3655–3662. doi: 10.1109/WACV45572.2020.9093435

Xiao, C., Chen, X., Xie, Q., Li, G., Xiao, H., Song, J., et al. (2021). Virus identification in electron microscopy images by residual mixed attention network. Comput. Methods Prog. Biomed. 198:105766. doi: 10.1016/j.cmpb.2020.105766

Xie, W., Noble, J. A., and Zisserman, A. (2018). Microscopy cell counting and detection with fully convolutional regression networks. Comput. Methods Biomech. Biomed. Eng. 6, 283–292. doi: 10.1080/21681163.2016.1149104

Xing, F., Xie, Y., Shi, X., Chen, P., Zhang, Z., and Yang, L. (2019). Towards pixel-to-pixel deep nucleus detection in microscopy images. BMC Bioinformatics 20:472. doi: 10.1186/s12859-019-3037-5

Yang, S. Y., Grønlund, M., Jakobsen, I., Grotemeyer, M. S., Rentsch, D., Miyao, A., et al. (2012). Nonredundant regulation of rice arbuscular mycorrhizal symbiosis by two members of the PHOSPHATE TRANSPORTER1 gene family. Plant Cell 24, 4236–4251. doi: 10.1105/tpc.112.104901

Yu, K. H., Zhang, C., Berry, G. J., Altman, R. B., Ré, C., Rubin, D. L., et al. (2016). Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nat. Commun. 7, 1–10. doi: 10.1038/ncomms12474

Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. (2016). "Learning deep features for discriminative localization," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Las Vegas, NV), 2921–2929. doi: 10.1109/CVPR.2016.319

Keywords

arbuscular mycorrhizal fungi, magnified intersections method, computer vision, pattern recognition, deep convolutional neural networks, system development

Citation

Muta K, Takata S, Utsumi Y, Matsumura A, Iwamura M and Kise K (2022) TAIM: Tool for Analyzing Root Images to Calculate the Infection Rate of Arbuscular Mycorrhizal Fungi. Front. Plant Sci. 13:881382. doi: 10.3389/fpls.2022.881382

Received

22 February 2022

Accepted

31 March 2022

Published

03 May 2022

Volume

13 - 2022

Edited by

Sabine Dagmar Zimmermann, Délégation Languedoc Roussillon (CNRS), France

Reviewed by

Natacha Bodenhausen, Research Institute of Organic Agriculture (FiBL), Switzerland; Eva Nouri, Université de Fribourg, Switzerland; Andrea Genre, University of Turin, Italy; Daniel Leitner, University of Vienna, Austria

Copyright

*Correspondence: Yuzuko Utsumi

This article was submitted to Plant Symbiotic Interactions, a section of the journal Frontiers in Plant Science

