ORIGINAL RESEARCH article

Front. Physiol., 30 August 2023
Sec. Computational Physiology and Medicine
This article is part of the Research Topic Machine Learning-based Disease Diagnosis in Physiology and Pathophysiology.

PDC-Net: parallel dilated convolutional network with channel attention mechanism for pituitary adenoma segmentation

Qile Zhang1, Jianzhen Cheng2*, Chun Zhou1*, Xiaoliang Jiang3, Yuanxiang Zhang3, Jiantao Zeng3, Li Liu4
  • 1Department of Rehabilitation, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People’s Hospital, Quzhou, China
  • 2Department of Rehabilitation, Quzhou Third Hospital, Quzhou, China
  • 3College of Mechanical Engineering, Quzhou University, Quzhou, China
  • 4Department of Thyroid and Breast Surgery, Kecheng District People’s Hospital, Quzhou, China

Accurate segmentation of medical images is the basis and premise of intelligent diagnosis and treatment and has broad clinical application value. However, the robustness and effectiveness of medical image segmentation algorithms remain challenging due to unbalanced classes, blurred boundaries, highly variable anatomical structures and a lack of training samples. For this reason, we present a parallel dilated convolutional network (PDC-Net) for pituitary adenoma segmentation in magnetic resonance imaging (MRI) images. Firstly, the standard convolution block in U-Net is replaced by a basic convolution operation and a parallel dilated convolutional module (PDCM) to extract multi-level feature information at different dilation rates. Furthermore, a channel attention mechanism (CAM) is integrated to enhance the network's ability to distinguish lesions from non-lesions in pituitary adenoma images. Then, we introduce residual connections at each layer of the encoder-decoder, which alleviates the gradient vanishing and performance degradation caused by network deepening. Finally, we employ the Dice loss to deal with class imbalance in the samples. On a self-established patient dataset from Quzhou People’s Hospital, the network achieves 90.92% Sensitivity, 99.68% Specificity, 88.45% Dice value and 79.43% Intersection over Union (IoU).

1 Introduction

As an important medical imaging technology, magnetic resonance imaging (MRI) has been widely used in the examination of pituitary adenoma because it can display the anatomical information of soft tissues (Gavirni et al., 2023). At present, the analysis and processing of medical images mainly rely on clinicians' visual inspection of brain MR images. This process is not only inefficient and time-consuming but also subject to significant subjective variability, and its results may not be reproducible. With the development of computer science and artificial intelligence, multidisciplinary medical imaging technology can support the design of accurate treatment plans and provide more reliable evidence for diagnosis. Therefore, research on segmentation methods for pituitary adenoma MR images has important theoretical significance and practical value for modern medical informatization and intelligent medical image computing.

Over the past few decades, various medical image segmentation algorithms have been presented, which can be broadly grouped into thresholding (Jain and Singh, 2022; Rawas and El-Zaart, 2022), watershed (Mohanapriya and Kalaavathi, 2019; Sadegh et al., 2022), clustering (Xu et al., 2022; Zhou et al., 2022), conditional random field (Sun et al., 2020; Li et al., 2022), dictionary learning (Yang Y Y et al., 2020; Tang et al., 2021), graph cut (Gamechi et al., 2021; Zhu et al., 2021), region growing (Rundo et al., 2016; Biratu et al., 2021), active contour (Dake et al., 2019; Shahvaran et al., 2021), quantum-inspired computing (Sergioli et al., 2021; Amin et al., 2022), and computational intelligence (Vijay et al., 2016; Zhang et al., 2022) methods. These traditional methods rely on developers to design algorithms for specific applications. They are highly interpretable, require little hardware, and do not need extensive data annotation. However, some of these algorithms require complex parameter tuning during implementation, so their generalization ability and robustness suffer on specific segmentation problems.

Currently, with the development of computer hardware, deep learning methods have brought tremendous changes to medical image segmentation, especially convolutional neural network (CNN) frameworks. Among these CNN architectures, the most famous is U-Net, proposed by Ronneberger et al. (2015). Its main innovation lies in the design of the downsampling and upsampling layers and the skip connections, so that spatial information lost in the contracting path can be fused with features in the expanding path to produce a higher-resolution output. Since then, many scholars have extended the U-Net framework according to their application requirements. For example, Lu et al. (2021) proposed WBC-Net to automatically segment white blood cells in smear images: a residual network deepens the structure and enhances feature extraction, and a mixed skip path better fuses features of different levels between the encoding and decoding structures, improving segmentation accuracy. According to the structural characteristics of the vestibule, Zhang et al. (2021) constructed a supervised deep learning architecture; based on an encoder-decoder network, the model fuses feature information from different receptive fields with attention mechanisms, greatly improving segmentation accuracy. AboElenein et al. (2022) proposed IRDNU-Net for the automatic segmentation of brain tumours in MRI images. The network uses convolution kernels of different sizes in the encoding and decoding paths, reduces the channel dimension through 1 × 1 convolutions to lower the computational complexity, and then stacks the results and passes them to the next layer. Because of the different kernel sizes, the network can effectively extract image features from regions of different scales. Zhang et al. (2022) used DenseNet blocks to replace the convolution blocks of the original U-Net for the segmentation of skin lesions. This makes the current layer depend not only on the output of the previous layer but on all previous layers, significantly promoting gradient propagation.

Although the above deep learning algorithms have achieved good results on medical data, a large gap remains between them and clinical application, mainly for two reasons. First, the generalization ability of most deep neural network models is limited, and their performance on real hospital data degrades considerably because of individual patient differences, the diversity of data types and the difficulty of characterizing diseases. Second, current network architectures often fail to capture multi-scale context information, which limits their feature learning ability.

Inspired by the above thoughts, we propose a new U-Net architecture for pituitary adenoma segmentation. Based on the classical encoder-decoder framework, our network contains three core strategies: parallel dilated convolutional module, channel attention mechanism and residual connections. The combination of these strategies provides good segmentation ability for pituitary adenoma, and the Dice value, Intersection over Union and F1-score reach 88.34%, 79.25%, and 91.52%, respectively. The contributions of this study can be summarized as follows:

(1) A parallel dilated convolutional neural network based on U-Net architecture is built for pituitary adenoma segmentation.

(2) The traditional standard convolution block is replaced with the PDCM to extract richer multi-level feature information. Furthermore, we integrate the channel attention mechanism into the PDCM to further strengthen the network's capability.

(3) The introduction of residual connections at each layer alleviates gradient vanishing and improves segmentation precision by enabling deeper networks.

2 Materials and methods

2.1 Overview of the network

As shown in Figure 1, the pituitary adenoma lesions to be segmented occupy only a small part of the entire image, far less than the background area. The dataset consists of 38 brain MRI scans of patients, split into 1,200 training images, 400 validation images, and 400 test images. The annotations were first outlined manually by three medical experts, then segmented and annotated by five computer annotators using Labelme, and finally reviewed by three medical experts to obtain the final dataset (Pusparani et al., 2023). A further difficulty is that the lesion shape is irregular and its contrast with the surrounding tissue is low. To address these difficulties, the architecture of PDC-Net is derived from the U-Net network, as shown in Figure 2. PDC-Net consists of two parts: an encoder and a decoder. The encoder branch consists of five layers, each of which contains a basic convolution operation and a PDCM, connected by residuals; a 2 × 2 max-pooling operation then performs downsampling. In each downsampling step, the image size is halved and the number of feature channels is doubled. Accordingly, the decoder branch performs an equal number of upsampling steps to obtain an output image with the same size as the input. During upsampling, the image size is doubled using deconvolution operations and concatenated with the features of the same resolution from the encoder path, followed by a convolution layer and a PDCM layer. Throughout the network, batch normalization layers (Jha et al., 2021) and the ReLU activation function (Bai et al., 2021) not only accelerate training but also improve generalization and stability. In addition, the DropBlock strategy is applied in the convolutional layers to prevent over-fitting. To generate the final segmentation result, a 1 × 1 convolution and a sigmoid function are used at the last layer to produce the desired categories. Detailed descriptions follow in the subsections below.
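The halving/doubling schedule of the encoder described above can be sketched in a few lines. The input size of 256 and base width of 64 channels are illustrative assumptions (the paper does not state them), not values from the source.

```python
# Schematic of the PDC-Net encoder schedule described above: each 2x2
# max-pooling halves the spatial size while the channel count doubles.
# input_size=256 and base_channels=64 are assumed example values.

def encoder_schedule(input_size=256, base_channels=64, levels=5):
    """Return (spatial_size, channels) for each of the encoder levels."""
    shapes = []
    size, ch = input_size, base_channels
    for _ in range(levels):
        shapes.append((size, ch))
        size //= 2   # 2x2 max-pooling halves H and W
        ch *= 2      # feature channels double after each downsampling
    return shapes

print(encoder_schedule())
# [(256, 64), (128, 128), (64, 256), (32, 512), (16, 1024)]
```

The decoder mirrors this schedule in reverse, doubling the spatial size and halving the channels at each deconvolution step.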

FIGURE 1. Examples of problems in segmenting pituitary adenoma from MR images. The first row: original images. The second row: the gold standard sketched by experts.

FIGURE 2. Network architecture of our PDC-Net.

2.2 Parallel dilated convolutional module

In traditional convolutional neural networks, repeated convolution and pooling operations reduce image resolution and lose spatial structure information, which strongly affects medical image segmentation and directly leads to blurred boundaries and pixel classification errors. With the appearance of the atrous spatial pyramid pooling (ASPP) structure (Chen et al., 2018b), these problems can be alleviated, and many optimized models have since been proposed (Ni et al., 2021; Xie et al., 2021; Lan et al., 2022; Liang et al., 2022). By connecting convolution layers with different dilation rates in parallel, ASPP alleviates the spatial information loss caused by downsampling and enlarges the receptive field without increasing the amount of computation, thereby capturing more regional information. However, these methods ignore boundary information when fusing multi-scale features, which is ineffective for small objects. Therefore, we propose a parallel dilated convolutional module integrated with a channel attention mechanism.

As shown in Figure 3, the PDCM consists of five parallel branches: four convolution layers (with different dilation rates) and a channel attention mechanism. The first branch is a 1 × 1 convolution, which maintains the original receptive field while reducing the number of channels in the feature map. The second to fourth branches use convolutions with different dilation rates to obtain different receptive fields and fuse feature information at different scales. However, because the pituitary tumour region occupies a very small proportion of the whole MR image, an overly large dilation rate would make the receptive field discontinuous, so we reduce the dilation rates from {6, 12, 18} to {2, 4, 6}. The fifth branch is the channel attention mechanism, shown in Figure 4. The CAM module obtains channel-wise weights by applying pooling, channel convolution, and a sigmoid to the input feature, so it not only retains the low-level features carrying target boundary information but also captures more context. In addition, we do not simply fuse the five scales of information: the output of each branch is added to the input feature before being fed into the next branch, which then performs its scale transformation. For example, let Fin be the input feature and F1 the output of the first branch. We take (F1 + Fin) as the input of the second branch, followed by a convolution with a dilation rate of 2. Similarly, the inputs of the third to fifth branches are (F2 + Fin), (F3 + Fin) and (F4 + Fin), respectively. Finally, the output feature Fout of the PDCM is (F1 + F2 + F3 + F4 + F5). With this connection scheme, the network obtains a feature map with additional multi-scale context information, which helps it classify boundary pixels more accurately.
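The branch cascade described above can be traced with a small schematic. The branch functions here are placeholders standing in for the real 1 × 1 convolution, the dilated convolutions (rates 2, 4, 6) and the CAM, and their toy weights are assumptions for illustration only; what the sketch shows is the wiring: branch k+1 receives (F_k + F_in), and all branch outputs are summed.

```python
# Schematic of the PDCM data flow: feed (F_k + F_in) into branch k+1,
# then sum all five branch outputs into F_out. Branches are placeholder
# callables on a scalar "feature"; real branches would be convolutions.

def pdcm(f_in, branches):
    """branches: five callables standing in for conv 1x1, the dilated
    convs (rates 2, 4, 6) and the channel attention branch."""
    outputs = []
    prev = None
    for branch in branches:
        x = f_in if prev is None else prev + f_in  # fuse previous output
        prev = branch(x)
        outputs.append(prev)
    return sum(outputs)  # F_out = F1 + F2 + F3 + F4 + F5

# Toy branches (assumed scalings, purely for tracing the wiring):
toy = [lambda x, w=w: x * w for w in (1.0, 0.5, 0.5, 0.5, 0.5)]
print(pdcm(2.0, toy))  # 10.0
```

Tracing by hand: F1 = 2.0, then each later branch sees (2.0 + 2.0) and outputs 2.0, so the sum is 10.0, matching the printed result.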

FIGURE 3. Structure of PDCM.

FIGURE 4. Structure of CAM.

2.3 Residual connections

Experimental results show that the performance of a deep network does not increase consistently with the number of layers: when a network has too many layers, gradients may vanish or disperse. Therefore, to improve training stability and alleviate gradient vanishing, residual connections are introduced into our model. As shown in Figure 5, we add a shortcut branch alongside the main network and add the main-branch input x to the main-branch output F(x). With residual connections, every feature map before and after convolution is fully used, which resolves the degradation problem and greatly improves the learning ability of the network.
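The shortcut in Figure 5 is simply output = F(x) + x, which gives gradients an identity path back to the input. The sketch below uses a placeholder main branch (an assumption for illustration; in PDC-Net the branch is the convolution + PDCM stack).

```python
# Minimal sketch of the residual connection in Figure 5: the block output
# is F(x) + x. main_branch is a stand-in for the conv + PDCM layers.

def residual_block(x, main_branch):
    return main_branch(x) + x  # shortcut adds the input to the branch output

# If the main branch contributes nothing (outputs 0), the block reduces
# to the identity, which is why very deep stacks remain trainable:
print(residual_block(3.0, lambda v: 0.0))  # 3.0
```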

FIGURE 5. Residual connections.

2.4 Loss function

Because the dataset is small, lesion sizes vary, and foreground and background are unevenly distributed, class imbalance easily arises. The Dice loss (Arora et al., 2021; Liu et al., 2021; Phan et al., 2021) evaluates the prediction from a global perspective and mitigates the dominance of the background, so it is adopted to train the network in this paper; its calculation formula is defined as follows:

$$L_{Dice} = 1 - \frac{2\sum_{i=1}^{N} y_i \hat{y}_i}{\sum_{i=1}^{N} y_i^2 + \sum_{i=1}^{N} \hat{y}_i^2} \tag{1}$$

where $L_{Dice}$ is the Dice loss, $N$ denotes the number of pixels, $\hat{y}_i$ is the predicted probability that the $i$-th pixel belongs to the lesion, and $y_i$ is the ground-truth label of the $i$-th pixel.
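Equation (1) translates directly into code. The plain-Python version below is a sketch for clarity (a training implementation would operate on framework tensors); the small smoothing term `eps` is an added assumption to avoid division by zero on empty masks.

```python
# Dice loss of Eq. (1) over per-pixel labels y_i and predicted
# probabilities yhat_i. eps is an assumed smoothing constant.

def dice_loss(y, y_hat, eps=1e-7):
    inter = sum(yi * pi for yi, pi in zip(y, y_hat))
    denom = sum(yi * yi for yi in y) + sum(pi * pi for pi in y_hat)
    return 1.0 - 2.0 * inter / (denom + eps)

# A perfect prediction gives a loss of (essentially) 0; a completely
# wrong prediction gives a loss of 1:
print(round(dice_loss([1, 0, 1, 1], [1.0, 0.0, 1.0, 1.0]), 6))  # 0.0
```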

3 Results and discussion

All models were trained on an NVIDIA Quadro RTX 6000 GPU with 24 GB of memory, using CUDA 11.0.2, cuDNN v8.0.4, and TensorFlow 2.4.0, with the batch size set to 16 and the number of epochs set to 300. The Adam optimizer (Ji and Wu, 2022) was adopted with momentum 0.999 and a learning rate of 1e-5. The kernel size was set to (3, 3) and the dropout rate to 0.1. The validation loss was monitored at every epoch, and the model weights with the smallest validation loss in the iterative process were saved.

3.1 Evaluation metrics

To evaluate the performance of the PDC-Net on the dataset, Sensitivity (Shen et al., 2022; Siar and Teshnehlab, 2022), Specificity (Rai and Chatterjee, 2021; Ramachandran et al., 2021), Dice value (Yang Y et al., 2020; You et al., 2021), and Intersection over Union (Ahmed et al., 2021; Zou et al., 2021) are utilized as evaluation metrics, which can be defined as:

$$\mathrm{Sensitivity} = \frac{TP}{TP + FN} \tag{2}$$
$$\mathrm{Specificity} = \frac{TN}{TN + FP} \tag{3}$$
$$\mathrm{Dice} = \frac{2TP}{2TP + FN + FP} \tag{4}$$
$$\mathrm{IoU} = \frac{TP}{TP + FN + FP} \tag{5}$$

where TP represents the number of adenoma pixels judged to be adenoma, TN represents the number of non-adenoma pixels judged to be non-adenoma, FP represents the number of non-adenoma pixels judged to be adenoma, and FN represents the number of adenoma pixels judged to be non-adenoma.
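Given the pixel-level confusion counts defined above, Eqs. (2)–(5) are straightforward ratios. The counts in the example below are made-up numbers used purely to exercise the formulas.

```python
# Eqs. (2)-(5) from pixel-level confusion counts: tp, tn, fp, fn.

def metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)           # Eq. (2)
    specificity = tn / (tn + fp)           # Eq. (3)
    dice = 2 * tp / (2 * tp + fn + fp)     # Eq. (4)
    iou = tp / (tp + fn + fp)              # Eq. (5)
    return sensitivity, specificity, dice, iou

# Assumed example counts, not results from the paper:
sens, spec, dice, iou = metrics(tp=80, tn=900, fp=10, fn=10)
print(round(dice, 4), round(iou, 4))  # 0.8889 0.8
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why both rise together in Figure 6B.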

3.2 Results of PDC-Net

Figure 6A shows the change in loss during training. After about 220 epochs, the training loss continues to decline slowly while the validation loss flattens and even rises slightly, which indicates that setting the number of epochs to 300 is sufficient to obtain the optimal weights. Finally, the model obtained 0.9092, 0.9968, 0.8845 and 0.7943 on Sensitivity, Specificity, Dice and IoU, respectively. The evolution of the evaluation metrics is shown in Figure 6B.

FIGURE 6. Iterative process parameters change. (A) Loss. (B) Evaluate metrics.

Example results of the PDC-Net model on the pituitary adenoma segmentation dataset are illustrated in Figure 7. The results show that the proposed model can segment the boundaries of pituitary adenomas from brain MR images taken from different views. Although the labelling process was carried out by several experts and reviewed repeatedly, precise manual annotation by eye is difficult; notably, for some cases the PDC-Net segmentation is more accurate than the manual labels, as shown in the third row of Figure 7.

FIGURE 7. Pituitary adenoma image segmentation results of PDC-Net model. The first row: original images. The second row: the gold standard sketched by experts. The third row: results of PDC-Net.

3.3 Ablation experiment

To verify the effectiveness of the proposed module and the associated structures, ablation experiments were conducted; the conclusions are discussed below.

3.3.1 PDCM module

The PDCM was proposed to expand the receptive field, increase the model's capacity to extract multi-scale information, and improve segmentation accuracy. Two variants were evaluated: one without the previous-branch fusion and one with it (the full PDCM). Table 1 presents the segmentation results of these structures. The model with previous-branch fusion shows a slight improvement on every metric over the model without it; notably, incorporating small receptive-field features yields relatively large improvements in each evaluation metric.

TABLE 1. Ablation experiment of the PDCM module.

Figure 8 displays the segmentation results of the three structures in Table 1. Comparing the results in the second row of Figure 8, the model without the PDCM fails to capture sufficient global information, leading to false detections in non-adenoma regions. Although the simple PDCM concatenates the convolution results of different receptive fields, it lacks the connection between small and large receptive fields; the PDCM with the previous branch solves this problem well, as shown in Figures 8D, E.

FIGURE 8. Comparison of segmentation results of different PDCM modules. (A) Original images. (B) Label images. (C) No PDCM. (D) PDCM without previous branch. (E) PDCM with previous branch.

3.3.2 Residual connection

For deep learning, simply increasing network depth to improve learning ability often leads to gradient vanishing. Residual networks counter this by adding skip connections that improve the learning ability and accuracy of deep networks. Table 2 presents the ablation experiments on the residual connection. When the PDCM was not used, adding the residual gave the model only a small boost because the network was relatively shallow. When the PDCM with the previous branch was added, the residual improved Dice and IoU by more than one percentage point because the network was relatively deep.

TABLE 2. Ablation experiment of residual connection.

The segmentation results for the residual structure are shown in Figure 9. As shown in Figures 9C, D, for the network without the PDCM structure, adding the residual barely changes the segmentation of the adenoma boundary, even though the evaluation metrics improve slightly. In contrast, when the PDCM with the previous branch is added, the learning ability of the model suffers from the overly deep structure, as shown in Figure 9E; here the residual effectively improves segmentation accuracy and yields a more accurate description of the adenoma boundary.

FIGURE 9. Comparison of segmentation results of residual structures. (A) Original images. (B) Label images. (C) No Residual + No PDCM. (D) Residual + No PDCM. (E) No Residual + PDCM with previous branch. (F) Residual + PDCM with previous branch.

3.4 Comparison with different models

To evaluate the performance of the model more comprehensively, the proposed model is compared with several classical and recently proposed models; the evaluation metrics are shown in Table 3. The results illustrate that the proposed PDC-Net performs better on the pituitary adenoma dataset than the other models in the comparison.

TABLE 3. The results of comparison with other models.

The segmentation results of different models are illustrated in Figure 10. Although the site of the pituitary adenoma varies across brain MR images, its location is nearly fixed for images taken in the same direction. Due to the lack of correlation between high- and low-level semantic features, U-Net produces missing pixels on images with large background interference and complex intensity changes, as shown in Figure 10C. Since the attention mechanism mainly helps localize the adenoma and lacks connections between high- and low-scale features, networks with attention mechanisms cannot delineate the boundary contour as well, as shown in Figures 10D, J, K. Some network architectures fail to produce better results on the pituitary adenoma dataset because of complicated network connections and inappropriate network depth, as illustrated in Figures 10F–I. The experimental results show that the proposed network can accurately segment pituitary adenomas from brain MR images in all directions, which is of practical significance for the intelligent and automatic analysis of medical images.

FIGURE 10. Comparison of segmentation results of different models. (A) Original images. (B) Label images. (C) U-Net. (D) AttUNet. (E) SegNet. (F) ODSegmentation. (G) CLNet. (H) MMDC-Net. (I) DeepLabV3+. (J) SK-UNet. (K) SAR-UNet. (L) PDC-Net.

4 Conclusion

Taking pituitary adenomas as the research object, this study constructed a deep learning model based on the U-Net architecture to segment adenoma boundaries. The conclusions are as follows. First, the proposed PDCM with the previous branch effectively combines feature information of different scales and describes the adenoma boundary more accurately. Second, the residual connections give the deep network structure combined with the PDCM stronger learning ability and improve model performance on the dataset. Finally, the proposed PDC-Net accurately segments pituitary adenomas from brain MR images, achieving 90.92% Sensitivity, 99.68% Specificity, 88.45% Dice value and 79.43% Intersection over Union. These results show that the proposed method is meaningful for the automatic and intelligent detection of pituitary adenoma.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding authors.

Author contributions

QZ: Writing–original draft, Methodology. JC: Methodology, Writing–original draft. CZ: Formal analysis, Writing–original draft. XJ: Writing–review and editing. YZ: Supervision, Writing–review and editing. JZ and LL: Writing–review and editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work was supported by the National Natural Science Foundation of China (No. 62102227), Zhejiang Basic Public Welfare Research Project (No. LZY22E050001, LZY22D010001, LGG19E050013, LZY21E060001, TGS23E030001, and LTGC23E050001), Science and Technology Major Projects of Quzhou (2021K29, 2022K56, 2022K92).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

AboElenein, N. M., Songhao, P., and Afifi, A. (2022). IRDNU-net: inception residual dense nested u-net for brain tumor segmentation. Multimed. Tools Appl. 81, 24041–24057. doi:10.1007/s11042-022-12586-9

Ahmed, I., Ahmad, M., and Jeon, G. (2021). A real-time efficient object segmentation system based on U-Net using aerial drone images. J. Real-Time Image Process 18, 1745–1758. doi:10.1007/s11554-021-01166-z

Amin, J., Anjum, M. A., Gul, N., and Sharif, M. (2022). A secure two-qubit quantum model for segmentation and classification of brain tumor using MRI images based on blockchain. Neural comput. Appl. 34, 17315–17328. doi:10.1007/s00521-022-07388-x

Arora, R., Saini, I., and Sood, N. (2021). Multi-label segmentation and detection of COVID-19 abnormalities from chest radiographs using deep learning. Optik 246, 167780. doi:10.1016/j.ijleo.2021.167780

Badrinarayanan, V., Kendall, A., and Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Patt. Anal. Mach. Intell. 39, 2481–2495. doi:10.1109/TPAMI.2016.2644615

Bai, X. Y., Hu, Y., Gong, G. Z., Yin, Y., and Xia, Y. (2021). A deep learning approach to segmentation of nasopharyngeal carcinoma using computed tomography. Biomed. Signal Proces. 64, 102246. doi:10.1016/j.bspc.2020.102246

Biratu, E. S., Schwenker, F., Debelee, T. G., Kebede, S. R., Negera, W. G., and Molla, H. T. (2021). Enhanced region growing for brain tumor MR image segmentation. J. Imaging 7, 22. doi:10.3390/jimaging7020022

Byra, M., Jarosik, P., Szubert, A., Galperin, M., Ojeda-Fournier, H., Olson, L., et al. (2020). Breast mass segmentation in ultrasound with selective kernel U-Net convolutional neural network. Biomed. Signal Proces. 61, 102027. doi:10.1016/j.bspc.2020.102027

Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A. L. (2018a). DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40, 834–848. doi:10.1109/TPAMI.2017.2699184

Chen, L. C., Zhu, Y. K., Papandreou, G., Schroff, F., Adam, H., et al. (2018b). Encoder-decoder with atrous separable convolution for semantic image segmentation. https://arxiv.org/abs/1802.02611.

Dake, S. S., Nguyen, M., Yan, W. Q., and Kazi, S. (2019). “Human tumor detection using active contour and region growing segmentation,” in International conference and workshops on recent advances and innovations in engineering (IEEE), 1–5. doi:10.1109/ICRAIE47735.2019.9037642

Gamechi, Z. S., Arias-Lorza, A. M., Saghir, Z., Bos, D., and de Bruijne, M. (2021). Assessment of fully automatic segmentation of pulmonary artery and aorta on noncontrast CT with optimal surface graph cuts. Med. Phys. 48, 7837–7849. doi:10.1002/mp.15289

Gavirni, R., Gupta, D., Mishra, D., Gupta, A., and Viswamitra, S. (2023). Clinically relevant myocardium segmentation in cardiac magnetic resonance images. IEEE J. Biomed. Health. Inf. 27, 2423–2431. doi:10.1109/JBHI.2023.3250429

Jain, L., and Singh, P. (2022). A novel wavelet thresholding rule for speckle reduction from ultrasound images. J. King Saud. Univ. Com. Inf. Sci. 34, 4461–4471. doi:10.1016/j.jksuci.2020.10.009

Jha, D., Smedsrud, P. H., Johansen, D., de Lange, T., Johansen, H. D., Halvorsen, P., et al. (2021). A comprehensive study on colorectal polyp segmentation with ResUNet++, conditional random field and test-time augmentation. IEEE J. Biomed. Health Inf. 25, 2029–2040. doi:10.1109/JBHI.2021.3049304

Ji, M. M., and Wu, Z. B. (2022). Automatic detection and severity analysis of grape black measles disease based on deep learning and fuzzy logic. Comput. Electron.Agr. 193, 106718. doi:10.1016/j.compag.2022.106718

Jin, C. Y., Wang, K., Han, T., Lu, Y., Liu, A., and Liu, D. (2022). Segmentation of ore and waste rocks in borehole images using the multi-module densely connected U-Net. Comput. Geosci. 159, 105018. doi:10.1016/j.cageo.2021.105018

Lan, K., Cheng, J. Z., Jiang, J. Y., Jiang, X., and Zhang, Q. (2022). Modified UNet++ with atrous spatial pyramid pooling for blood cell image segmentation. Math. Biosci. Eng. 20, 1420–1433. doi:10.3934/mbe.2023064

Li, Y. X., Wu, X. R., Li, C., Chen, H., Sun, C., Yao, Y., et al. (2022). A hierarchical conditional random field-based attention mechanism approach for gastric histopathology image classification. Appl. Intell. 52, 9717–9738. doi:10.1007/s10489-021-02886-2

Liang, B. T., Tang, C., Xu, M., Wu, T., and Lei, Z. (2022). Fusion network based on the dual attention mechanism and atrous spatial pyramid pooling for automatic segmentation in retinal vessel images. J. Opt. Soc. Am. A 39, 1393–1402. doi:10.1364/JOSAA.459912

Liu, X. M., Wang, S. C., Zhang, Y., Liu, D., and Hu, W. (2021). Automatic fluid segmentation in retinal optical coherence tomography images using attention based deep learning. Neurocomputing 452, 576–591. doi:10.1016/j.neucom.2020.07.143

Lu, Y., Qin, X. J., Fan, H. Y., Lai, T., and Li, Z. (2021). WBC-net: A white blood cell segmentation network based on UNet++ and ResNet. Appl. Soft Comput. 101, 107006. doi:10.1016/j.asoc.2020.107006

Mohanapriya, N., and Kalaavathi, B. (2019). Adaptive image enhancement using hybrid particle swarm optimization and watershed segmentation. Intell. Autom. Soft Comput. 25, 1–11. doi:10.31209/2018.100000041

Ni, J. J., Wu, J. H., Tong, J., Wei, M., and Chen, Z. (2021). SSCA-net: simultaneous self- and channel-attention neural network for multiscale structure-preserving vessel segmentation. Biomed. Res. Int. 2021, 6622253. doi:10.1155/2021/6622253

Oktay, O., Schlemper, J., Folgoc, L. L., Lee, M., Heinrich, M., Misawa, K., et al. (2018). Attention U-Net: learning where to look for the pancreas. https://arxiv.org/abs/1804.03999v2.

Phan, T. D. T., Kim, S. H., Yang, H. J., and Lee, G. S. (2021). Skin lesion segmentation by U-Net with adaptive skip connection and structural awareness. Appl. Sci. 11, 4528. doi:10.3390/app11104528

Pusparani, Y., Lin, C. Y., Jan, Y. K., Lin, F. Y., Liau, B. Y., Ardhianto, P., et al. (2023). Diagnosis of Alzheimer’s disease using convolutional neural network with select slices by landmark on Hippocampus in MRI images. IEEE Access 11, 61688–61697. doi:10.1109/ACCESS.2023.3285115

Rai, H. M., and Chatterjee, K. (2021). 2D MRI image analysis and brain tumor detection using deep learning CNN model LeU-Net. Multimed. Tools Appl. 80, 36111–36141. doi:10.1007/s11042-021-11504-9

Ramachandran, S., Niyas, P., Vinekar, A., and John, R. (2021). A deep learning framework for the detection of Plus disease in retinal fundus images of preterm infants. Biocybern. Biomed. Eng. 41, 362–375. doi:10.1016/j.bbe.2021.02.005

Rawas, S., and El-Zaart, A. (2022). Towards an early diagnosis of alzheimer disease: A precise and parallel image segmentation approach via derived hybrid cross entropy thresholding method. Multimed. Tools Appl. 81, 12619–12642. doi:10.1007/s11042-022-12575-y

Ronneberger, O., Fischer, P., and Brox, T. (2015). “U-net: convolutional networks for biomedical image segmentation,” in International conference on medical image computing and computer-assisted intervention (Springer), 234–241. arXiv:1505.04597.

Rundo, L., Militello, C., Vitabile, S., Casarino, C., Russo, G., Midiri, M., et al. (2016). Combining split-and-merge and multi-seed region growing algorithms for uterine fibroid segmentation in MRgFUS treatments. Med. Biol. Eng. Comput. 54, 1071–1084. doi:10.1007/s11517-015-1404-6

Sadegh, G., Kayvan, G., and Hamid, G. (2022). Using marker-controlled watershed transform to detect Baker's cyst in magnetic resonance imaging images: A pilot study. J. Med. Signals Sens. 12, 84–89. doi:10.4103/jmss.JMSS_49_20

Sergioli, G., Militello, C., Rundo, L., Minafra, L., Torrisi, F., Russo, G., et al. (2021). A quantum-inspired classifier for clonogenic assay evaluations. Sci. Rep. 11, 2830. doi:10.1038/s41598-021-82085-8

Shahvaran, Z., Kazemi, K., Fouladivanda, M., Helfroush, M. S., Godefroy, O., and Aarabi, A. (2021). Morphological active contour model for automatic brain tumor extraction from multimodal magnetic resonance images. J. Neurosci. Meth. 362, 109296. doi:10.1016/j.jneumeth.2021.109296

Shen, C., Roth, H. R., Hayashi, Y., Oda, M., Miyamoto, T., Sato, G., et al. (2022). A cascaded fully convolutional network framework for dilated pancreatic duct segmentation. Int. J. Comput. Assist. Radiol. Surg. 17, 343–354. doi:10.1007/s11548-021-02530-x

Siar, M., and Teshnehlab, M. (2022). A combination of feature extraction methods and deep learning for brain tumour classification. IET Image Process 16, 416–441. doi:10.1049/ipr2.12358

Sun, C. H., Li, C., Zhang, J. H., Rahaman, M. M., Ai, S., Chen, H., et al. (2020). Gastric histopathology image segmentation using a hierarchical conditional random field. Biocybern. Biomed. Eng. 44, 1535–1555. doi:10.1016/j.bbe.2020.09.008

Tang, H. Z., Mao, L. Z., Zeng, S. Y., Deng, S., and Ai, Z. (2021). Discriminative dictionary learning algorithm with pairwise local constraints for histopathological image classification. Med. Biol. Eng. Comput. 59, 153–164. doi:10.1007/s11517-020-02281-y

Vijay, V., Kavitha, A. R., and Rebecca, S. R. (2016). Automated brain tumor segmentation and detection in MRI using enhanced Darwinian particle swarm optimization (EDPSO). Procedia Comput. Sci. 92, 475–480. doi:10.1016/j.procs.2016.07.370

Wang, J. K., Lv, P. Q., Wang, H. Y., and Shi, C. (2021). SAR-U-Net: squeeze-and-excitation block and atrous spatial pyramid pooling based residual U-net for automatic liver segmentation in computed tomography. Comput. Meth. Prog. Bio. 208, 106268. doi:10.1016/j.cmpb.2021.106268

Wang, L., Gu, J., Chen, Y. Z., Liang, Y., Zhang, W., Pu, J., et al. (2021). Automated segmentation of the optic disc from fundus images using an asymmetric deep learning network. Pattern Recogn. 112, 107810. doi:10.1016/j.patcog.2020.107810

Xie, H. Y., Tang, C., Zhang, W., Shen, Y., and Lei, Z. (2021). Multi-scale retinal vessel segmentation using encoder-decoder network with squeeze-and-excitation connection and atrous spatial pyramid pooling. Appl. Opt. 60, 239–249. doi:10.1364/AO.409512

Xu, C. Q., Sun, G. D., Liang, R. H., and Xu, X. F. (2022). Vector field streamline clustering framework for brain fiber tract segmentation. IEEE Trans. Cogn. Dev. Syst. 14, 1066–1081. doi:10.1109/TCDS.2021.3094555

Yang, Y., Shao, F., Fu, Z. Q., and Fu, R. D. (2020). Discriminative dictionary learning for retinal vessel segmentation using fusion of multiple features. Signal Image Video Process 13, 1529–1537. doi:10.1007/s11760-019-01501-9

Yang, Y. Y., Feng, C., and Wang, R. F. (2020). Automatic segmentation model combining U-Net and level set method for medical images. Expert Syst. Appl. 153, 113419. doi:10.1016/j.eswa.2020.113419

You, J., Yu, P. L., Tsang, A. C., Tsui, E. L. H., Woo, P. P. S., Lui, C. S. M., et al. (2021). 3D dissimilar-siamese-U-Net for hyperdense middle cerebral artery sign segmentation. Comput. Med. Imag. Grap. 90, 101898. doi:10.1016/j.compmedimag.2021.101898

Zhang, G. Z., and Wang, S. S. (2022). Dense and shuffle attention U-Net for automatic skin lesion segmentation. Int. J. Imag. Syst. Tech. 32, 2066–2079. doi:10.1002/ima.22774

Zhang, R. C., Zhuo, L., Zhang, H., Chen, B., Yin, Y., and Li, S. (2021). Unifying neural learning and symbolic reasoning for spinal medical report generation. Comput. Med. Imag. Grap. 89, 101872. doi:10.1016/j.media.2020.101872

Zhang, T. C., Zhang, J., Xue, T., and Rashid, M. H. (2022). A brain tumor image segmentation method based on quantum entanglement and wormhole behaved particle swarm optimization. Front. Med. 9, 794126. doi:10.3389/fmed.2022.794126

Zheng, Z., Wan, Y., Zhang, Y. J., Xiang, S., Peng, D., and Zhang, B. (2021). CLNet: cross-layer convolutional neural network for change detection in optical remote sensing imagery. ISPRS J. Photogramm. 175, 247–267. doi:10.1016/j.isprsjprs.2021.03.005

Zhou, W., Wang, L. M., Han, X. M., and Li, M. Y. (2022). A novel deviation density peaks clustering algorithm and its applications of medical image segmentation. IET Image Process 16, 3790–3804. doi:10.1049/ipr2.12594

Zhu, C. L., Wang, X. Y., Chen, S. Y., Teng, Z., Bai, C., Huang, X., et al. (2021). Complex carotid artery segmentation in multi-contrast MR sequences by improved optimal surface graph cuts based on flow line learning. Med. Biol. Eng. Comput. 60, 2693–2706. doi:10.1007/s11517-022-02622-z

Zou, K. L., Chen, X., Wang, Y. L., Zhang, C., and Zhang, F. (2021). A modified U-Net with a specific data argumentation method for semantic segmentation of weed images in the field. Comput. Electron. Agr. 187, 106242. doi:10.1016/j.compag.2021.106242

Keywords: pituitary adenoma, image segmentation, U-Net, parallel dilated convolutional, residual connections

Citation: Zhang Q, Cheng J, Zhou C, Jiang X, Zhang Y, Zeng J and Liu L (2023) PDC-Net: parallel dilated convolutional network with channel attention mechanism for pituitary adenoma segmentation. Front. Physiol. 14:1259877. doi: 10.3389/fphys.2023.1259877

Received: 17 July 2023; Accepted: 16 August 2023;
Published: 30 August 2023.

Edited by:

Ruizheng Shi, Central South University, China

Reviewed by:

Sungon Lee, Hanyang University ERICA, Republic of Korea
Hao Sun, Ludong University, China
Youdong Zhang, Chongqing University, China

Copyright © 2023 Zhang, Cheng, Zhou, Jiang, Zhang, Zeng and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jianzhen Cheng, Jianzhencheng@ocibe.com, 906989116@qq.com; Chun Zhou, zc21_21@163.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.