<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Comput. Neurosci.</journal-id>
<journal-title>Frontiers in Computational Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Comput. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-5188</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fncom.2023.1091180</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Technology and Code</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Deep learning on lateral flow immunoassay for the analysis of detection data</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Liu</surname> <given-names>Xinquan</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c002"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/2047712/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Du</surname> <given-names>Kang</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Lin</surname> <given-names>Si</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Wang</surname> <given-names>Yan</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>School of Precision Instrument and Optoelectronics Engineering, Tianjin University</institution>, <addr-line>Tianjin</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Tianjin Boomscience Technology Co., Ltd.</institution>, <addr-line>Tianjin</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>Beijing Savant Biotechnology Co., Ltd.</institution>, <addr-line>Beijing</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Zhanhui Wang, China Agricultural University, China</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Liang Xue, Shanghai University of Electric Power, China; Shouyu Wang, Jiangnan University, China</p></fn>
<corresp id="c001">&#x002A;Correspondence: Yan Wang, <email>wangyan@tju.edu.cn</email></corresp>
<corresp id="c002">Xinquan Liu, <email>liuxinquantju@163.com</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>26</day>
<month>01</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>17</volume>
<elocation-id>1091180</elocation-id>
<history>
<date date-type="received">
<day>06</day>
<month>11</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>13</day>
<month>01</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2023 Liu, Du, Lin and Wang.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Liu, Du, Lin and Wang</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Lateral flow immunoassay (LFIA) is an important detection method in <italic>in vitro</italic> diagnosis and has been widely used in the medical industry. Because of the complexity of LFIA, it is difficult to analyze all peak shapes with classical methods. Classical methods are generally peak-finding methods, which cannot distinguish a normal peak from an interference or noise peak and also struggle to find weak peaks. Here, a novel method based on deep learning was proposed, which effectively solves these problems. The method has two steps: first, a classification model classifies the data and screens out double-peaks data; second, an improved U-Net segmentation model segments the integral regions. After training, the accuracy of the classification model on the validation set was 99.59%, and with a combined loss function (WBCE + DSC), the intersection over union (IoU) of the segmentation model on the validation set was 0.9680. This method was used in a hand-held fluorescence immunochromatography analyzer designed independently by our team. A ferritin standard curve was created, and the T/C value correlated well with standard concentrations in the range of 0&#x2013;500 ng/ml (<italic>R</italic><sup>2</sup> = 0.9986). The coefficients of variation (CVs) were &#x2264; 1.37%. The recovery rate ranged from 96.37 to 105.07%. Interference or noise peaks are the biggest obstacle in the use of hand-held instruments and often lead to peak-finding errors; because the use environment of hand-held devices is changeable and flexible, it is not convenient to provide technical support. This method greatly reduced the failure rate of peak finding, which can reduce the customer&#x2019;s need for instrument technical support. This study provides a new direction for the data processing of point-of-care testing (POCT) instruments based on LFIA.</p>
</abstract>
<kwd-group>
<kwd>lateral flow immunoassay</kwd>
<kwd>data processing</kwd>
<kwd>point of care testing</kwd>
<kwd>deep learning</kwd>
<kwd>convolutional neural network</kwd>
<kwd>U-Net model</kwd>
</kwd-group>
<counts>
<fig-count count="9"/>
<table-count count="6"/>
<equation-count count="11"/>
<ref-count count="36"/>
<page-count count="13"/>
<word-count count="7773"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>1. Introduction</title>
<p><italic>In vitro</italic> diagnosis (IVD) generally refers to detecting targets in the blood, urine, sweat, saliva, tissue fluid, or tissue outside the body, and is mainly used to diagnose diseases, prevent infections, manage chronic diseases, track pathological changes, evaluate therapeutic effects, and other aspects of health care (<xref ref-type="bibr" rid="B31">Yang et al., 2021</xref>; <xref ref-type="bibr" rid="B19">Peng et al., 2022</xref>). Currently, the instruments used for IVD include biochemical, immunological, molecular, microbial, and blood diagnosis as well as point-of-care testing (POCT) (<xref ref-type="bibr" rid="B10">Haung and Ho, 1998</xref>; <xref ref-type="bibr" rid="B30">Xiao and Lin, 2015</xref>; <xref ref-type="bibr" rid="B3">Chen et al., 2017</xref>; <xref ref-type="bibr" rid="B27">Vila et al., 2017</xref>; <xref ref-type="bibr" rid="B14">Li et al., 2020</xref>; <xref ref-type="bibr" rid="B15">Liao et al., 2021</xref>). Compared with previous instruments, POCT has the characteristics of high speed, convenience, and low cost; therefore, it has received considerable attention from the medical industry (<xref ref-type="bibr" rid="B25">Singer et al., 2005</xref>; <xref ref-type="bibr" rid="B4">Damhorst et al., 2019</xref>).</p>
<p>Point-of-care testing is a patient-centered method for rapid sample detection using portable analytical instruments or simple reagents (<xref ref-type="bibr" rid="B16">Luppa et al., 2011</xref>; <xref ref-type="bibr" rid="B7">Florkowski et al., 2017</xref>). There are many kinds of POCT instruments, among which the lateral flow immunoassay (LFIA), based on paper-based and fluorescence detection technology, is increasingly being applied (<xref ref-type="bibr" rid="B2">Chen and Yang, 2015</xref>). It has the advantages of being cheap, lightweight, and easy to handle, and the fluorescence detection method enables quantitative detection of the sample. These advantages make LFIA highly competitive, especially in developing countries where budget is an important criterion (<xref ref-type="bibr" rid="B29">Wu et al., 2018</xref>).</p>
<p>According to the published literature, LFIA technology has successfully been used to detect biomarkers in many fields. Our research group, in collaboration with several medical units, used fluorescent microsphere labeling and immunochromatography technology to successfully detect COVID-19 and evaluated the analytical ability and clinical application of this technology (<xref ref-type="bibr" rid="B32">Zhang et al., 2020</xref>). <xref ref-type="bibr" rid="B11">Hu et al. (2016)</xref> developed a highly sensitive quantitative lateral flow analysis method for protein biomarkers using fluorescent nanospheres (FNs), which can be used to detect the concentration of CRP in the human body with a detection limit of 27.8 pM. Lee et al. developed a novel portable fluorescence sensor that integrates a lateral flow assay with quantum dot (Qdot) labeling and a mobile phone reader for the detection of <italic>Taenia solium</italic> T24H antibodies in human serum (<xref ref-type="bibr" rid="B13">Lee et al., 2019</xref>). <xref ref-type="bibr" rid="B12">Huang et al. (2020)</xref> used a double-antibody sandwich immunofluorescence method based on the combination of nano-europium (EUNP) and LFIA to detect IL6 with a wide linear range (2&#x2013;500 pg/ml) and high sensitivity (0.37 pg/ml). <xref ref-type="bibr" rid="B24">Shao et al. (2017)</xref> used the double-antibody sandwich immunofluorescence method combined with time-resolved immunofluorescence (TRFIA) and LFIA to detect human procalcitonin with high sensitivity (0.08 ng/ml). <xref ref-type="bibr" rid="B8">Gong et al. (2019)</xref> developed a miniaturized and portable UCNP-LFA platform that can be used to detect small molecules (ochratoxin A, OTA), heavy metal ions (Hg<sup>2+</sup>), bacteria (<italic>Salmonella</italic>, SE), nucleic acids (hepatitis B virus, HBV), and proteins (growth-stimulating expressed gene 2, ST-2).</p>
<p>As shown in <xref ref-type="fig" rid="F1">Figure 1</xref>, there are two schemes of fluorescence detection technology for LFIA: a photoelectric scanning data acquisition platform based on a Si photodiode, which is the current mainstream technology because of its better performance, and a data acquisition platform based on CCD photography (<xref ref-type="bibr" rid="B23">Shao et al., 2019</xref>). The classical approach to LFIA data processing is to locate the C-/T- lines of the strip with a peak-finding method. In this way, normal peaks cannot be distinguished from interference or noise peaks, so a wrong peak is easily taken as a normal peak, giving an incorrect detection result. These methods also perform poorly in identifying weak and overlapping peaks while maintaining a low false-discovery rate. <xref ref-type="bibr" rid="B20">Qin et al. (2020)</xref> used a U-Net neural network, a variant of the convolutional neural network (CNN), to extract the region of interest (ROI) containing the T-/C-lines of test strips, but that approach was only applied to CCD photography. In this study, we proposed a novel data processing method that can be applied to both the CCD photography and photoelectric scanning data acquisition platforms. When applied to CCD photography, the data only need to be converted to one dimension, which can be done by averaging the pixels of each row parallel to the fluorescent band. This method greatly reduced the failure rate of peak finding, which can reduce the customer&#x2019;s need for instrument technical support, and provided a new direction for the data processing of POCT instruments based on LFIA.</p>
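The row-averaging conversion from a CCD image to a one-dimensional profile can be sketched as follows (a minimal numpy illustration; the tiny 3&#x00D7;3 image and its orientation are invented for the example, assuming image rows run parallel to the fluorescent band):

```python
import numpy as np

def image_to_profile(image: np.ndarray) -> np.ndarray:
    """Collapse a grayscale strip image to a 1-D intensity profile by
    averaging the pixels of each row (rows parallel to the band)."""
    return image.mean(axis=1)

strip = np.array([[10.0, 12.0, 11.0],
                  [50.0, 52.0, 48.0],   # bright row: part of a fluorescent line
                  [ 9.0, 11.0, 10.0]])
profile = image_to_profile(strip)  # one value per scan position
```

The resulting profile can then be fed to the same one-dimensional pipeline used for the photodiode data.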
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>Schematic diagram of LFIA. A hand-held fluorescence immunoassay analyzer which was used to measure fluorescent intensity controlled by a mobile phone <italic>via</italic> Bluetooth. Its sensor can be CCD or Si photodiode.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1091180-g001.tif"/>
</fig>
<p>Compared with the classical peak-finding method, the method proposed in this study has the following advantages:</p>
<list list-type="simple">
<list-item>
<label>(1)</label>
<p>Classical peak-finding methods combined with threshold-based techniques cannot identify peak shapes. They can only find local maxima according to certain rules and cannot recognize certain noise signals as invalid data. For example, according to the rules set in section &#x201C;3.4. Comparison with classical methods,&#x201D; they will misjudge peak 1 as the C-peak in <xref ref-type="fig" rid="F2">Figures 2A&#x2013;G</xref> and peak 2 as the T-peak, producing incorrect detection results. They will also misjudge peak 1 as the C-peak in <xref ref-type="fig" rid="F2">Figure 2H</xref>, where no T-peak can be found, resulting in a false concentration of 0. In fact, all the data shown in <xref ref-type="fig" rid="F2">Figure 2</xref> were judged invalid by the technician. Owing to the diversity of sample types and detection items, coupled with problems in user operation, various kinds of invalid data can be generated. The deep-learning-based classification model proposed in this study can distinguish peak shapes and identifies these invalid data as noise (class 1) or only T-peak (class 3), thus solving this problem well.</p>
</list-item>
</list>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Noise (Class 1) of different shapes <bold>(A&#x2013;H)</bold>. The classical methods will misjudge peak 1 as C-peak in panels <bold>(A&#x2013;G)</bold>, and misjudge peak 2 as T-peak, resulting in incorrect detection results. They will also misjudge peak 1 as C-peak in panel <bold>(H)</bold>, and no T-peak can be found, resulting in a false concentration of 0.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1091180-g002.tif"/>
</fig>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>Panels <bold>(A&#x2013;D)</bold> are only C-peak (Class 2) of different shapes, and panels <bold>(E,F)</bold> are only T-peak (Class 3) of different shapes.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1091180-g003.tif"/>
</fig>
<list list-type="simple">
<list-item>
<label>(2)</label>
<p>Classical peak-finding methods cannot solve the problem of interference peaks, especially interference peaks around a weak T-peak, as shown in <xref ref-type="fig" rid="F4">Figure 4</xref>. Interfering peaks may appear anywhere, to the left or right of a valid peak. Classical peak-finding methods combined with threshold-based techniques, such as setting an interval for the positions of the C-peak and T-peak or setting a threshold for the height of the C-peak, are not completely reliable, because the positions of the C- and T- peaks change with the assembly position of the nitrocellulose membrane, the insertion position of the test strip, differences between instruments, sampling speed, and so on; errors occur when the set range is exceeded. For example, classical peak-finding methods will misjudge peak 1 as the C-peak in <xref ref-type="fig" rid="F4">Figure 4B</xref>, peak 2 as the T-peak in <xref ref-type="fig" rid="F4">Figure 4C</xref>, and peak 1 or 2 as the T-peak in <xref ref-type="fig" rid="F4">Figure 4D</xref>. In addition, classical peak-finding methods perform poorly when looking for a weak T-peak: they often fail to find the T-peak and misjudge the tailing peak (peak 2 in <xref ref-type="fig" rid="F4">Figures 4E, F</xref>) as the T-peak. The improved U-Net segmentation model proposed in this study can distinguish peak shapes and thus solves this problem well.</p>
</list-item>
</list>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>Double-peaks (Class 4) of different shapes <bold>(A&#x2013;F)</bold>. Peaks 1 and 2 are interference peaks. The classic methods will misjudge peak 1 as C-peak in panel <bold>(B)</bold>, misjudge peak 2 as T-peak in panel <bold>(C)</bold>, and peak 1 or 2 as T-peak in panel <bold>(D)</bold>. They often fail to find T-peak and misjudge the tailing peak [peak 2 in panels <bold>(E,F)</bold>] as T-peak.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1091180-g004.tif"/>
</fig>
<list list-type="simple">
<list-item>
<label>(3)</label>
<p>For classical methods, a minimum threshold is generally set for the height of the C-peak. If the threshold is too small, accuracy is greatly reduced by interference peaks or invalid data; if it is too large, data with a low C-peak, as produced by competitive-format test strips, are handled poorly. This is an unavoidable shortcoming of classical methods, but the method proposed in this study does not have this problem.</p>
</list-item>
<list-item>
<label>(4)</label>
<p>The method proposed in this study can enhance its generalization ability by continually learning from new types of data, an ability classical algorithms obviously lack. They are only fixed peak-finding rules and threshold judgments and cannot accurately identify noise peaks that resemble valid peaks. In particular, noise data are ever-changing, and it is difficult for classical methods to suit every new type of data.</p>
</list-item>
</list>
</sec>
<sec id="S2" sec-type="materials|methods">
<title>2. Materials and methods</title>
<sec id="S2.SS1">
<title>2.1. Materials</title>
<p>The data used for training, validation, and testing in this study were obtained from Beijing Savant Biotechnology Co., Ltd. These data are the results of testing a variety of items; the detection items mainly included human ferritin, vitamin D, D-dimer, and C-reactive protein. The sample types mainly included whole blood, serum, and plasma.</p>
</sec>
<sec id="S2.SS2">
<title>2.2. Principle of LFIA</title>
<p>A double-antibody sandwich test strip with fluorescent microspheres (FMS) as the carrier was used to illustrate the detection principle of LFIA. The double-antibody sandwich structure is shown in <xref ref-type="fig" rid="F1">Figure 1</xref>. The test strip was composed of a sample pad, conjugate pad, nitrocellulose membrane (NC membrane), absorbent pad, and plastic backing card. After the sample was dripped into the sample pad, it was subjected to immunochromatography under capillarity. The detection antibody-FMS (DAb-FMS) and rabbit IgG antibody-FMS (Rabbit-Ab-FMS) were placed on the conjugate pad. There are T and C lines on the NC membrane; the T line is coated with capture antibody (CAb), and the C line is coated with goat anti-rabbit IgG antibody (GAR-Ab). The absorbent pad causes liquid to flow <italic>via</italic> capillary action. The plastic backing card plays the role of fixing and supporting.</p>
<p>When the sample solution containing the analyte was added to the sample pad, it was transported laterally along the NC membrane <italic>via</italic> capillary action. When the sample flowed through the conjugate pad, the antigen in the sample reacted with DAb to form a DAb-FMS/antigen complex. When the complex flowed to the T line on the NC membrane, the antigen bound the CAb on the T line to form a DAb-FMS/antigen/CAb complex. Rabbit-Ab-FMS, which did not participate in this reaction, continued to flow forward to the C line and reacted with GAR-Ab.</p>
<p>Generally, the entire reaction process takes approximately 15 min. After immunochromatography is completed, the excitation light generated by the scanning mechanism irradiates the T and C lines, and fluorescence is generated. While scanning the NC membrane, the fluorescence intensity produced at each point of the scan was recorded using a photodiode, finally forming the peak data shown in <xref ref-type="fig" rid="F1">Figure 1</xref>. The ratio of the fluorescence intensities of the two lines can be obtained by calculating the ratio of the peak areas of the T- and C-peaks. The concentration of the antigen in the sample is proportional to this T/C value, so by establishing a standard curve, the antigen concentration in a sample can be calculated.</p>
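Once the integral regions are known, the T/C calculation reduces to integrating the signal over the two peak regions and taking the ratio. A minimal numpy sketch (the synthetic Gaussian signal and the region indices are invented for illustration, not taken from the paper):

```python
import numpy as np

def peak_area(signal: np.ndarray, region: tuple) -> float:
    """Trapezoidal integral of the signal over a half-open index range."""
    start, stop = region
    return float(np.trapz(signal[start:stop]))

# Synthetic scan: a weak T-peak near x = 0.3 and a C-peak twice as large
# near x = 0.7 (both positions are made up for this example).
x = np.linspace(0.0, 1.0, 200)
signal = (np.exp(-((x - 0.3) ** 2) / 0.001)
          + 2.0 * np.exp(-((x - 0.7) ** 2) / 0.001))

t_over_c = peak_area(signal, (40, 80)) / peak_area(signal, (120, 160))
```

Because both peaks have the same width and the T-peak has half the amplitude of the C-peak, the ratio comes out close to 0.5; a standard curve then maps this value to a concentration.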
</sec>
<sec id="S2.SS3">
<title>2.3. Data augmentation</title>
<p>During the testing of clinical samples, data with four different peak shapes were obtained: noise (class 1), only C-peak (class 2), only T-peak (class 3), and double-peaks (C-peak and T-peak, class 4). Class 1 was generated by a fluorescence analyzer scanning a fouled NC membrane, whereas class 2 was generated by detecting samples with a concentration of 0. However, data of class 3 were very few and were generally generated from test strips in which the C-peak had disappeared. To better train the model, the C-peak of double-peaks data (class 4) was deleted and replaced with background by cubic spline interpolation; in this way, a large amount of data containing only the T-peak was generated manually.</p>
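This augmentation can be sketched as follows: the C-peak region is excluded, a cubic spline is fitted through the surrounding background, and the gap is filled with the spline values. A minimal scipy illustration (the peak positions, widths, and region indices are invented for the example):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def remove_peak(signal: np.ndarray, start: int, stop: int) -> np.ndarray:
    """Replace signal[start:stop] with a cubic-spline interpolation of the
    points outside that region, turning a peak into background."""
    x = np.arange(len(signal))
    keep = np.concatenate([x[:start], x[stop:]])   # background support points
    spline = CubicSpline(keep, signal[keep])
    out = signal.copy()
    out[start:stop] = spline(x[start:stop])        # fill the excised region
    return out

# Synthetic double-peaks trace: T-peak near index 60, C-peak near index 140.
x = np.arange(200, dtype=float)
trace = 5.0 + np.exp(-((x - 60) ** 2) / 20) + 2.0 * np.exp(-((x - 140) ** 2) / 20)

only_t = remove_peak(trace, 120, 160)  # delete the C-peak -> class 3 data
```

The T-peak region is untouched, so the manufactured trace keeps its original T-peak shape on a smooth background.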
</sec>
<sec id="S2.SS4">
<title>2.4. Label annotation</title>
<p>To train the model, a large amount of labeled data is required. Data annotation is a complex process, and the quality of the annotation directly affects the results of model training. This method includes two steps corresponding to a classification model and a segmentation model, and the training data of the two models must be annotated separately. The labeled dataset was randomly divided into training and validation sets.</p>
<p>The entire dataset for the classification model includes approximately 4,100 detection data covering the four types of peak shapes, namely, noise, only C-peak, only T-peak, and double-peaks. These four types were one-hot encoded as noise (class 1), only C-peak (class 2), only T-peak (class 3), and double-peaks (class 4), as shown in <xref ref-type="fig" rid="F2">Figures 2</xref>&#x2013;<xref ref-type="fig" rid="F4">4</xref>. There were approximately 900 examples of noise (class 1), 900 of only C-peak (class 2), 900 of only T-peak (class 3), and 1,400 of double-peaks (class 4). The peak shapes of the detection data are particularly complex and diverse, and only a few typical ones are shown here.</p>
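The one-hot labels used here take the usual form: a length-4 vector with a single 1 at the class index. A small numpy illustration (the mapping of class numbers to indices 0&#x2013;3 is an assumption for the example):

```python
import numpy as np

def one_hot(class_index: int, num_classes: int = 4) -> np.ndarray:
    """One-hot label vector: 1.0 at class_index, 0.0 elsewhere.
    Assumed index order: 0=noise, 1=only C-peak, 2=only T-peak, 3=double-peaks."""
    label = np.zeros(num_classes)
    label[class_index] = 1.0
    return label

double_peaks_label = one_hot(3)  # label for a class-4 (double-peaks) trace
```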
<p>The dataset of the segmentation model includes approximately 1,400 pieces of detection data, that is, all the data of class 4. In this study, annotation software was written in Python, with which the integral regions of the C-peak and T-peak were annotated for all 1,400 fluorescence detection data.</p>
</sec>
<sec id="S2.SS5">
<title>2.5. Network architecture</title>
<p>A convolutional neural network is an artificial neural network specially designed to process data such as images or videos. It generally has three kinds of layers, namely, convolution, pooling, and fully connected layers. In the convolution layer, input samples are convolved with a kernel, and the discrete convolution is defined as:</p>
<disp-formula id="S2.Ex1">
<mml:math id="M1">
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mo>&#x002A;</mml:mo>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo rspace="5.8pt" stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:munder>
<mml:mo largeop="true" movablelimits="false" symmetric="true">&#x2211;</mml:mo>
<mml:mi mathvariant="normal">&#x03C4;</mml:mi>
</mml:munder>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mi>f</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi mathvariant="normal">&#x03C4;</mml:mi>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>&#x22C5;</mml:mo>
<mml:mi>g</mml:mi>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>-</mml:mo>
<mml:mi mathvariant="normal">&#x03C4;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>f</italic> and <italic>g</italic> are two functions.</p>
<p>Pooling is used to extract high-dimensional features, and the most commonly used ones are maximum and average pooling. In a fully connected layer, all neurons in the current layer are interconnected with every neuron in the next layer.</p>
<p>As shown in <xref ref-type="fig" rid="F5">Figure 5</xref>, the entire data-processing flow consists of two steps. First, a classification model was used to classify the input data. Second, after analyzing the input data, if the output result was class 4 (double-peaks), the data were imported into the next segmentation model to realize the data segmentation of the C-peak and T-peak areas.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>Each blue box represents a feature map of a layer. The number above each blue box is the number of channels, and the number in the lower left corner is the number of data points. The arrows represent different operations. <bold>(A)</bold> Neural network architecture of the classification model. The first blue box represents the format of the input data. After being processed by the classification model, the input data were finally classified into four classes, namely, Class 1 (Noise), Class 2 (Only C-peak), Class 3 (Only T-peak), and Class 4 (Double-peaks). <bold>(B)</bold> Neural network architecture of the segmentation model, through which the ROI of the test strip containing the T-/C- peaks can be extracted.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1091180-g005.tif"/>
</fig>
<p>The input of the classification model had two channels. Because the fluorescent signal has strong background noise, we subtracted the background and then normalized the signal to obtain the first channel, using the following formula:</p>
<disp-formula id="S2.Ex2">
<mml:math id="M2">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi mathvariant="normal">X</mml:mi>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>Y<sub>1</sub></italic> is the first channel, <italic>X</italic> is the raw input data, and <italic>x</italic><sub><italic>min</italic></sub> and <italic>x</italic><sub><italic>max</italic></sub> are the minimum and maximum values of the raw data. To make the model learn the peak shape rather than the intensity, we applied a logarithmic operation to the background-subtracted signal to obtain the second channel, using the following formula:</p>
<disp-formula id="S2.Ex3">
<mml:math id="M3">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>Y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>g</mml:mi>
<mml:mn>10</mml:mn>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>X</mml:mi>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>g</mml:mi>
<mml:mn>10</mml:mn>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>Y<sub>2</sub></italic> is the second channel, <italic>X</italic> is the raw input data, and <italic>x</italic><sub><italic>min</italic></sub> and <italic>x</italic><sub><italic>max</italic></sub> are the minimum and maximum values of the raw data.</p>
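The two input channels above can be sketched in numpy as follows. Note one assumption: the log formula is undefined where <italic>X</italic> equals <italic>x<sub>min</sub></italic>, so this sketch adds 1 inside both logarithms to keep the minimum at 0; the paper's exact handling of that point is not specified.

```python
import numpy as np

def make_channels(raw: np.ndarray) -> np.ndarray:
    """Build the two-channel model input from one raw fluorescence trace:
    channel 1 = min-max normalization, channel 2 = its log10 counterpart
    (with +1 added as an assumption, to avoid log10(0) at the minimum)."""
    x_min, x_max = raw.min(), raw.max()
    y1 = (raw - x_min) / (x_max - x_min)
    y2 = np.log10(raw - x_min + 1.0) / np.log10(x_max - x_min + 1.0)
    return np.stack([y1, y2])

channels = make_channels(np.array([10.0, 20.0, 110.0]))  # shape (2, N)
```

The log channel compresses large intensity differences, which pushes the model toward learning peak shape rather than absolute intensity.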
<p>The network architecture of the classification model is illustrated in <xref ref-type="fig" rid="F5">Figure 5A</xref>. The entire network consisted of 10 layers. The first seven layers were conv1d + ReLU + MaxPool1d (<xref ref-type="bibr" rid="B1">Acharya et al., 2017</xref>; <xref ref-type="bibr" rid="B9">Gu et al., 2018</xref>; <xref ref-type="bibr" rid="B33">Zhang et al., 2019</xref>), which condensed the input data into four high-dimensional features with 1,024 channels. The eighth layer extended the number of channels to 2,048. Next, Max + Transposition was used to extract the maximum value from the four high-dimensional features (<xref ref-type="bibr" rid="B9">Gu et al., 2018</xref>). To improve classification accuracy, we used a dropout layer before the fully connected layer. The last layer (Dropout + Fully-connected + SoftMax) classified the data into one of the four classes (<xref ref-type="bibr" rid="B26">Srivastava et al., 2014</xref>).</p>
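A single conv1d + ReLU + MaxPool1d stage of the kind stacked in the first seven layers can be sketched in plain numpy for one channel (the signal, kernel weights, and pool size here are arbitrary placeholders, not the trained parameters):

```python
import numpy as np

def conv_relu_maxpool(signal: np.ndarray, kernel: np.ndarray,
                      pool: int = 2) -> np.ndarray:
    """One conv1d + ReLU + MaxPool1d stage (single channel, stride 1)."""
    conv = np.convolve(signal, kernel, mode="valid")   # 1-D convolution
    relu = np.maximum(conv, 0.0)                       # ReLU activation
    n = len(relu) // pool * pool                       # drop the ragged tail
    return relu[:n].reshape(-1, pool).max(axis=1)      # max pooling

out = conv_relu_maxpool(np.array([0.0, 1.0, 3.0, 1.0, 0.0, 2.0, 4.0, 2.0]),
                        np.array([1.0, -1.0]))
```

Stacking such stages halves the length at each step while the learned kernels extract progressively higher-level shape features, mirroring the channel growth shown in Figure 5A.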
<p>We designed an improved U-Net segmentation model with reference to the classic U-Net model (<xref ref-type="bibr" rid="B21">Ronneberger et al., 2015</xref>); the network architecture is shown in <xref ref-type="fig" rid="F5">Figure 5B</xref>. We changed the input data into two channels. This model had four parts: an input unit, an encoding structure, a decoding structure, and an output unit (<xref ref-type="bibr" rid="B21">Ronneberger et al., 2015</xref>; <xref ref-type="bibr" rid="B18">Oh et al., 2019</xref>; <xref ref-type="bibr" rid="B28">Wang et al., 2021</xref>; <xref ref-type="bibr" rid="B34">Zheng et al., 2021</xref>; <xref ref-type="bibr" rid="B36">Zunair and Ben Hamza, 2021</xref>). The encoding structure used four units to reduce the dimensions while the number of feature maps was increased gradually. To reduce training time, we added batch normalization after each convolution (<xref ref-type="bibr" rid="B17">Melnikov et al., 2020</xref>). In the decoding structure, each step was symmetrical with the encoding part to recover the data dimensions; the upsampling sections allowed the network to propagate context information to higher-resolution layers. In the last layer, each point of the fluorescence data was classified as belonging to an integral region or not.</p>
</sec>
<sec id="S2.SS6">
<title>2.6. Loss function</title>
<p>The classification model assigned the data to one of four classes, i.e., a four-class classification problem. Multi-class neural networks generally use cross-entropy loss as the loss function. The mathematical expression of this loss function in the program is:</p>
<disp-formula id="S2.Ex4">
<mml:math id="M4">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>C</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mpadded>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>M</mml:mi>
</mml:mfrac>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:munderover>
<mml:mo largeop="true" movablelimits="false" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>j</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:munderover>
<mml:mo largeop="true" movablelimits="false" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>i</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>C</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>g</mml:mi>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>M</italic> is the batch size, <italic>C</italic> is the total number of classes (four), <italic>y</italic><sub><italic>ij</italic></sub> is the real label, and <italic>o</italic><sub><italic>ij</italic></sub> is the predictive output.</p>
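<p>As a concrete check of the formula, the loss can be evaluated directly in NumPy from one-hot labels and SoftMax outputs (the numbers are illustrative only):</p>

```python
import numpy as np

def cross_entropy(y, o):
    """L_CE = -(1/M) * sum_j sum_i y_ij * log(o_ij).
    y: one-hot labels, o: SoftMax outputs, both of shape (M, C)."""
    M = y.shape[0]
    return -np.sum(y * np.log(o)) / M

y = np.array([[0, 1, 0, 0], [0, 0, 0, 1]])                    # true classes 2 and 4
o = np.array([[0.1, 0.7, 0.1, 0.1], [0.05, 0.05, 0.1, 0.8]])  # SoftMax outputs
print(round(cross_entropy(y, o), 4))  # 0.2899
```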
<p>The Dice coefficient (also known as the Dice score or DSC) is a function of the set similarity measurement, which is usually used to calculate the similarity between two sets (<xref ref-type="bibr" rid="B22">Saeedizadeh et al., 2021</xref>), with values ranging from 0 to 1. Here, it was used to measure the overlap between the ground-truth and predicted masks, where 0 indicates no overlap and 1 indicates complete overlap.</p>
<disp-formula id="S2.Ex5">
<mml:math id="M5">
<mml:mrow>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>B</mml:mi>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>&#x2229;</mml:mo>
<mml:mi>B</mml:mi>
</mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mi>B</mml:mi>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>A</italic> and <italic>B</italic> denote the predicted and ground-truth masks, respectively.</p>
<p>Because training minimizes the loss function, we used 1 &#x2212; DSC as the final loss function. The mathematical expression of this loss function in the program is:</p>
<disp-formula id="S2.Ex6">
<mml:math id="M6">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>D</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>S</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>C</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>M</mml:mi>
</mml:mfrac>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:munderover>
<mml:mo largeop="true" movablelimits="false" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>j</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:munderover>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>i</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>i</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>i</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>M</italic> is the batch size, <italic>N</italic> is the number of sample data, <italic>y</italic><sub><italic>ij</italic></sub> is the ground-truth mask, and <italic>o</italic><sub><italic>ij</italic></sub> is the predictive mask.</p>
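<p>A NumPy sketch of this Dice loss (the small epsilon guarding against empty masks is an implementation detail, not from the text):</p>

```python
import numpy as np

def dice_loss(y, o, eps=1e-8):
    """L_DSC = 1 - (1/M) * sum_j [ 2*sum_i(y_ij*o_ij) / (sum_i y_ij + sum_i o_ij) ].
    y: ground-truth masks, o: predicted masks, both of shape (M, N)."""
    inter = (y * o).sum(axis=1)
    denom = y.sum(axis=1) + o.sum(axis=1)
    return 1.0 - (2 * inter / (denom + eps)).mean()

y = np.array([[0, 1, 1, 1, 0, 0]], dtype=float)   # ground truth: 3 region points
o = np.array([[0, 1, 1, 0, 0, 0]], dtype=float)   # prediction misses one point
print(round(dice_loss(y, o), 2))  # 0.2, since DSC = 2*2/(3+2) = 0.8
```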
<p>For unbalanced sample data, weighted binary cross-entropy can be used as the training loss function; compared with the standard cross-entropy loss, it obtains better results when the numbers of positive and negative points are unbalanced (<xref ref-type="bibr" rid="B35">Zhu et al., 2019</xref>). The mathematical expression of this loss function in the program is:</p>
<disp-formula id="S2.Ex7">
<mml:math id="M7">
<mml:mrow>
<mml:msub>
<mml:mi>L</mml:mi>
<mml:mrow>
<mml:mi>W</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>B</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>C</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>=</mml:mo>
<mml:mrow>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>M</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">&#x00D7;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:munderover>
<mml:mo largeop="true" movablelimits="false" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:munderover>
<mml:mo largeop="true" movablelimits="false" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:mi>g</mml:mi>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mn>0</mml:mn>
</mml:msub>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo rspace="5.8pt" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>l</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>g</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>M</italic> is the batch size, <italic>N</italic> is the number of sample data, <italic>y</italic><sub><italic>ij</italic></sub> is the ground-truth mask, and <italic>o</italic><sub><italic>ij</italic></sub> is the predictive mask. <italic>w<sub>1</sub></italic> and <italic>w<sub>0</sub></italic> correspond to the weights labeled 1 and 0, respectively.</p>
<p>In this study, the mathematical expression for the weight parameter <italic>w<sub>c</sub></italic> is:</p>
<disp-formula id="S2.Ex8">
<mml:math id="M8">
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>w</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>-</mml:mo>
<mml:msub>
<mml:mi>N</mml:mi>
<mml:mi>c</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>N</italic> represents the total number of data points for each sample and <italic>N<sub>c</sub></italic> represents the number of data points in class <italic>c</italic>.</p>
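<p>A NumPy sketch combining the weighted binary cross-entropy with the weight definition above; the sample mask and predictions are invented for illustration:</p>

```python
import numpy as np

def class_weight(y, c):
    """w_c = (N - N_c) / N, where N_c counts points of class c in mask y."""
    N = y.size
    return (N - np.sum(y == c)) / N

def weighted_bce(y, o, w1, w0, eps=1e-8):
    """L_WBCE = -(1/(M*N)) * sum_j sum_i [w1*y*log(o) + w0*(1-y)*log(1-o)].
    y: ground-truth masks, o: predicted masks, both of shape (M, N)."""
    M, N = y.shape
    terms = w1 * y * np.log(o + eps) + w0 * (1 - y) * np.log(1 - o + eps)
    return -terms.sum() / (M * N)

y = np.array([[0., 0., 0., 0., 1., 1.]])          # positives are the minority
o = np.array([[0.1, 0.1, 0.2, 0.1, 0.9, 0.8]])
w1 = class_weight(y[0], 1)                        # rare class gets the larger weight
w0 = class_weight(y[0], 0)
print(round(w1, 3), round(w0, 3))  # 0.667 0.333
```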
</sec>
<sec id="S2.SS7">
<title>2.7. Model hyper-parameters</title>
<p>After labeling the data, we trained the models. The classification and segmentation models were trained separately; their training parameters are listed in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Important parameters used in training the two models.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Network parameters</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Classification model</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Segmentation model</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Batch size</td>
<td valign="top" align="center">8</td>
<td valign="top" align="center">8</td>
</tr>
<tr>
<td valign="top" align="left">Epoch</td>
<td valign="top" align="center">30</td>
<td valign="top" align="center">100</td>
</tr>
<tr>
<td valign="top" align="left">Activation function</td>
<td valign="top" align="center">ReLU</td>
<td valign="top" align="center">ReLU</td>
</tr>
<tr>
<td valign="top" align="left">Pooling mode</td>
<td valign="top" align="center">MaxPool</td>
<td valign="top" align="center">AvgPool</td>
</tr>
<tr>
<td valign="top" align="left">Pooling size</td>
<td valign="top" align="center">2</td>
<td valign="top" align="center">2</td>
</tr>
<tr>
<td valign="top" align="left">Optimizer</td>
<td valign="top" align="center">Adam</td>
<td valign="top" align="center">Adam</td>
</tr>
<tr>
<td valign="top" align="left">Learning rate</td>
<td valign="top" align="center">0.001</td>
<td valign="top" align="center">0.001</td>
</tr>
<tr>
<td valign="top" align="left">Convolution kernel</td>
<td valign="top" align="center">3</td>
<td valign="top" align="center">3</td>
</tr>
<tr>
<td valign="top" align="left">Upsample</td>
<td valign="top" align="center">&#x2013;</td>
<td valign="top" align="center">Nearest</td>
</tr>
<tr>
<td valign="top" align="left">Input size</td>
<td valign="top" align="center">512 &#x00D7; 2</td>
<td valign="top" align="center">512 &#x00D7; 2</td>
</tr>
</tbody>
</table></table-wrap>
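<p>The settings in Table 1 map onto a standard PyTorch training loop. The sketch below wires them together with a stand-in model and random data; it is illustrative only, not the paper's training code.</p>

```python
import torch
import torch.nn as nn

# Stand-in classifier; the paper's actual model is described in Section 2.5
model = nn.Sequential(nn.Conv1d(2, 16, 3, padding=1), nn.ReLU(),
                      nn.MaxPool1d(2), nn.Flatten(), nn.Linear(16 * 256, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Table 1: Adam, lr 0.001
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 2, 512)          # Table 1: batch size 8, input size 512 x 2
y = torch.randint(0, 4, (8,))       # random stand-in labels
for epoch in range(3):              # Table 1 uses 30 epochs for classification
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(loss.item() >= 0)  # True
```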
</sec>
</sec>
<sec id="S3" sec-type="results">
<title>3. Results</title>
<sec id="S3.SS1">
<title>3.1. Evaluation metrics of models</title>
<p>Accuracy, which is the proportion of correctly predicted samples to the total number of samples, is generally used as the evaluation metric of a multi-classification model. The mathematical expression of accuracy in the program is:</p>
<disp-formula id="S3.Ex9">
<mml:math id="M9">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>c</mml:mi>
<mml:mpadded width="+3.3pt">
<mml:mi>y</mml:mi>
</mml:mpadded>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo rspace="5.8pt" stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>M</mml:mi>
</mml:mfrac>
<mml:munderover>
<mml:mo largeop="true" movablelimits="false" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>i</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:munderover>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>M</italic> denotes the batch size, <italic>y<sub>i</sub></italic> denotes the real label, and <italic>o</italic><sub><italic>i</italic></sub> is the predictive output.</p>
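<p>In NumPy, this metric reduces to a single comparison and mean (the labels below are illustrative only):</p>

```python
import numpy as np

def accuracy(y, o):
    """Fraction of predictions o_i equal to labels y_i over a batch of size M."""
    return np.mean(np.asarray(o) == np.asarray(y))

y = [1, 3, 2, 0, 1, 1, 2, 3]   # real labels
o = [1, 3, 2, 0, 1, 2, 2, 3]   # one of eight predictions is wrong
print(accuracy(y, o))  # 0.875
```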
<p>The intersection over union (IoU), also known as the Jaccard index, calculates the ratio of the intersection and union of the ground-truth and predicted segmentation masks (<xref ref-type="bibr" rid="B22">Saeedizadeh et al., 2021</xref>). It can be used to measure the similarity between the ground-truth and predicted segmentation masks; the higher the similarity, the higher the value.</p>
<disp-formula id="S3.Ex10">
<mml:math id="M10">
<mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>U</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>A</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>B</mml:mi>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>&#x2229;</mml:mo>
<mml:mi>B</mml:mi>
</mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>|</mml:mo>
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:mo>&#x222A;</mml:mo>
<mml:mi>B</mml:mi>
</mml:mrow>
<mml:mo>|</mml:mo>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>A</italic> and <italic>B</italic> denote the predicted and ground-truth masks. The mathematical expression of IoU in the program is:</p>
<disp-formula id="S3.Ex11">
<mml:math id="M11">
<mml:mrow>
<mml:mrow>
<mml:mi>I</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>U</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>o</mml:mi>
<mml:mo rspace="5.8pt">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>M</mml:mi>
</mml:mfrac>
<mml:mo>&#x2062;</mml:mo>
<mml:mrow>
<mml:munderover>
<mml:mo largeop="true" movablelimits="false" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>j</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:munderover>
<mml:mfrac>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>i</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>i</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>+</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>i</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
<mml:mo>-</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mo largeop="true" symmetric="true">&#x2211;</mml:mo>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:mi>i</mml:mi>
</mml:mpadded>
<mml:mo rspace="5.8pt">=</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>N</mml:mi>
</mml:msubsup>
<mml:mrow>
<mml:mpadded width="+3.3pt">
<mml:msub>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mpadded>
<mml:mo>&#x2062;</mml:mo>
<mml:msub>
<mml:mi>o</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x2062;</mml:mo>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>
<p>where <italic>M</italic> is the batch size, <italic>N</italic> is the number of sample data, <italic>y</italic><sub><italic>ij</italic></sub> is the ground-truth mask, and <italic>o</italic><sub><italic>ij</italic></sub> is the predictive mask.</p>
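<p>The batched IoU above can be sketched in NumPy as (the masks are invented for illustration):</p>

```python
import numpy as np

def iou(y, o):
    """IoU = (1/M) * sum_j [ sum_i(y*o) / (sum_i y + sum_i o - sum_i y*o) ].
    y: ground-truth masks, o: predicted masks, both of shape (M, N)."""
    inter = (y * o).sum(axis=1)
    union = y.sum(axis=1) + o.sum(axis=1) - inter
    return (inter / union).mean()

y = np.array([[0, 1, 1, 1, 1, 0]], dtype=float)
o = np.array([[0, 0, 1, 1, 1, 1]], dtype=float)
print(iou(y, o))  # 0.6: intersection 3, union 5
```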
</sec>
<sec id="S3.SS2">
<title>3.2. Hyper-parameter optimization of the segmentation model</title>
<p>Both the weight coefficients of the weighted binary cross-entropy and the cut-off threshold influence the performance of the model. To obtain appropriate values, we conducted cross experiments on the weights and cut-off thresholds. As presented in <xref ref-type="table" rid="T2">Table 2</xref>, when <italic>w</italic><sub>0</sub> : <italic>w</italic><sub>1</sub> = 0.6 : 0.4 and the cut-off threshold = 0.6, the IoU achieved a maximum value of 0.9680. The other parameters used during training are listed in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>Results of cross experiments on the weights of the weighted binary cross-entropy and the cut-off threshold.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">w<sub>0</sub>:w<sub>1</sub></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">IoU<break/> (Cut-off = 0.3)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">IoU<break/> (Cut-off = 0.4)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">IoU<break/> (Cut-off = 0.5)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">IoU<break/> (Cut-off = 0.6)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">IoU<break/> (Cut-off = 0.7)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">0.3:0.7</td>
<td valign="top" align="center">0.9668</td>
<td valign="top" align="center">0.9652</td>
<td valign="top" align="center">0.9665</td>
<td valign="top" align="center">0.9677</td>
<td valign="top" align="center">0.9660</td>
</tr>
<tr>
<td valign="top" align="left">0.4:0.6</td>
<td valign="top" align="center">0.9674</td>
<td valign="top" align="center">0.9657</td>
<td valign="top" align="center">0.9658</td>
<td valign="top" align="center">0.9674</td>
<td valign="top" align="center">0.9674</td>
</tr>
<tr>
<td valign="top" align="left">0.5:0.5</td>
<td valign="top" align="center">0.9668</td>
<td valign="top" align="center">0.9674</td>
<td valign="top" align="center">0.9674</td>
<td valign="top" align="center">0.9665</td>
<td valign="top" align="center">0.9670</td>
</tr>
<tr>
<td valign="top" align="left">0.6:0.4</td>
<td valign="top" align="center">0.9677</td>
<td valign="top" align="center">0.9676</td>
<td valign="top" align="center">0.9666</td>
<td valign="top" align="center">0.9680</td>
<td valign="top" align="center">0.9674</td>
</tr>
<tr>
<td valign="top" align="left">0.7:0.3</td>
<td valign="top" align="center">0.9668</td>
<td valign="top" align="center">0.9674</td>
<td valign="top" align="center">0.9652</td>
<td valign="top" align="center">0.9669</td>
<td valign="top" align="center">0.9654</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p>When w<sub>0</sub>:w<sub>1</sub> = 0.6:0.4 and the cut-off threshold = 0.6, the IoU achieved a maximum value of 0.9680.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>We also compared the three loss functions of WBCE, DSC, and WBCE + DSC. When the other conditions were the same, the combined loss function (WBCE + DSC) was used to obtain the maximum IoU, as illustrated in <xref ref-type="table" rid="T3">Table 3</xref>.</p>
<table-wrap position="float" id="T3">
<label>TABLE 3</label>
<caption><p>Overall performance with different loss functions, <italic>w</italic><sub>0</sub>:<italic>w</italic><sub>1</sub> = 0.6 : 0.4 and cut-off threshold = 0.6.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Loss</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">w<sub>0</sub>:w<sub>1</sub></td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Cut-off</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">IoU</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">WBCE</td>
<td valign="middle" align="center" rowspan="3">0.6:0.4</td>
<td valign="middle" align="center" rowspan="3">0.6</td>
<td valign="top" align="center">0.9652</td>
</tr>
<tr>
<td valign="top" align="left">DSC</td>
<td valign="top" align="center">0.9586</td>
</tr>
<tr>
<td valign="top" align="left">WBCE + DSC</td>
<td valign="top" align="center">0.9680</td>
</tr>
</tbody>
</table></table-wrap>
</sec>
<sec id="S3.SS3">
<title>3.3. Training convergence analysis of models</title>
<p><xref ref-type="fig" rid="F6">Figure 6A</xref> shows the loss curves of different epochs during the classification model training process, and <xref ref-type="fig" rid="F6">Figure 6B</xref> shows the accuracy of the training and validation sets corresponding to different epochs. The maximum accuracy of the model validation set was 99.59%.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption><p><bold>(A)</bold> Loss of classification model during training. <bold>(B)</bold> Training and validation accuracy of classification model during training. <bold>(C)</bold> Confusion matrix showing the result of trained classification model for validation set. The row number reflects the predicted label, and column number reflects the true label.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1091180-g006.tif"/>
</fig>
<p>To analyze which samples were misclassified, we built a confusion matrix. As shown in <xref ref-type="fig" rid="F6">Figure 6C</xref>, only five samples were misclassified: two class 1 and three class 2 samples were misclassified as class 4. These five samples had the characteristics of two different classes, which led to misclassification. In general, such samples are rare.</p>
<p><xref ref-type="fig" rid="F7">Figure 7A</xref> shows the loss curves of different epochs during the segmentation model training process, and <xref ref-type="fig" rid="F7">Figure 7B</xref> shows the IoU of the training and validation sets corresponding to different epochs. The maximum IoU of the model validation set was 0.9680.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption><p><bold>(A)</bold> Loss of segmentation model during training. <bold>(B)</bold> Training and validation IoU of segmentation model during training.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1091180-g007.tif"/>
</fig>
</sec>
<sec id="S3.SS4">
<title>3.4. Comparison with classical methods</title>
<p>There are many types of peak detection methods, such as the direct peak location, Fourier transform, cumulative sum derivative, curve fitting, deconvolution, and continuous wavelet transform (CWT) methods (<xref ref-type="bibr" rid="B5">Deng et al., 2021</xref>). Direct peak location, which relies on the properties of the peaks, and the CWT are two classical approaches among the traditional methods. The principle of direct peak location is to find all local maxima of the signal by simple comparison and then select a subset of these maxima according to specified peak properties. The principle of the CWT method is to first transform the signal by CWT at certain scales and then find the ridges in the CWT matrix; the positions of these ridges correspond to the positions of the peaks (<xref ref-type="bibr" rid="B6">Du et al., 2006</xref>). Using the validation set, the method proposed in this paper was compared with these two traditional methods. Both are implemented in the Python SciPy library, so we directly used the corresponding functions (find_peaks() and find_peaks_cwt()).</p>
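<p>A short illustration of the two SciPy functions on a synthetic two-peak profile (the Gaussian peak positions, heights, and widths are invented for the example):</p>

```python
import numpy as np
from scipy.signal import find_peaks, find_peaks_cwt

# Synthetic two-peak fluorescence profile standing in for real strip data
x = np.arange(512)
signal = (3000 * np.exp(-((x - 120) ** 2) / 200)    # C peak
          + 800 * np.exp(-((x - 380) ** 2) / 300))  # T peak

peaks, _ = find_peaks(signal, height=100)           # direct peak location
cwt_peaks = find_peaks_cwt(signal, widths=np.arange(5, 40))  # CWT ridge method
print(peaks)  # [120 380]
```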
<p>Classical peak-finding methods can only locate the local maxima of the signal and cannot classify it. Here, after obtaining the local maxima with the classical methods, we added subsequent processing steps to give them classification ability and then compared them with the classification model proposed in this study. The subsequent processing steps are as follows:</p>
<list list-type="simple">
<list-item>
<label>(1)</label>
<p>According to the characteristics of the strip, the 512 sample points are divided into a C-peak region (0&#x2013;220) and a T-peak region (221&#x2013;511).</p>
</list-item>
<list-item>
<label>(2)</label>
<p>Judge whether there are local maxima in the C-peak region (0&#x2013;220); if so, take the largest local maximum as the C peak. The C peak is considered effective if its height is greater than 1,000 (according to the characteristics of the strip, the height of the C peak is usually greater than 1,000).</p>
</list-item>
<list-item>
<label>(3)</label>
<p>Judge whether there are local maxima in the T-peak region (221&#x2013;511); if so, take the largest local maximum as the T peak.</p>
</list-item>
<list-item>
<label>(4)</label>
<p>According to the results of (2) and (3), the signal is classified as noise (class 1), only C-peak (class 2), only T-peak (class 3), or double-peaks (class 4).</p>
</list-item>
</list>
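<p>The steps above can be sketched on top of SciPy's direct peak location as follows (the synthetic signals are invented for illustration):</p>

```python
import numpy as np
from scipy.signal import find_peaks

def classify(signal):
    """Rule-based classification built on direct peak location (steps 1-4)."""
    c_peaks, _ = find_peaks(signal[:221], height=1000)  # C region, height > 1,000
    t_peaks, _ = find_peaks(signal[221:])               # T region (221-511)
    has_c, has_t = len(c_peaks) > 0, len(t_peaks) > 0
    if has_c and has_t:
        return 4      # double peaks
    if has_c:
        return 2      # only C peak
    if has_t:
        return 3      # only T peak
    return 1          # noise

x = np.arange(512)
c = 3000 * np.exp(-((x - 120) ** 2) / 200)   # synthetic C peak
t = 800 * np.exp(-((x - 380) ** 2) / 300)    # synthetic T peak
print(classify(c + t), classify(c), classify(np.zeros(512)))  # 4 2 1
```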
<p>The comparison results are shown in <xref ref-type="table" rid="T4">Table 4</xref>. The two classical methods performed similarly in terms of accuracy (80.10 and 80.76%, respectively), whereas the method proposed in this study achieved an accuracy of 99.59%, which is much better.</p>
<table-wrap position="float" id="T4">
<label>TABLE 4</label>
<caption><p>Comparison of classical peak-finding methods and proposed method performance in terms of accuracy, IoU, dice, recall and precision.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Method</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Accuracy</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">IoU</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Dice</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Recall</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Precision</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Direct peak location</td>
<td valign="top" align="center">80.10%</td>
<td valign="top" align="center">0.7753</td>
<td valign="top" align="center">0.8509</td>
<td valign="top" align="center">0.8801</td>
<td valign="top" align="center">0.8391</td>
</tr>
<tr>
<td valign="top" align="left">CWT</td>
<td valign="top" align="center">80.76%</td>
<td valign="top" align="center">0.7597</td>
<td valign="top" align="center">0.8423</td>
<td valign="top" align="center">0.8510</td>
<td valign="top" align="center">0.8541</td>
</tr>
<tr>
<td valign="top" align="left">Our method</td>
<td valign="top" align="center">99.59%</td>
<td valign="top" align="center">0.9680</td>
<td valign="top" align="center">0.9836</td>
<td valign="top" align="center">0.9857</td>
<td valign="top" align="center">0.9821</td>
</tr>
</tbody>
</table></table-wrap>
<p>For the two classical methods, the peak_widths() function in the SciPy library can be used to identify the integral region. We compared them with the segmentation method in this study in terms of IoU, Dice, Recall, and Precision; the results are shown in <xref ref-type="table" rid="T4">Table 4</xref>. On every evaluation metric, the method proposed in this paper is much better than the two classical methods.</p>
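<p>A sketch of how peak_widths() can supply an integral region for a located peak (the synthetic peak and the rel_height value are assumptions):</p>

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

x = np.arange(512)
signal = 3000 * np.exp(-((x - 120) ** 2) / 200)   # synthetic C peak

peaks, _ = find_peaks(signal, height=1000)
# Measure widths near the peak base; [left, right] approximates the integral region
widths, heights, left, right = peak_widths(signal, peaks, rel_height=0.95)
print(int(left[0]) < 120 < int(right[0]))  # True
```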
</sec>
<sec id="S3.SS5">
<title>3.5. Test of the method</title>
<p>The method proposed in this study was tested using instrument test data. First, the ability of the segmentation model to segment various peak shapes was tested. Next, the three most important indicators (standard curve, repeatability, and recovery) were tested.</p>
<p>After training, the method classifies raw input data into one of four classes and performs segmentation on data belonging to Class 4. The segmentation model could effectively extract the C- and T-peak regions from a fluorescence intensity curve of 512 data points. <xref ref-type="fig" rid="F8">Figure 8</xref> shows examples of segmentation results for some typical peak shapes, where the orange shaded areas are the segmented C- and T-peak regions. <xref ref-type="fig" rid="F8">Figure 8A</xref> shows segmentation of a normal peak shape, in which the C- and T-peak regions were accurately extracted. <xref ref-type="fig" rid="F8">Figures 8B, C</xref> show that the C- and T-peaks can be accurately segmented even in the presence of overlapping and interference peaks. <xref ref-type="fig" rid="F8">Figures 8D&#x2013;F</xref> show segmentation results for a weak T-peak accompanied by baseline drift, tailing, or an interference peak. As shown in the figure, none of these artifacts affected accurate segmentation, and detection of the weak T-peak region was also excellent. Data imported into the segmentation model were first normalized, so the network learned only the overall shape of the curve and not absolute fluorescence intensity values. The experimental results indicate that the method meets the data-processing requirements of LFIA.</p>
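<p>The normalization step described above can be sketched as follows; the exact scheme is not specified here, so per-scan min-max scaling to [0, 1] is an assumption. Scaling removes absolute fluorescence intensity, leaving only the shape of the curve for the model; the predicted masks would then be applied back to the raw signal for integration.</p>

```python
# Shape-only preprocessing sketch (assumed: per-scan min-max normalization).
import numpy as np

def normalize_scan(scan):
    """Scale one 512-point fluorescence scan to [0, 1]."""
    scan = np.asarray(scan, dtype=np.float64)
    lo, hi = scan.min(), scan.max()
    if hi == lo:                      # flat scan: avoid division by zero
        return np.zeros_like(scan)
    return (scan - lo) / (hi - lo)

# Two scans with the same shape but different gain map to the same input.
base = np.sin(np.linspace(0.0, np.pi, 512))
same_shape = 37.0 * base + 5.0
```

<p>With this preprocessing, two instruments with different optical gain produce identical model inputs for the same strip, which is what lets the network focus on peak shape alone.</p>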
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption><p>Results of ROI extraction by the segmentation model on different kinds of data. <bold>(A)</bold> Normal peak data, <bold>(B,C)</bold> overlapping and interference peak data, <bold>(D&#x2013;F)</bold> weak T-peak with baseline drift, tailing, or interference peak data.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1091180-g008.tif"/>
</fig>
<p>The method was tested using ferritin. A standard curve was established using standards at a range of concentrations (0, 15, 50, 200, 300, and 500 ng/ml). Each concentration was tested three times using test strips, and the detection data were processed using the proposed method: the data were first classified, then segmented, and finally the segmented regions were integrated and T/C was calculated. The method correctly classified the detection data for 0 ng/ml as class 2 (C-peak only), with corresponding T/C values of 0. The remaining data were classified as class 4 (double peaks) and then segmented. With T/C as the ordinate and concentration as the abscissa, a standard curve was established by four-parameter fitting, as shown in <xref ref-type="fig" rid="F9">Figure 9</xref>. T/C and concentration correlate well, with a correlation coefficient of 0.9986, which shows that the method is effective for LFIA data.</p>
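<p>The four-parameter fit is presumably the standard four-parameter logistic (4PL). The sketch below, with illustrative numbers rather than this study's data, fits a 4PL with scipy.optimize.curve_fit and inverts it to read a concentration back from a measured T/C; the 0 ng/ml standard is handled separately (classified as class 2 with T/C = 0), so only nonzero standards enter the fit here.</p>

```python
# Sketch of a four-parameter logistic (4PL) standard-curve fit.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose, d: response at saturation,
    # c: inflection concentration, b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    # Read a concentration back from a measured T/C value.
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

conc = np.array([15.0, 50.0, 200.0, 300.0, 500.0])   # ng/ml standards
true = (0.05, 1.5, 150.0, 2.0)                       # illustrative parameters
tc = four_pl(conc, *true)                            # illustrative T/C values

params, _ = curve_fit(four_pl, conc, tc, p0=[0.0, 1.0, 200.0, 2.5],
                      maxfev=10000)
```

<p>Once fitted, inverse_four_pl() plays the role of the instrument's concentration readout: a measured T/C on the curve maps back to a unique concentration because the 4PL is monotone between its asymptotes.</p>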
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption><p>Four parameter fitting line for ferritin detection in the range of 0&#x2013;500 ng/ml.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fncom-17-1091180-g009.tif"/>
</fig>
<p>Repeatability was tested using reference standards at three concentrations (20, 220, and 400 ng/ml) with the same batch of test strips. Each concentration was tested 10 times, and the CV values were calculated. The data were processed using the method described in this study; all were classified as class 4 (double peaks), then segmented, and the concentrations were calculated. The results are listed in <xref ref-type="table" rid="T5">Table 5</xref>. The CV values for all three concentrations are low, with a maximum of only 1.37%, which shows that the method is highly stable.</p>
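<p>The repeatability metric reduces to the coefficient of variation of the repeated concentration readings, CV (%) = 100 &#x00D7; SD / mean. A minimal sketch follows; the use of the sample standard deviation (ddof = 1) is an assumption, since the estimator is not named here.</p>

```python
# Coefficient of variation of repeated readings: CV (%) = 100 * SD / mean.
import numpy as np

def cv_percent(readings):
    readings = np.asarray(readings, dtype=float)
    # Sample standard deviation (ddof=1) is assumed here.
    return 100.0 * readings.std(ddof=1) / readings.mean()
```

<p>As a consistency check, the first row of Table 5 (mean 17.561, SD 0.240) gives 100 &#x00D7; 0.240 / 17.561 &#x2248; 1.37%.</p>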
<table-wrap position="float" id="T5">
<label>TABLE 5</label>
<caption><p>Precision results of ferritin test strips.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Mean (ng/ml)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">SD</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">CV (%)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">17.561</td>
<td valign="top" align="center">0.240</td>
<td valign="top" align="center">1.37</td>
</tr>
<tr>
<td valign="top" align="left">212.541</td>
<td valign="top" align="center">1.274</td>
<td valign="top" align="center">0.60</td>
</tr>
<tr>
<td valign="top" align="left">369.034</td>
<td valign="top" align="center">1.401</td>
<td valign="top" align="center">0.38</td>
</tr>
</tbody>
</table></table-wrap>
<p>Recovery was tested using samples at three concentrations (40, 100, and 150 ng/ml). Each sample was tested three times. The data were processed using the method in this study, and all were classified as class 4 (double peaks); the results are listed in <xref ref-type="table" rid="T6">Table 6</xref>. The calculated recovery rates were 105.07, 96.37, and 99.67%, respectively, which shows that the concentrations calculated by the method are accurate.</p>
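<p>The recovery rate is simply the mean measured concentration expressed as a percentage of the spiked concentration; a one-line sketch, consistent with the values in Table 6:</p>

```python
# Recovery (%) = 100 * mean measured concentration / spiked concentration.
def recovery_percent(measured_mean, nominal):
    return 100.0 * measured_mean / nominal
```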
<table-wrap position="float" id="T6">
<label>TABLE 6</label>
<caption><p>Recovery rate results of ferritin test strips.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<td valign="top" align="left" style="color:#ffffff;background-color: #7f8080;">Concentration (ng/ml)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Mean (ng/ml)</td>
<td valign="top" align="center" style="color:#ffffff;background-color: #7f8080;">Recovery rate (%)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">40</td>
<td valign="top" align="center">42.030</td>
<td valign="top" align="center">105.07</td>
</tr>
<tr>
<td valign="top" align="left">100</td>
<td valign="top" align="center">96.371</td>
<td valign="top" align="center">96.37</td>
</tr>
<tr>
<td valign="top" align="left">150</td>
<td valign="top" align="center">149.502</td>
<td valign="top" align="center">99.67</td>
</tr>
</tbody>
</table></table-wrap>
</sec>
</sec>
<sec id="S4" sec-type="discussion">
<title>4. Discussion</title>
<p>Because POCT instruments based on LFIA detection technology are used in a wide variety of settings and encounter many different types of samples, the peak shapes of the test data are complex, and classical peak-finding methods struggle to handle them all. The data-processing method proposed in this study has several advantages.</p>
<p>First, a classification network sorts the peak shapes into four classes and screens out the peak types for which a concentration must be calculated. This reduces the data-processing burden on the segmentation model, making it easier for the model to achieve good performance. Second, an improved U-Net-based segmentation model directly identifies the integration regions, replacing the peak-finding and peak start/end location steps of the classical methods and making data processing more accurate and convenient. Accurately determining the start and end points of a peak is very difficult with traditional methods; our segmentation model solves this problem easily. Third, experiments showed that this U-Net-based segmentation method also performs well in identifying weak and tailing peaks. Fourth, classical peak-finding methods can only find local maxima of the signal and cannot classify it, which makes it difficult to distinguish noise peaks from effective peaks; our classification model solves this problem.</p>
<p>The method was applied to a hand-held immunofluorescence analyzer that we developed, with good results. Interference peaks are the biggest obstacle in the use of hand-held instruments and often lead to peak-finding errors, and because hand-held instruments are used in varied and changing environments, providing on-site technical support is inconvenient. This method greatly reduced the peak-finding failure rate, which reduces customers&#x2019; need for technical support; this is a significant advantage for hand-held instruments sold in large quantities.</p>
</sec>
<sec id="S5" sec-type="conclusion">
<title>5. Conclusion</title>
<p>In this study, a deep-learning-based method for processing LFIA photoelectric scanning data was proposed. The method comprises two steps. The first is a CNN classification model that classifies the LFIA peak shape and screens out the data required to calculate the concentration. The second is an improved 1D U-Net segmentation model that segments the C- and T-peak integration regions for data containing double peaks, after which T/C and concentration are calculated. A large amount of experimental data was used to train the two models: the accuracy of the classification model on the validation set was 99.59%, and the IoU of the segmentation model on the validation set was 0.9680. Using this data-processing method, a standard curve was established for ferritin, and the CV and recovery rate, the two most relevant indicators in clinical testing, were measured. The CV values at concentrations of 20, 220, and 400 ng/ml were 1.37, 0.60, and 0.38%, respectively, and the recovery rates at 40, 100, and 150 ng/ml were 105.07, 96.37, and 99.67%, respectively. These results show that the proposed method can process LFIA photoelectric scanning data accurately and reliably, suggesting a new direction for POCT instrument data processing.</p>
</sec>
<sec id="S6" sec-type="data-availability">
<title>Data availability statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="S7" sec-type="author-contributions">
<title>Author contributions</title>
<p>XL and YW involved in the conception and research design. XL, KD, and SL collected the data and annotated the data. XL performed the statistical analysis and wrote the manuscript. XL, KD, SL, and YW revised it for publication. All authors contributed to the article and approved the submitted version.</p>
</sec>
</body>
<back>
<ack><p>We would like to thank the colleagues in Tianjin Boomscience Technology Co., Ltd., and Beijing Savant Biotechnology Co., Ltd., for their friendly help. We would also like to thank Editage (<ext-link ext-link-type="uri" xlink:href="http://www.editage.cn">www.editage.cn</ext-link>) for English language editing.</p>
</ack>
<sec id="S8" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>KD was employed by Tianjin Boomscience Technology Co., Ltd. SL was employed by Beijing Savant Biotechnology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="S9" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Acharya</surname> <given-names>U. R.</given-names></name> <name><surname>Oh</surname> <given-names>S. L.</given-names></name> <name><surname>Hagiwara</surname> <given-names>Y.</given-names></name> <name><surname>Tan</surname> <given-names>J. H.</given-names></name> <name><surname>Adam</surname> <given-names>M.</given-names></name> <name><surname>Gertych</surname> <given-names>A.</given-names></name><etal/></person-group> (<year>2017</year>). <article-title>A deep convolutional neural network model to classify heartbeats.</article-title> <source><italic>Comput. Biol. Med.</italic></source> <volume>89</volume> <fpage>389</fpage>&#x2013;<lpage>396</lpage>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2017.08.022</pub-id> <pub-id pub-id-type="pmid">28869899</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>A.</given-names></name> <name><surname>Yang</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Replacing antibodies with aptamers in lateral flow immunoassay.</article-title> <source><italic>Biosens. Bioelectron.</italic></source> <volume>71</volume> <fpage>230</fpage>&#x2013;<lpage>242</lpage>. <pub-id pub-id-type="doi">10.1016/j.bios.2015.04.041</pub-id> <pub-id pub-id-type="pmid">25912679</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>F.-H.</given-names></name> <name><surname>Li</surname> <given-names>N.</given-names></name> <name><surname>Zhang</surname> <given-names>W.</given-names></name> <name><surname>Zhang</surname> <given-names>Q.-Y.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Ma</surname> <given-names>Y.-Y.</given-names></name><etal/></person-group> (<year>2017</year>). <article-title>A comparison between China-made Mindray BS-2000M biochemical analyzer and Roche cobas702 automatic biochemical analyzer.</article-title> <source><italic>Front. Lab. Med.</italic></source> <volume>1</volume>:<fpage>98</fpage>&#x2013;<lpage>103</lpage>. <pub-id pub-id-type="doi">10.1016/j.flm.2017.06.006</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Damhorst</surname> <given-names>G. L.</given-names></name> <name><surname>Tyburski</surname> <given-names>E. A.</given-names></name> <name><surname>Brand</surname> <given-names>O.</given-names></name> <name><surname>Martin</surname> <given-names>G. S.</given-names></name> <name><surname>Lam</surname> <given-names>W. A.</given-names></name></person-group> (<year>2019</year>). <article-title>Diagnosis of acute serious illness: the role of point-of-care technologies.</article-title> <source><italic>Curr. Opin. Biomed. Eng.</italic></source> <volume>11</volume> <fpage>22</fpage>&#x2013;<lpage>34</lpage>. <pub-id pub-id-type="doi">10.1016/j.cobme.2019.08.012</pub-id> <pub-id pub-id-type="pmid">34079919</pub-id></citation></ref>
<ref id="B5"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>F.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name> <name><surname>Wang</surname> <given-names>R.</given-names></name> <name><surname>Yue</surname> <given-names>H.</given-names></name> <name><surname>Zhao</surname> <given-names>Z.</given-names></name> <name><surname>Duan</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>An improved peak detection algorithm in mass spectra combining wavelet transform and image segmentation.</article-title> <source><italic>Int. J. Mass Spectrom.</italic></source> <volume>465</volume>:<issue>116601</issue>. <pub-id pub-id-type="doi">10.1016/j.ijms.2021.116601</pub-id></citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Du</surname> <given-names>P.</given-names></name> <name><surname>Kibbe</surname> <given-names>W. A.</given-names></name> <name><surname>Lin</surname> <given-names>S. M.</given-names></name></person-group> (<year>2006</year>). <article-title>Improved peak detection in mass spectrum by incorporating continuous wavelet transform-based pattern matching.</article-title> <source><italic>Bioinformatics</italic></source> <volume>22</volume> <fpage>2059</fpage>&#x2013;<lpage>2065</lpage>. <pub-id pub-id-type="doi">10.1093/bioinformatics/btl355</pub-id> <pub-id pub-id-type="pmid">16820428</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Florkowski</surname> <given-names>C.</given-names></name> <name><surname>Don-Wauchope</surname> <given-names>A.</given-names></name> <name><surname>Gimenez</surname> <given-names>N.</given-names></name> <name><surname>Rodriguez-Capote</surname> <given-names>K.</given-names></name> <name><surname>Wils</surname> <given-names>J.</given-names></name> <name><surname>Zemlin</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Point-of-care testing (POCT) and evidence-based laboratory medicine (EBLM) &#x2013; does it leverage any advantage in clinical decision making?</article-title> <source><italic>Crit. Rev. Clin. Lab.</italic></source> <volume>54</volume> <fpage>471</fpage>&#x2013;<lpage>494</lpage>. <pub-id pub-id-type="doi">10.1080/10408363.2017.1399336</pub-id> <pub-id pub-id-type="pmid">29169287</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gong</surname> <given-names>Y.</given-names></name> <name><surname>Zheng</surname> <given-names>Y.</given-names></name> <name><surname>Jin</surname> <given-names>B.</given-names></name> <name><surname>You</surname> <given-names>M.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>X.</given-names></name><etal/></person-group> (<year>2019</year>). <article-title>A portable and universal upconversion nanoparticle-based lateral flow assay platform for point-of-care testing.</article-title> <source><italic>Talanta</italic></source> <volume>201</volume> <fpage>126</fpage>&#x2013;<lpage>133</lpage>. <pub-id pub-id-type="doi">10.1016/j.talanta.2019.03.105</pub-id> <pub-id pub-id-type="pmid">31122402</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gu</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Kuen</surname> <given-names>J.</given-names></name> <name><surname>Ma</surname> <given-names>L.</given-names></name> <name><surname>Shahroudy</surname> <given-names>A.</given-names></name> <name><surname>Shuai</surname> <given-names>B.</given-names></name><etal/></person-group> (<year>2018</year>). <article-title>Recent advances in convolutional neural networks.</article-title> <source><italic>Pattern Recognit.</italic></source> <volume>77</volume> <fpage>354</fpage>&#x2013;<lpage>377</lpage>. <pub-id pub-id-type="doi">10.1016/j.patcog.2017.10.013</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Haung</surname> <given-names>M. L.</given-names></name> <name><surname>Ho</surname> <given-names>C. H.</given-names></name></person-group> (<year>1998</year>). <article-title>Diagnostic value of an automatic hematology analyzer in patients with hematologic disorders.</article-title> <source><italic>Adv. Therapy</italic></source> <volume>15</volume>:<issue>137</issue>.</citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname> <given-names>J.</given-names></name> <name><surname>Zhang</surname> <given-names>Z. L.</given-names></name> <name><surname>Wen</surname> <given-names>C. Y.</given-names></name> <name><surname>Tang</surname> <given-names>M.</given-names></name> <name><surname>Wu</surname> <given-names>L. L.</given-names></name> <name><surname>Liu</surname> <given-names>C.</given-names></name><etal/></person-group> (<year>2016</year>). <article-title>Sensitive and quantitative detection of C-reaction protein based on immunofluorescent nanospheres coupled with lateral flow test strip.</article-title> <source><italic>Anal. Chem.</italic></source> <volume>88</volume> <fpage>6577</fpage>&#x2013;<lpage>6584</lpage>. <pub-id pub-id-type="doi">10.1021/acs.analchem.6b01427</pub-id> <pub-id pub-id-type="pmid">27253137</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>D.</given-names></name> <name><surname>Ying</surname> <given-names>H.</given-names></name> <name><surname>Jiang</surname> <given-names>D.</given-names></name> <name><surname>Liu</surname> <given-names>F.</given-names></name> <name><surname>Tian</surname> <given-names>Y.</given-names></name> <name><surname>Du</surname> <given-names>C.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>Rapid and sensitive detection of interleukin-6 in serum via time-resolved lateral flow immunoassay.</article-title> <source><italic>Anal. Biochem.</italic></source> <volume>588</volume>:<issue>113468</issue>. <pub-id pub-id-type="doi">10.1016/j.ab.2019.113468</pub-id> <pub-id pub-id-type="pmid">31585097</pub-id></citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>C.</given-names></name> <name><surname>Noh</surname> <given-names>J.</given-names></name> <name><surname>O&#x2019;Neal</surname> <given-names>S. E.</given-names></name> <name><surname>Gonzalez</surname> <given-names>A. E.</given-names></name> <name><surname>Garcia</surname> <given-names>H. H.</given-names></name></person-group> <collab>Cysticercosis Working Group in Peru</collab> <etal/> (<year>2019</year>). <article-title>Feasibility of a point-of-care test based on quantum dots with a mobile phone reader for detection of antibody responses.</article-title> <source><italic>PLoS Negl. Trop. Dis.</italic></source> <volume>13</volume>:<issue>e0007746</issue>. <pub-id pub-id-type="doi">10.1371/journal.pntd.0007746</pub-id> <pub-id pub-id-type="pmid">31589612</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>N.</given-names></name> <name><surname>Wang</surname> <given-names>P.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Geng</surname> <given-names>C.</given-names></name> <name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>Gong</surname> <given-names>Y.</given-names></name></person-group> (<year>2020</year>). <article-title>Molecular diagnosis of COVID-19: current situation and trend in China (Review).</article-title> <source><italic>Exp. Ther. Med.</italic></source> <volume>20</volume>:<issue>13</issue>. <pub-id pub-id-type="doi">10.3892/etm.2020.9142</pub-id> <pub-id pub-id-type="pmid">32934678</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liao</surname> <given-names>M.</given-names></name> <name><surname>Zheng</surname> <given-names>J.</given-names></name> <name><surname>Xu</surname> <given-names>Y.</given-names></name> <name><surname>Qiu</surname> <given-names>Y.</given-names></name> <name><surname>Xia</surname> <given-names>C.</given-names></name> <name><surname>Zhong</surname> <given-names>Z.</given-names></name><etal/></person-group> (<year>2021</year>). <article-title>Development of magnetic particle-based chemiluminescence immunoassay for measurement of human procalcitonin in serum.</article-title> <source><italic>J. Immunol. Methods</italic></source> <volume>488</volume>:<issue>112913</issue>. <pub-id pub-id-type="doi">10.1016/j.jim.2020.112913</pub-id> <pub-id pub-id-type="pmid">33189726</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luppa</surname> <given-names>P. B.</given-names></name> <name><surname>Muller</surname> <given-names>C.</given-names></name> <name><surname>Schlichtiger</surname> <given-names>A.</given-names></name> <name><surname>Schlebusch</surname> <given-names>H.</given-names></name></person-group> (<year>2011</year>). <article-title>Point-of-care testing (POCT): current techniques and future perspectives.</article-title> <source><italic>Trends Analyt. Chem.</italic></source> <volume>30</volume> <fpage>887</fpage>&#x2013;<lpage>898</lpage>. <pub-id pub-id-type="doi">10.1016/j.trac.2011.01.019</pub-id> <pub-id pub-id-type="pmid">32287536</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Melnikov</surname> <given-names>A. D.</given-names></name> <name><surname>Tsentalovich</surname> <given-names>Y. P.</given-names></name> <name><surname>Yanshole</surname> <given-names>V. V.</given-names></name></person-group> (<year>2020</year>). <article-title>Deep learning for the precise peak detection in high-resolution LC-MS data.</article-title> <source><italic>Anal. Chem.</italic></source> <volume>92</volume> <fpage>588</fpage>&#x2013;<lpage>592</lpage>. <pub-id pub-id-type="doi">10.1021/acs.analchem.9b04811</pub-id> <pub-id pub-id-type="pmid">31841624</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oh</surname> <given-names>S. L.</given-names></name> <name><surname>Ng</surname> <given-names>E. Y. K.</given-names></name> <name><surname>Tan</surname> <given-names>R. S.</given-names></name> <name><surname>Acharya</surname> <given-names>U. R.</given-names></name></person-group> (<year>2019</year>). <article-title>Automated beat-wise arrhythmia diagnosis using modified U-net on extended electrocardiographic recordings with heterogeneous arrhythmia types.</article-title> <source><italic>Comput. Biol. Med.</italic></source> <volume>105</volume> <fpage>92</fpage>&#x2013;<lpage>101</lpage>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2018.12.012</pub-id> <pub-id pub-id-type="pmid">30599317</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Peng</surname> <given-names>P.</given-names></name> <name><surname>Liu</surname> <given-names>C.</given-names></name> <name><surname>Li</surname> <given-names>Z.</given-names></name> <name><surname>Xue</surname> <given-names>Z.</given-names></name> <name><surname>Mao</surname> <given-names>P.</given-names></name> <name><surname>Hu</surname> <given-names>J.</given-names></name><etal/></person-group> (<year>2022</year>). <article-title>Emerging ELISA derived technologies for in vitro diagnostics.</article-title> <source><italic>TrAC Trends Analyt. Chem.</italic></source> <volume>152</volume>:<issue>116605</issue>. <pub-id pub-id-type="doi">10.1016/j.trac.2022.116605</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Qin</surname> <given-names>Q.</given-names></name> <name><surname>Wang</surname> <given-names>K.</given-names></name> <name><surname>Xu</surname> <given-names>H.</given-names></name> <name><surname>Cao</surname> <given-names>B.</given-names></name> <name><surname>Zheng</surname> <given-names>W.</given-names></name> <name><surname>Jin</surname> <given-names>Q.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>Deep Learning on chromatographic data for segmentation and sensitive analysis.</article-title> <source><italic>J. Chromatogr. A</italic></source> <volume>1634</volume>:<issue>461680</issue>. <pub-id pub-id-type="doi">10.1016/j.chroma.2020.461680</pub-id> <pub-id pub-id-type="pmid">33221651</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ronneberger</surname> <given-names>O.</given-names></name> <name><surname>Fischer</surname> <given-names>P.</given-names></name> <name><surname>Brox</surname> <given-names>T.</given-names></name></person-group> (<year>2015</year>). <source><italic>U-Net: convolutional networks for biomedical image segmentation.</italic></source> <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>.</citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Saeedizadeh</surname> <given-names>N.</given-names></name> <name><surname>Minaee</surname> <given-names>S.</given-names></name> <name><surname>Kafieh</surname> <given-names>R.</given-names></name> <name><surname>Yazdani</surname> <given-names>S.</given-names></name> <name><surname>Sonka</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>COVID TV-Unet: segmenting COVID-19 chest CT images using connectivity imposed Unet.</article-title> <source><italic>Comput. Methods Programs Biomed. Update</italic></source> <volume>1</volume>:<issue>100007</issue>. <pub-id pub-id-type="doi">10.1016/j.cmpbup.2021.100007</pub-id> <pub-id pub-id-type="pmid">34337587</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shao</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>L.</given-names></name> <name><surname>Li</surname> <given-names>S.</given-names></name> <name><surname>Zhang</surname> <given-names>P.</given-names></name></person-group> (<year>2019</year>). <article-title>Design and quantitative analysis of cancer detection system based on fluorescence immune analysis.</article-title> <source><italic>J. Healthc. Eng.</italic></source> <volume>2019</volume>:<issue>1672940</issue>. <pub-id pub-id-type="doi">10.1155/2019/1672940</pub-id> <pub-id pub-id-type="pmid">31934322</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shao</surname> <given-names>X. Y.</given-names></name> <name><surname>Wang</surname> <given-names>C. R.</given-names></name> <name><surname>Xie</surname> <given-names>C. M.</given-names></name> <name><surname>Wang</surname> <given-names>X. G.</given-names></name> <name><surname>Liang</surname> <given-names>R. L.</given-names></name> <name><surname>Xu</surname> <given-names>W. W.</given-names></name></person-group> (<year>2017</year>). <article-title>Rapid and sensitive lateral flow immunoassay method for procalcitonin (PCT) based on time-resolved immunochromatography.</article-title> <source><italic>Sensors</italic></source> <volume>17</volume>:<issue>480</issue>. <pub-id pub-id-type="doi">10.3390/s17030480</pub-id> <pub-id pub-id-type="pmid">28264502</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Singer</surname> <given-names>A. J.</given-names></name> <name><surname>Ardise</surname> <given-names>J.</given-names></name> <name><surname>Gulla</surname> <given-names>J.</given-names></name> <name><surname>Cangro</surname> <given-names>J.</given-names></name></person-group> (<year>2005</year>). <article-title>Point-of-care testing reduces length of stay in emergency department chest pain patients.</article-title> <source><italic>Ann. Emerg. Med.</italic></source> <volume>45</volume> <fpage>587</fpage>&#x2013;<lpage>591</lpage>. <pub-id pub-id-type="doi">10.1016/j.annemergmed.2004.11.020</pub-id> <pub-id pub-id-type="pmid">15940089</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Srivastava</surname> <given-names>N.</given-names></name> <name><surname>Hinton</surname> <given-names>G. E.</given-names></name> <name><surname>Krizhevsky</surname> <given-names>A.</given-names></name> <name><surname>Sutskever</surname> <given-names>I.</given-names></name> <name><surname>Salakhutdinov</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>Dropout: a simple way to prevent neural networks from overfitting.</article-title> <source><italic>J. Mach. Learn. Res.</italic></source> <volume>15</volume> <fpage>1929</fpage>&#x2013;<lpage>1958</lpage>.</citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vila</surname> <given-names>J.</given-names></name> <name><surname>G&#x00F3;mez</surname> <given-names>M. D.</given-names></name> <name><surname>Salavert</surname> <given-names>M.</given-names></name> <name><surname>Bosch</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <article-title>Methods of rapid diagnosis in clinical microbiology: clinical needs.</article-title> <source><italic>Enferm. Infecc. Microbiol. Clin.</italic></source> <volume>35</volume> <fpage>41</fpage>&#x2013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1016/j.eimce.2017.01.014</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Zou</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>P. X.</given-names></name></person-group> (<year>2021</year>). <article-title>Hybrid dilation and attention residual U-Net for medical image segmentation.</article-title> <source><italic>Comput. Biol. Med.</italic></source> <volume>134</volume>:<issue>104449</issue>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2021.104449</pub-id> <pub-id pub-id-type="pmid">33993015</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wu</surname> <given-names>M.</given-names></name> <name><surname>Lai</surname> <given-names>Q.</given-names></name> <name><surname>Ju</surname> <given-names>Q.</given-names></name> <name><surname>Li</surname> <given-names>L.</given-names></name> <name><surname>Yu</surname> <given-names>H. D.</given-names></name> <name><surname>Huang</surname> <given-names>W.</given-names></name></person-group> (<year>2018</year>). <article-title>Paper-based fluorogenic devices for in vitro diagnostics.</article-title> <source><italic>Biosens. Bioelectron.</italic></source> <volume>102</volume> <fpage>256</fpage>&#x2013;<lpage>266</lpage>. <pub-id pub-id-type="doi">10.1016/j.bios.2017.11.006</pub-id> <pub-id pub-id-type="pmid">29153947</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xiao</surname> <given-names>Q.</given-names></name> <name><surname>Lin</surname> <given-names>J.-M.</given-names></name></person-group> (<year>2015</year>). <article-title>Advances and applications of chemiluminescence immunoassay in clinical diagnosis and foods safety.</article-title> <source><italic>Chin. J. Anal. Chem.</italic></source> <volume>43</volume> <fpage>929</fpage>&#x2013;<lpage>938</lpage>. <pub-id pub-id-type="doi">10.1016/s1872-2040(15)60831-3</pub-id></citation></ref>
<ref id="B31"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>J.</given-names></name> <name><surname>Cheng</surname> <given-names>Y.</given-names></name> <name><surname>Gong</surname> <given-names>X.</given-names></name> <name><surname>Yi</surname> <given-names>S.</given-names></name> <name><surname>Li</surname> <given-names>C.-W.</given-names></name> <name><surname>Jiang</surname> <given-names>L.</given-names></name><etal/></person-group> (<year>2021</year>). <article-title>An integrative review on the applications of 3D printing in the field of in vitro diagnostics.</article-title> <source><italic>Chin. Chem. Lett.</italic></source> <volume>33</volume> <fpage>2231</fpage>&#x2013;<lpage>2242</lpage>. <pub-id pub-id-type="doi">10.1016/j.cclet.2021.08.105</pub-id></citation></ref>
<ref id="B32"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>C.</given-names></name> <name><surname>Zhou</surname> <given-names>L.</given-names></name> <name><surname>Du</surname> <given-names>K.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Chen</surname> <given-names>L.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>Foundation and clinical evaluation of a new method for detecting SARS-CoV-2 antigen by fluorescent microsphere immunochromatography.</article-title> <source><italic>Front. Cell Infect. Microbiol.</italic></source> <volume>10</volume>:<issue>553837</issue>. <pub-id pub-id-type="doi">10.3389/fcimb.2020.553837</pub-id> <pub-id pub-id-type="pmid">33330119</pub-id></citation></ref>
<ref id="B33"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Q.</given-names></name> <name><surname>Zhang</surname> <given-names>M.</given-names></name> <name><surname>Chen</surname> <given-names>T.</given-names></name> <name><surname>Sun</surname> <given-names>Z.</given-names></name> <name><surname>Ma</surname> <given-names>Y.</given-names></name> <name><surname>Yu</surname> <given-names>B.</given-names></name></person-group> (<year>2019</year>). <article-title>Recent advances in convolutional neural network acceleration.</article-title> <source><italic>Neurocomputing</italic></source> <volume>323</volume> <fpage>37</fpage>&#x2013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1016/j.neucom.2018.09.038</pub-id></citation></ref>
<ref id="B34"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>S.</given-names></name> <name><surname>Lin</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>W.</given-names></name> <name><surname>He</surname> <given-names>B.</given-names></name> <name><surname>Jia</surname> <given-names>S.</given-names></name> <name><surname>Wang</surname> <given-names>P.</given-names></name><etal/></person-group> (<year>2021</year>). <article-title>MDCC-Net: multiscale double-channel convolution U-Net framework for colorectal tumor segmentation.</article-title> <source><italic>Comput. Biol. Med.</italic></source> <volume>130</volume>:<issue>104183</issue>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2020.104183</pub-id> <pub-id pub-id-type="pmid">33360107</pub-id></citation></ref>
<ref id="B35"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>Q.</given-names></name> <name><surname>Du</surname> <given-names>B.</given-names></name> <name><surname>Yan</surname> <given-names>P.</given-names></name></person-group> (<year>2019</year>). <article-title>Boundary-weighted domain adaptive neural network for prostate MR image segmentation.</article-title> <source><italic>arXiv [preprint]</italic></source> Available online at: <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.48550/arXiv.1902.08128">https://doi.org/10.48550/arXiv.1902.08128</ext-link> <pub-id pub-id-type="doi">10.1109/TMI.2019.2935018</pub-id> <pub-id pub-id-type="pmid">31425022</pub-id> <comment>(accessed August 15, 2019)</comment>.</citation></ref>
<ref id="B36"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zunair</surname> <given-names>H.</given-names></name> <name><surname>Ben Hamza</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Sharp U-Net: depthwise convolutional network for biomedical image segmentation.</article-title> <source><italic>Comput. Biol. Med.</italic></source> <volume>136</volume>:<issue>104699</issue>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2021.104699</pub-id> <pub-id pub-id-type="pmid">34348214</pub-id></citation></ref>
</ref-list>
</back>
</article>