<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Mar. Sci.</journal-id>
<journal-title>Frontiers in Marine Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Mar. Sci.</abbrev-journal-title>
<issn pub-type="epub">2296-7745</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fmars.2022.845112</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Marine Science</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>A Pyramidal Feature Fusion Model on Swimming Crab <italic>Portunus trituberculatus</italic> Re-identification</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Zhang</surname> <given-names>Kejie</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1616102/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Xin</surname> <given-names>Yu</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1688479/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Shi</surname> <given-names>Ce</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<xref ref-type="corresp" rid="c002"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/640731/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Xie</surname> <given-names>Zhijun</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1658869/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ren</surname> <given-names>Zhiming</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1662414/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Faculty of Electrical Engineering and Computer Science, Ningbo University</institution>, <addr-line>Ningbo</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Key Laboratory of Aquaculture Biotechnology, Chinese Ministry of Education, Ningbo University</institution>, <addr-line>Ningbo</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>Collaborative Innovation Center for Zhejiang Marine High-Efficiency and Healthy Aquaculture</institution>, <addr-line>Ningbo</addr-line>, <country>China</country></aff>
<aff id="aff4"><sup>4</sup><institution>Marine Economic Research Center, Dong Hai Strategic Research Institute, Ningbo University</institution>, <addr-line>Ningbo</addr-line>, <country>China</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Ping Liu, Yellow Sea Fisheries Research Institute (CAFS), China</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Fang Wang, Ocean University of China, China; Li Li, Ocean University of China, China</p></fn>
<corresp id="c001">&#x002A;Correspondence: Yu Xin, <email>xinyu@nbu.edu.cn</email></corresp>
<corresp id="c002">Ce Shi, <email>shice3210@126.com</email></corresp>
<fn fn-type="other" id="fn004"><p>This article was submitted to Marine Fisheries, Aquaculture and Living Resources, a section of the journal Frontiers in Marine Science</p></fn>
</author-notes>
<pub-date pub-type="epub">
<day>22</day>
<month>03</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>9</volume>
<elocation-id>845112</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>12</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>08</day>
<month>02</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2022 Zhang, Xin, Shi, Xie and Ren.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Zhang, Xin, Shi, Xie and Ren</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract>
<p>Swimming crab <italic>Portunus trituberculatus</italic> is a vital crab species in coastal areas of China. In this study, an individual re-identification (ReID) method based on a Pyramidal Feature Fusion Model (PFFM) for <italic>P. trituberculatus</italic> was proposed. This method took the carapace texture of <italic>P. trituberculatus</italic> as a &#x201C;biological fingerprint&#x201D; and extracted carapace texture features, including global and local features, to identify <italic>P. trituberculatus</italic>. The method also utilized a weight adaptive module to improve ReID accuracy for <italic>P. trituberculatus</italic> individuals with incomplete carapaces. To strengthen the discrimination of the extracted features, triplet loss was adopted in the model training process to improve the effectiveness of <italic>P. trituberculatus</italic> ReID. Furthermore, three experiments, i.e., the effect of the pyramidal model, <italic>P. trituberculatus</italic> feature analysis, and comparisons to the State-of-the-Arts, were carried out to evaluate PFFM performance. The results showed that the mean average precision (mAP) and Rank-1 values of the proposed method reached 93.2 and 93% in the left-half occlusion case, and 71.8 and 75.4% in the upper-half occlusion case. These experiments verified the effectiveness and robustness of the proposed method.</p>
</abstract>
<kwd-group>
<kwd>re-identification</kwd>
<kwd>deep learning</kwd>
<kwd>triplet loss</kwd>
<kwd>swimming crab</kwd>
<kwd>individual recognition</kwd>
</kwd-group>
<counts>
<fig-count count="8"/>
<table-count count="5"/>
<equation-count count="1"/>
<ref-count count="30"/>
<page-count count="10"/>
<word-count count="6241"/>
</counts>
</article-meta>
</front>
<body>
<sec id="S1" sec-type="intro">
<title>Introduction</title>
<p>The swimming crab (<italic>Portunus trituberculatus</italic>) is an economically important marine crab on the coast of China. In 2017, its highest production of 561,000 tons was recorded (<xref ref-type="bibr" rid="B5">FAO, 2020</xref>). In recent years, with the rapid development of the industry, quality and safety issues of <italic>P. trituberculatus</italic> have raised many concerns, such as heavy metal residue in edible tissues of the swimming crab (<xref ref-type="bibr" rid="B3">Barath Kumar et al., 2019</xref>; <xref ref-type="bibr" rid="B26">Yu et al., 2020</xref>; <xref ref-type="bibr" rid="B2">Bao et al., 2021</xref>; <xref ref-type="bibr" rid="B25">Yang et al., 2021</xref>). A tracing system can recall products in question promptly and detect counterfeit products, helping to resolve food security issues. Therefore, it is essential to establish a <italic>P. trituberculatus</italic> tracing system. In terms of tracing technology, there are a number of Internet of Things (IoT)-based tracking and tracing infrastructures, such as Radio Frequency Identification (RFID) and Quick Response (QR) codes (<xref ref-type="bibr" rid="B13">Landt, 2005</xref>), which primarily target product identification. However, RFID tags and QR codes are easily damaged during transportation. In contrast, images can be captured conveniently, and re-identification (ReID) methods based on image processing can identify a product without a physical label. Thus, image-based ReID has become a trend in product identification and tracing.</p>
<p>Traditional individual tracing methods have several limitations (<xref ref-type="bibr" rid="B22">Violino et al., 2019</xref>). For example, biological identification is time-consuming, chemical identification is not applicable to massive-scale datasets, and information identification cannot recognize artificially forged samples. All the methods above are complicated, and their feature extraction cost is high in large-scale applications. In contrast, State-of-the-Art artificial intelligence (AI)-based methods can overcome these challenges. AI has been applied to simplify the feature extraction process in the biometrics field (<xref ref-type="bibr" rid="B8">Hansen et al., 2018</xref>; <xref ref-type="bibr" rid="B15">Marsot et al., 2020</xref>). Specifically, AI-based methods mainly identify objects according to image features, using techniques including Histogram of Oriented Gradients (HOG) (<xref ref-type="bibr" rid="B4">Dalal and Triggs, 2005</xref>), Local Binary Patterns (LBP) (<xref ref-type="bibr" rid="B16">Ojala et al., 2002</xref>), and deep networks. These methods can extract features such as bumps, grooves, ridges, and irregular spots on the carapace of <italic>P. trituberculatus</italic>. The extracted features serve as the &#x201C;biological fingerprint&#x201D; of <italic>P. trituberculatus</italic>, which provides initial features for object ReID.</p>
<p>In terms of ReID, traditional methods were mainly applied to pedestrian re-identification, where initial visual features were used to represent pedestrians (<xref ref-type="bibr" rid="B1">Bak et al., 2010</xref>; <xref ref-type="bibr" rid="B17">Oreifej et al., 2010</xref>; <xref ref-type="bibr" rid="B10">J&#x00FC;ngling et al., 2011</xref>). To improve the effect of ReID methods, Relative Distance Comparison (RDC) (<xref ref-type="bibr" rid="B30">Zheng et al., 2012</xref>) was proposed based on PRDC (<xref ref-type="bibr" rid="B29">Zheng et al., 2011</xref>). RDC utilized the AdaBoost mechanism to reduce the dependence of model training on labeled samples.</p>
<p>With the development of deep learning, many scholars introduced deep learning into ReID, focusing on part-based methods. Deeply learned representations have high discriminative ability, especially when aggregated from deeply learned part features. In 2018, the Part-based Convolutional Baseline (PCB) model (<xref ref-type="bibr" rid="B21">Sun et al., 2018</xref>) was proposed. The PCB model divides pedestrian images into separate blocks to extract fine-grained features and achieved promising results, showing that part-based methods are effective for ReID. Meanwhile, a more detailed part-based method (<xref ref-type="bibr" rid="B7">Fu et al., 2019</xref>) that combined the divided part features into an individual &#x201C;biological fingerprint&#x201D; was proposed. The detailed part-based method performed better than PCB, and part-based strategies could further improve ReID accuracy, as in the multiple granularity network (MGN) (<xref ref-type="bibr" rid="B24">Wang et al., 2018</xref>) and the pyramidal model (<xref ref-type="bibr" rid="B27">Zheng et al., 2019</xref>). On top of part-based methods, triplet loss (<xref ref-type="bibr" rid="B19">Schroff et al., 2015</xref>) was adopted to minimize the feature distance between individuals with the same identification (ID) and maximize the feature distance between individuals with different IDs. Triplet loss offers another way to handle the ReID task and has since been widely used in ReID.</p>
<p>In the field of biological ReID, the deep convolutional neural network (DCNN) is an efficient deep learning approach that extracts features to solve ReID problems (<xref ref-type="bibr" rid="B8">Hansen et al., 2018</xref>) in a low-cost and scalable way (<xref ref-type="bibr" rid="B15">Marsot et al., 2020</xref>). However, deep learning methods require large amounts of labeled pictures to train the DCNN model (<xref ref-type="bibr" rid="B6">Ferreira et al., 2020</xref>). <xref ref-type="bibr" rid="B11">Korschens and Denzler (2019)</xref> introduced an elephant dataset for elephant ReID. The dataset contained 276 elephant individuals and provided a baseline approach for elephant ReID, which used the You Only Look Once (YOLO) (<xref ref-type="bibr" rid="B18">Redmon et al., 2016</xref>) detector to recognize elephant individuals. In 2020, a ReID method for the Southern Rock Lobster (SRL) based on convolutional neural networks (CNNs) (<xref ref-type="bibr" rid="B23">Vo et al., 2020</xref>) was proposed. The lobster ReID method used a contrastive loss function to distinguish lobsters by carapace images, showing that the loss function also contributes to ReID. In addition, the standard cross-entropy loss with a pairwise Kullback-Leibler (KL) divergence loss was used to explicitly enforce consistent, semantically constrained deep representations and showed competitive results on the Wild ReID task (<xref ref-type="bibr" rid="B20">Shukla et al., 2019</xref>). In terms of part-based methods, a part-pose guided model was proposed for tiger ReID (<xref ref-type="bibr" rid="B14">Liu et al., 2019</xref>). The model consisted of two part branches and a full branch; the part branches acted as regulators to constrain full-branch feature training on original tiger images. Part-based methods have thus proven efficient in the biological field.</p>
<p>Many approaches use machine learning and computer vision technology to identify individuals, and all individual ReID methods need a computer vision model designed according to the surface characteristics of the individual. To identify <italic>P. trituberculatus</italic> individuals, a pyramidal feature fusion model (PFFM) was developed according to the carapace characteristics of <italic>P. trituberculatus</italic>. The PFFM extracts <italic>P. trituberculatus</italic> features from local and global perspectives and effectively matches <italic>P. trituberculatus</italic> individuals. This study aims to develop a method to extract image features for <italic>P. trituberculatus</italic> individual identification. The extracted image feature is treated as a product label, by which a crab in question can be retrieved and traced. Furthermore, the proposed method could potentially be applied to re-identify other crabs with apparent characteristics on the carapace.</p>
</sec>
<sec id="S2" sec-type="materials|methods">
<title>Materials and Methods</title>
<sec id="S2.SS1">
<title>Experimental Animal</title>
<p>We collected 211 adult <italic>P. trituberculatus</italic> from a crab farm in Ningbo in March 2020. The appendages of the adult <italic>P. trituberculatus</italic> were intact. The average body weight of the experimental crab was 318.90 &#x00B1; 38.07 g (mean &#x00B1; SD), the full carapace width was 16.73 &#x00B1; 0.75 cm, and the length was 8.65 &#x00B1; 0.41 cm. After numbering the <italic>P. trituberculatus</italic>, the carapaces were pictured with a mobile phone (Huawei P30 rear camera).</p>
</sec>
<sec id="S2.SS2">
<title>Experimental Design</title>
<p>The <italic>P. trituberculatus</italic> were numbered from 1 to 211, and the carapace images were used to compose a dataset named crab-back-211. The carapace pictures were taken in multiple scenes to augment the diversity of dataset crab-back-211. The description of the scenes is shown in <xref ref-type="table" rid="T1">Table 1</xref>. In <xref ref-type="table" rid="T1">Table 1</xref>, &#x201C;25&#x201D; represents the ID of <italic>P. trituberculatus</italic> in crab-back-211, and &#x201C;0&#x2013;11&#x201D; are the IDs of 12 scenes. In the 12 scenes, ID 0 is a standard scene without any processing. By diversity augmentation, the crab-back-211 was expanded to 2,532 images. <xref ref-type="fig" rid="F1">Figure 1</xref> shows the carapace images of the 25th <italic>P. trituberculatus</italic> in 0&#x2013;11 scenes, where 25_0 is the standard scene.</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>The scene description in practice.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">ID</td>
<td valign="top" align="left">Description</td>
<td valign="top" align="center">ID</td>
<td valign="top" align="left">Description</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">25_0</td>
<td valign="top" align="left">Standard scene</td>
<td valign="top" align="center">25_6</td>
<td valign="top" align="left">Rotate <italic>P. trituberculatus</italic> counterclockwise</td>
</tr>
<tr>
<td valign="top" align="left">25_1</td>
<td valign="top" align="left">Noisy camera</td>
<td valign="top" align="center">25_7</td>
<td valign="top" align="left">Lower left part</td>
</tr>
<tr>
<td valign="top" align="left">25_2</td>
<td valign="top" align="left">Spinning <italic>P. trituberculatus</italic> clockwise</td>
<td valign="top" align="center">25_8</td>
<td valign="top" align="left">Upper right part</td>
</tr>
<tr>
<td valign="top" align="left">25_3</td>
<td valign="top" align="left">Camera with low resolution</td>
<td valign="top" align="center">25_9</td>
<td valign="top" align="left">Dark</td>
</tr>
<tr>
<td valign="top" align="left">25_4</td>
<td valign="top" align="left">Camera with high resolution</td>
<td valign="top" align="center">25_10</td>
<td valign="top" align="left">Overexposed</td>
</tr>
<tr>
<td valign="top" align="left">25_5</td>
<td valign="top" align="left">Low viewing angle</td>
<td valign="top" align="center">25_11</td>
<td valign="top" align="left">Simultaneous rotation at low viewing angle</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption><p>Pictures of different scenes of <italic>Portunus trituberculatus</italic>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fmars-09-845112-g001.tif"/>
</fig>
<p>We divided crab-back-211 into a training set and a test set, including 154 and 57 crabs, respectively. The training set was used to train the ReID model, while the test set was used to evaluate the performance of the trained model. The training set contained 154 <italic>P. trituberculatus</italic> with a total of 1,848 images, and the test set had 57 <italic>P. trituberculatus</italic> with a total of 684 images. The test set consisted of a query set and a gallery set. <italic>P. trituberculatus</italic> ReID aimed at matching images of a specified <italic>P. trituberculatus</italic> in the gallery set, given a query image from the query set. Correct matches were found by the similarities of carapace features, measured by Euclidean distance. First, carapace features of <italic>P. trituberculatus</italic> in the query and gallery sets were obtained by pyramid-based ReID (PR). Then, the similarities between query and gallery carapace features were computed as Euclidean distances, by which the match of a query image was found among the gallery images. In this experiment, 57 query images were selected as the query set to find the correct matches across 627 gallery images in the gallery set. <xref ref-type="table" rid="T2">Table 2</xref> shows the configuration of the experiment platform.</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p><italic>Portunus trituberculatus</italic> ReID experiment platform.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Hardware</td>
<td valign="top" align="center">Type</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">operating system</td>
<td valign="top" align="center">Linux</td>
</tr>
<tr>
<td valign="top" align="left">CPU</td>
<td valign="top" align="center">Intel (R) Core (TM) i7-10700K CPU @ 3.80 GHz</td>
</tr>
<tr>
<td valign="top" align="left">GPU</td>
<td valign="top" align="center">NVIDIA GeForce RTX 3090</td>
</tr>
<tr>
<td valign="top" align="left">RAM</td>
<td valign="top" align="center">16 GB</td>
</tr>
<tr>
<td valign="top" align="left">GPU memory</td>
<td valign="top" align="center">24 GB</td>
</tr>
<tr>
<td valign="top" colspan="2"><hr/></td>
</tr>
<tr>
<td valign="top" align="left"><bold>Software</bold></td>
<td valign="top" align="center"><bold>Type</bold></td>
</tr>
<tr>
<td valign="top" colspan="2"><hr/></td>
</tr>
<tr>
<td valign="top" align="left">CUDA</td>
<td valign="top" align="center">CUDA-11.0</td>
</tr>
<tr>
<td valign="top" align="left">cuDNN</td>
<td valign="top" align="center">8.1.0</td>
</tr>
<tr>
<td valign="top" align="left">Python</td>
<td valign="top" align="center">3.6.13</td>
</tr>
<tr>
<td valign="top" align="left">Pytorch</td>
<td valign="top" align="center">1.7.1</td>
</tr>
</tbody>
</table>
</table-wrap>
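<p>The matching step described above can be sketched in a few lines of Python. This is an illustrative sketch with toy feature vectors; in practice, the features come from the trained PFFM, and the function names are hypothetical, not from the authors' code.</p>

```python
# Sketch of query-to-gallery matching by Euclidean distance over carapace
# feature vectors (toy 2-D features for illustration only).
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_query(query_feat, gallery_feats):
    """Return gallery indices ranked by ascending distance to the query."""
    dists = [euclidean(query_feat, g) for g in gallery_feats]
    return sorted(range(len(gallery_feats)), key=lambda i: dists[i])

# Toy example: gallery entry 1 lies closest to the query.
gallery = [[1.0, 0.0], [0.1, 0.2], [5.0, 5.0]]
ranking = match_query([0.0, 0.0], gallery)  # ranking[0] is the best match
```

The top-ranked gallery index is taken as the ReID result; the full ranking is what the rank-1/rank-5 indicators later evaluate.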
<p>We tested the proposed method on crab-back-211. The experiment consisted of two stages, namely, the model training stage and the model inference stage.</p>
<sec id="S2.SS2.SSS1">
<title>Model Training</title>
<p>Resnet18, Resnet50, and Resnet101 were used as backbone models, pretrained on ImageNet (<xref ref-type="bibr" rid="B12">Krizhevsky et al., 2012</xref>). Random horizontal flipping and cropping were adopted to augment carapace images during data preparation. ID loss and triplet loss were combined into the global objective function for model training, with weights of 0.3 and 0.7, respectively. The margin of the triplet loss was set to 0.2. The proposed PFFM was trained for 60 epochs. In each epoch, a mini-batch of 60 images of 15 <italic>P. trituberculatus</italic> was sampled from crab-back-211, with four images per individual. We used the Adam optimizer with an initial learning rate of 10<sup>&#x2212;5</sup>, shrunk by a factor of 0.1 at epoch 30. The scale of input images was 382 &#x00D7; 128.</p>
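<p>The training hyperparameters above can be summarized in a short sketch: the global objective weights ID loss and triplet loss 0.3&#x2009;:&#x2009;0.7, and the learning rate starts at 1e-5 and shrinks by a factor of 0.1 at epoch 30. The function names are illustrative, not from the authors' implementation.</p>

```python
# Sketch of the global objective and step learning-rate schedule described
# in the text (weights and schedule from the paper; code is illustrative).

ID_WEIGHT, TRIPLET_WEIGHT = 0.3, 0.7

def global_objective(id_loss, triplet_loss):
    """Combine ID loss and triplet loss into the global training objective."""
    return ID_WEIGHT * id_loss + TRIPLET_WEIGHT * triplet_loss

def learning_rate(epoch, base_lr=1e-5, decay_epoch=30, factor=0.1):
    """Step schedule: base_lr before decay_epoch, base_lr * factor after."""
    return base_lr * factor if epoch >= decay_epoch else base_lr
```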
</sec>
<sec id="S2.SS2.SSS2">
<title>Model Inference</title>
<p>In this stage, 57 query images were selected as the query set to find the matches across 627 gallery images. Three experiments were designed to evaluate PFFM matching accuracy: the effect of the pyramidal model, <italic>P. trituberculatus</italic> feature analysis, and comparisons to the State-of-the-Arts.</p>
<list list-type="simple">
<list-item>
<label>(1)</label>
<p><bold>Effect of pyramidal model:</bold> the effect of PFFM and the effect of the pyramidal structure size on performance were empirically studied.</p>
</list-item>
<list-item>
<label>(2)</label>
<p><bold><italic>Portunus trituberculatus</italic> features analysis</bold>: the Euclidean distance between the query images and gallery images was visualized to show the discrimination of the extracted features.</p>
</list-item>
<list-item>
<label>(3)</label>
<p><bold>Comparisons to the State-of-the-Arts</bold>: The PFFM and other ReID methods were compared.</p>
</list-item>
</list>
<p>All three experiments utilized mAP (<xref ref-type="bibr" rid="B28">Zheng et al., 2015</xref>) and cumulative match characteristics (CMC) at rank-1 and rank-5 as evaluation indicators.</p>
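<p>For reference, the two indicators named above can be sketched under their standard definitions: CMC at rank-k checks whether a correct match appears among the top-k ranked gallery images, and mAP averages the per-query average precision. This sketch follows the usual ReID definitions, not the authors' evaluation code.</p>

```python
# Standard-definition sketches of CMC at rank-k and per-query average
# precision (AP); mAP is the mean of AP over all queries.

def cmc_at_k(ranked_ids, query_id, k):
    """1 if a correct match appears among the top-k ranked gallery IDs."""
    return int(query_id in ranked_ids[:k])

def average_precision(ranked_ids, query_id):
    """AP over a ranked gallery list for a single query."""
    hits, precisions = 0, []
    for rank, gid in enumerate(ranked_ids, start=1):
        if gid == query_id:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0
```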
</sec>
</sec>
<sec id="S2.SS3">
<title><italic>Portunus trituberculatus</italic> ReID Algorithm</title>
<p>For <italic>P. trituberculatus</italic> ReID, the traditional ReID method takes the whole <italic>P. trituberculatus</italic> carapace image as input and extracts a whole-image feature that represents the <italic>P. trituberculatus</italic> individual. By the feature similarity between query images and gallery images, the matching result of a query image can be found in the gallery set. However, these traditional methods overemphasize the global features of the <italic>P. trituberculatus</italic> individual and ignore some inconspicuous detailed features, so they can fail to distinguish similar <italic>P. trituberculatus</italic> individuals. Therefore, many studies comprehensively consider global and local features and focus on the contribution of key local areas to the whole feature. To extract local features, an image is divided into fixed parts. For example, in the field of pedestrian ReID, a pedestrian image is divided into three fixed parts, namely, head, upper body, and lower body, and the local features of each part are extracted separately. Different part-based frameworks have been adopted to improve the performance of ReID (<xref ref-type="bibr" rid="B21">Sun et al., 2018</xref>; <xref ref-type="bibr" rid="B7">Fu et al., 2019</xref>). The critical point of part-based methods is to align the divided fixed parts.</p>
<p>For <italic>P. trituberculatus</italic>, there were many carapace characteristics, such as protrusions, grooves, ridges, and irregular spots. <xref ref-type="fig" rid="F2">Figure 2</xref> shows two <italic>P. trituberculatus</italic> carapaces. The carapace texture and spots have significant individual discrimination. Therefore, we focused on the local texture and spot characteristics of <italic>P. trituberculatus</italic> carapaces to improve ReID accuracy.</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption><p>Pictures of two <italic>P. trituberculatus</italic> carapaces.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fmars-09-845112-g002.tif"/>
</fig>
<sec id="S2.SS3.SSS1">
<title>Pyramid-Based ReID</title>
<p>Pyramid-based ReID considered local features by dividing the original features into 18 aligned groups to strengthen the role of detailed features. Previous part-based methods, such as PCB, used several part-level features to achieve ReID. However, these methods did not consider the continuity of separate parts. We proposed a pyramid-based ReID with multilevel features based on PCB to preserve the continuity of separate parts and enhance detailed features. This multilevel framework not only effectively avoided the &#x201C;feature separation&#x201D; problem caused by part division but also smoothly merged the local and global features. In addition, a combination of ID loss and triplet loss was used to train the ReID model and strengthen the feature discrimination between <italic>P. trituberculatus</italic> individuals.</p>
<list list-type="simple">
<list-item><p><bold>(1) Part-based strategy</bold></p>
</list-item>
</list>
<p>To strengthen the local detailed features on the carapace of <italic>P. trituberculatus</italic>, we divided the carapace image into fixed separate parts. Based on the shape of the carapace, many division strategies could be designed. <xref ref-type="fig" rid="F3">Figure 3</xref> shows a six-square-grid division plan of the carapace image, with the six fixed parts numbered 1&#x2013;6. Two problems arise for the part-based strategy: (a) how to weigh the contribution of the divided parts to the whole feature and (b) how to solve the &#x201C;feature separation&#x201D; problem on the separate parts. Thus, the PFFM model, which can deeply fuse the features of the separate carapace parts, was proposed.</p>
<list list-type="simple">
<list-item><p><bold>(2) Pyramid-based strategy</bold></p>
</list-item>
</list>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption><p>Six-square grids plan on <italic>P. trituberculatus</italic>. The numbers in the picture indicate the divided parts.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fmars-09-845112-g003.tif"/>
</fig>
<p>We designed a pyramid-based strategy to fuse the local features obtained by the part-based strategy. As shown in <xref ref-type="fig" rid="F4">Figure 4</xref>, the pyramid-based strategy composed the six fixed parts in <xref ref-type="fig" rid="F3">Figure 3</xref> into 18 groups. For instance, the fixed part group (1, 2, 4, and 5) was composed of part-1, part-2, part-4, and part-5. The 18 groups were divided into four levels, forming a feature pyramid. In level-1, six basic groups were formed by the six separate parts. Level-2 contained seven groups composed from the six basic groups, and level-3 contained four groups composed from the level-2 groups. In level-4, there was only one group, the whole carapace image, representing the global feature. Each level represented a granularity of feature extraction, so the four levels extracted the carapace feature at multiple granularities. Therefore, this pyramid-based strategy extracted both global and local features and also provided a feature integration strategy for feature extraction.</p>
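<p>One plausible way to reproduce the group counts above (6 + 7 + 4 + 1 = 18) is to treat the six fixed parts as a 2 &#x00D7; 3 grid and enumerate every contiguous rectangular block of parts, bucketing the blocks by area to form the levels. The grid layout and enumeration are our assumption for illustration, not the authors' published specification.</p>

```python
# Assumed layout: six parts in a 2 x 3 grid, numbered 1..6 row by row.
# Every contiguous axis-aligned rectangular block of parts is one group;
# this yields exactly 18 groups (6 of area 1, 7 of area 2, 4 of areas 3-4,
# and 1 whole-carapace group), matching the counts in the text.
ROWS, COLS = 2, 3

def enumerate_groups():
    """All contiguous rectangular blocks of the 2 x 3 part grid."""
    groups = []
    for r0 in range(ROWS):
        for r1 in range(r0, ROWS):
            for c0 in range(COLS):
                for c1 in range(c0, COLS):
                    parts = tuple(r * COLS + c + 1
                                  for r in range(r0, r1 + 1)
                                  for c in range(c0, c1 + 1))
                    groups.append(parts)
    return groups

groups = enumerate_groups()
# Bucket by area: single parts at the bottom, the whole carapace on top.
by_size = {}
for g in groups:
    by_size.setdefault(len(g), []).append(g)
```

Under this assumption, the example group (1, 2, 4, and 5) from the text appears as the 2 &#x00D7; 2 block covering the left two columns.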
<list list-type="simple">
<list-item><p><bold>(3) Pyramidal Feature Fusion Model</bold></p>
</list-item>
</list>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption><p>Pyramid-based strategy. There are 18 groups in the four levels, covering the whole <italic>P. trituberculatus</italic> carapace characteristics locally and globally.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fmars-09-845112-g004.tif"/>
</fig>
<p><xref ref-type="fig" rid="F5">Figure 5</xref> shows the architecture of the PFFM model. PFFM model was mainly composed of feature extraction backbone, pyramid-based module, and basic convolution block module. Each module is described in the following sections.</p>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption><p>The architecture of pyramidal feature fusion model (PFFM) model. It is composed of backbone, pyramid, and basic block.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fmars-09-845112-g005.tif"/>
</fig>
</sec>
</sec>
<sec id="S2.SS4">
<title>Backbone</title>
<p>The backbone was a network for feature extraction; its objective was to extract the original features and feed them into the next module. The ResNet framework (<xref ref-type="bibr" rid="B9">He et al., 2016</xref>) is an effective backbone with strong feature extraction ability in image processing tasks such as object classification and segmentation. Therefore, we used ResNet to extract original features of the <italic>P. trituberculatus</italic> carapace. Other backbone networks, such as Resnet-50 and Resnet-101, can also be used as the basic network, and these backbones were compared with each other in the experiments.</p>
</sec>
<sec id="S2.SS5">
<title>Feature Pyramid</title>
<p>To extract the global and local features of <italic>P. trituberculatus</italic>, after the backbone, a part-based strategy was used to divide the original features into six fixed parts, as shown in <xref ref-type="fig" rid="F3">Figure 3</xref>. In <xref ref-type="fig" rid="F5">Figure 5</xref>, the input of the feature pyramid was the original features extracted by the backbone, from which the 18 groups in 4 levels were obtained by applying the pyramid-based strategy to the six fixed parts.</p>
</sec>
<sec id="S2.SS6">
<title>Basic Block</title>
<p>The basic block used in this study is shown in <xref ref-type="fig" rid="F5">Figure 5</xref>; it includes pooling, convolutional, batch normalization (BN), rectified linear unit (ReLU), full connection (FC), and softmax layers. There were 18 blocks in the basic block, which processed the features of the 18 groups output by the feature pyramid. In each block, global average pooling (GAP) and global maximum pooling (GMP) were used to capture the characteristics of different channels, such as protrusions, grooves, ridges, and irregular spots. The two pooled features of the same channel were then summed into a vector, and a convolution layer followed by BN and ReLU activation was applied. The features from the 18 blocks were concatenated, as shown in <xref ref-type="fig" rid="F6">Figure 6</xref>. ID loss and triplet loss were adopted to train the PFFM model and to capture the subtle discriminative cues in the overall features. In the inference stage, the concatenated feature was used to identify <italic>P. trituberculatus</italic>.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption><p>Feature integration module. Features in different blocks are fused, and the fused feature could be used to identify <italic>P. trituberculatus</italic>.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fmars-09-845112-g006.tif"/>
</fig>
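<p>A minimal NumPy sketch of one block and of the concatenation step is given below; the channel counts and spatial size are assumptions (the article does not specify them), and batch normalization is reduced to a per-vector standardization for illustration.</p>

```python
# Minimal NumPy sketch of one "basic block" (shapes and the 256-dim
# per-block output are assumptions; BN is simplified to standardization).
import numpy as np

rng = np.random.default_rng(0)

def basic_block(fmap, w):
    # fmap: (C, H, W) group feature map; w: (C_out, C) 1x1 conv weights
    gap = fmap.mean(axis=(1, 2))            # global average pooling -> (C,)
    gmp = fmap.max(axis=(1, 2))             # global maximum pooling -> (C,)
    v = gap + gmp                           # sum the two pooled vectors
    z = w @ v                               # 1x1 convolution on a 1x1 map
    z = (z - z.mean()) / (z.std() + 1e-5)   # batch-norm stand-in
    return np.maximum(z, 0.0)               # ReLU activation

# 18 group features from the pyramid, each (C, H, W); outputs concatenated
C, C_out = 512, 256
w = rng.standard_normal((C_out, C)) * 0.01
feats = [basic_block(rng.standard_normal((C, 4, 4)), w) for _ in range(18)]
fused = np.concatenate(feats)               # (18 * 256,) = (4608,)
```

<p>During training, each block output would additionally feed the FC and softmax layers for the ID loss, as described above; only the concatenated feature is used at inference.</p>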
</sec>
<sec id="S2.SS7">
<title>Triplet Loss</title>
<p>When triplet loss was used for model training, each <italic>P. trituberculatus</italic> individual was selected as an anchor. For each anchor, the sample with the same ID and the lowest feature similarity to the anchor was selected as the positive sample. Conversely, the sample with a different ID and the highest feature similarity was selected as the negative sample. The three selected samples (i.e., the anchor, positive, and negative samples) formed a triplet tuple. <xref ref-type="fig" rid="F7">Figure 7</xref> shows the positive and negative sample selection processes.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption><p>Example of triplet loss, where the lines represent the positive sample pair and the negative sample pair. The similarity is measured by Euclidean distance.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fmars-09-845112-g007.tif"/>
</fig>
<p>As shown in <xref ref-type="fig" rid="F7">Figure 7</xref>, each <italic>P. trituberculatus</italic> was photographed in different scenes. <italic>x</italic><sub><italic>j</italic></sub> represents the pictures of <italic>P. trituberculatus j</italic>, where <inline-formula><mml:math id="INEQ2"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>j</mml:mi><mml:mi>i</mml:mi></mml:msubsup></mml:math></inline-formula> is the picture of <italic>P. trituberculatus j</italic> in scene <italic>i</italic>. When <inline-formula><mml:math id="INEQ3"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi></mml:msubsup></mml:math></inline-formula> was selected as anchor sample, <inline-formula><mml:math id="INEQ4"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>p</mml:mi></mml:msubsup></mml:math></inline-formula> was a sample of ID <italic>i</italic>. If <inline-formula><mml:math id="INEQ5"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>p</mml:mi></mml:msubsup></mml:math></inline-formula> had the lowest similarity with <inline-formula><mml:math id="INEQ6"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi></mml:msubsup></mml:math></inline-formula>, <inline-formula><mml:math id="INEQ7"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>p</mml:mi></mml:msubsup></mml:math></inline-formula> was selected as positive sample. 
As shown in <xref ref-type="fig" rid="F7">Figure 7</xref>, the ID of <inline-formula><mml:math id="INEQ8"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>k</mml:mi><mml:mi>n</mml:mi></mml:msubsup></mml:math></inline-formula> was <italic>k</italic>, and if <inline-formula><mml:math id="INEQ9"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>k</mml:mi><mml:mi>n</mml:mi></mml:msubsup></mml:math></inline-formula> had the highest similarity with <inline-formula><mml:math id="INEQ10"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi></mml:msubsup></mml:math></inline-formula>, <inline-formula><mml:math id="INEQ11"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>k</mml:mi><mml:mi>n</mml:mi></mml:msubsup></mml:math></inline-formula> was selected as negative sample. The selected <inline-formula><mml:math id="INEQ12"><mml:mrow><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>p</mml:mi></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>k</mml:mi><mml:mi>n</mml:mi></mml:msubsup><mml:mrow><mml:mo>(</mml:mo><mml:mi>a</mml:mi><mml:mo>&#x2260;</mml:mo><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>i</mml:mi><mml:mo>&#x2260;</mml:mo><mml:mi>k</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> composed a triplet tuple. 
The tuple composed by anchor and positive sample, denoted as <inline-formula><mml:math id="INEQ13"><mml:mrow><mml:mo>(</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>p</mml:mi></mml:msubsup><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>, was positive sample pair, while the tuple by anchor and negative sample, denoted as <inline-formula><mml:math id="INEQ14"><mml:mrow><mml:mo>(</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi></mml:msubsup><mml:mo>,</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>k</mml:mi><mml:mi>n</mml:mi></mml:msubsup><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>, was negative sample pair. The purpose of triplet loss was to minimize the feature distance of the positive sample pair in each scene and maximize the feature distance of the negative sample pair. Triplet loss was expressed in the following:</p>
<disp-formula id="S2.Ex1"><mml:math id="M1"><mml:mrow><mml:mi class="ltx_font_mathcaligraphic">&#x2112;</mml:mi><mml:mo>=</mml:mo><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:mrow><mml:mrow><mml:mrow><mml:mo fence="true">||</mml:mo><mml:mrow><mml:mrow><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi></mml:msubsup><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>-</mml:mo><mml:mrow><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>p</mml:mi></mml:msubsup><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo fence="true">||</mml:mo></mml:mrow><mml:mo>-</mml:mo><mml:mrow><mml:mo fence="true">||</mml:mo><mml:mrow><mml:mrow><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi></mml:msubsup><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow><mml:mo>-</mml:mo><mml:mrow><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>k</mml:mi><mml:mi>n</mml:mi></mml:msubsup><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mrow><mml:mo fence="true">||</mml:mo></mml:mrow></mml:mrow><mml:mo>+</mml:mo><mml:mi>&#x03B1;</mml:mi></mml:mrow><mml:mo>,</mml:mo><mml:mn>0</mml:mn><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:mrow></mml:math></disp-formula>
<p>where <italic>f</italic>(&#x22C5;) is the feature extraction function, ||&#x22C5;|| is the Euclidean distance function, <inline-formula><mml:math id="INEQ17"><mml:mrow><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi></mml:msubsup><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="INEQ18"><mml:mrow><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>p</mml:mi></mml:msubsup><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="INEQ19"><mml:mrow><mml:mi>f</mml:mi><mml:mrow><mml:mo stretchy="false">(</mml:mo><mml:msubsup><mml:mi>x</mml:mi><mml:mi>k</mml:mi><mml:mi>n</mml:mi></mml:msubsup><mml:mo stretchy="false">)</mml:mo></mml:mrow></mml:mrow></mml:math></inline-formula> denote the features of anchor sample <inline-formula><mml:math id="INEQ20"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>a</mml:mi></mml:msubsup></mml:math></inline-formula>, positive sample <inline-formula><mml:math id="INEQ21"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>p</mml:mi></mml:msubsup></mml:math></inline-formula>, and negative sample <inline-formula><mml:math id="INEQ22"><mml:msubsup><mml:mi>x</mml:mi><mml:mi>k</mml:mi><mml:mi>n</mml:mi></mml:msubsup></mml:math></inline-formula>, respectively, &#x03B1; is a hyperparameter, and the value here is 0.2.</p>
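<p>The hard-mining triplet loss described above can be sketched as follows (a NumPy illustration with toy one-dimensional features; the batch construction is an assumption):</p>

```python
# Sketch of the hard-mining triplet loss: for each anchor, the hardest
# (farthest) positive and hardest (closest) negative are selected, with
# margin alpha = 0.2 as in the article.
import numpy as np

def triplet_loss(feats, ids, alpha=0.2):
    # feats: (N, D) extracted features; ids: (N,) individual IDs
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
    same = ids[:, None] == ids[None, :]
    losses = []
    for a in range(len(ids)):
        pos = same[a].copy()
        pos[a] = False                       # same ID, excluding the anchor
        neg = ~same[a]
        if not pos.any() or not neg.any():
            continue
        hardest_pos = d[a][pos].max()        # lowest-similarity positive
        hardest_neg = d[a][neg].min()        # highest-similarity negative
        losses.append(max(hardest_pos - hardest_neg + alpha, 0.0))
    return float(np.mean(losses))

feats = np.array([[0.0], [0.0], [10.0], [10.0]])
ids = np.array([1, 1, 2, 2])
print(triplet_loss(feats, ids))   # 0.0: positives coincide, negatives are far
```

<p>When all features collapse to a single point, the loss equals the margin &#x03B1; = 0.2, which is what drives positive pairs together and negative pairs apart during training.</p>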
</sec>
</sec>
<sec id="S3" sec-type="results">
<title>Results</title>
<p><xref ref-type="table" rid="T3">Table 3</xref> shows the evaluation of PFFM using different backbones (i.e., Resnet18, Resnet50, and Resnet101). <xref ref-type="table" rid="T4">Table 4</xref> shows the performance of PFFM with different division strategies, such as grid 2&#x00D7;3 and grid 2&#x00D7;4. <xref ref-type="fig" rid="F8">Figure 8</xref> visualizes the discrimination of the <italic>P. trituberculatus</italic> features by PFFM with different backbones (i.e., Resnet18, Resnet50, and Resnet101). <xref ref-type="table" rid="T5">Table 5</xref> compares the proposed PFFM with state-of-the-art methods on crab-back-211.</p>
<table-wrap position="float" id="T3">
<label>TABLE 3</label>
<caption><p>The evaluation of pyramidal feature fusion model (PFFM) using different backbones.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Backbone</td>
<td valign="top" align="center">PFFM</td>
<td valign="top" align="center">mAP (%)</td>
<td valign="top" align="center">Rank-1 (%)</td>
<td valign="top" align="center">Rank-5 (%)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Resnet18</td>
<td valign="top" align="center">&#x2713;</td>
<td valign="top" align="center">91.5</td>
<td valign="top" align="center">93.0</td>
<td valign="top" align="center">98.2</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="center">60.9</td>
<td valign="top" align="center">73.7</td>
<td valign="top" align="center">91.2</td>
</tr>
<tr>
<td valign="top" align="left">Resnet50</td>
<td valign="top" align="center">&#x2713;</td>
<td valign="top" align="center">92.5</td>
<td valign="top" align="center">93.0</td>
<td valign="top" align="center">98.2</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="center">20.7</td>
<td valign="top" align="center">17.5</td>
<td valign="top" align="center">24.6</td>
</tr>
<tr>
<td valign="top" align="left">Resnet101</td>
<td valign="top" align="center">&#x2713;</td>
<td valign="top" align="center">92.7</td>
<td valign="top" align="center">96.5</td>
<td valign="top" align="center">98.2</td>
</tr>
<tr>
<td/>
<td/>
<td valign="top" align="center">13.7</td>
<td valign="top" align="center">12.3</td>
<td valign="top" align="center">17.5</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p><italic>&#x201C;&#x2713;&#x201D; denotes that the PFFM is adopted. The metrics are mAP, rank-1, and rank-5; higher values indicate better PFFM performance.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T4">
<label>TABLE 4</label>
<caption><p>The performance of pyramidal feature fusion model (PFFM) with different division strategies.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Backbone</td>
<td valign="top" align="center">Grid (<italic>r</italic> &#x00D7; <italic>c</italic>)</td>
<td valign="top" align="center">Model size</td>
<td valign="top" align="center">mAP (%)</td>
<td valign="top" align="center">Rank-1 (%)</td>
<td valign="top" align="center">Rank-5 (%)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Resnet18</td>
<td valign="top" align="center">2&#x00D7;3</td>
<td valign="top" align="center">15.44446M</td>
<td valign="top" align="center">91.5</td>
<td valign="top" align="center">93.0</td>
<td valign="top" align="center">98.2</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">2&#x00D7;4</td>
<td valign="top" align="center">18.28975M</td>
<td valign="top" align="center">74.8</td>
<td valign="top" align="center">80.7</td>
<td valign="top" align="center">91.2</td>
</tr>
<tr>
<td valign="top" align="left">Resnet50</td>
<td valign="top" align="center">2&#x00D7;3</td>
<td valign="top" align="center">31.31492M</td>
<td valign="top" align="center">92.5</td>
<td valign="top" align="center">93.0</td>
<td valign="top" align="center">98.2</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">2&#x00D7;4</td>
<td valign="top" align="center">36.51951M</td>
<td valign="top" align="center">80.8</td>
<td valign="top" align="center">89.5</td>
<td valign="top" align="center">98.2</td>
</tr>
<tr>
<td valign="top" align="left">Resnet101</td>
<td valign="top" align="center">2&#x00D7;3</td>
<td valign="top" align="center">50.30705M</td>
<td valign="top" align="center">92.7</td>
<td valign="top" align="center">96.5</td>
<td valign="top" align="center">98.2</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">2&#x00D7;4</td>
<td valign="top" align="center">55.51164M</td>
<td valign="top" align="center">80.8</td>
<td valign="top" align="center">98.2</td>
<td valign="top" align="center">98.2</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption><p>The feature distance of <italic>P. trituberculatus</italic> in the query set to the other images in the gallery set.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fmars-09-845112-g008.tif"/>
</fig>
<table-wrap position="float" id="T5">
<label>TABLE 5</label>
<caption><p>The performance of Vgg16, Resnet50, PCB-2, and the pyramidal feature fusion model (PFFM) on crab-back-211, where PCB-2 denotes that the feature is divided into two horizontal blocks using PCB.</p></caption>
<table cellspacing="5" cellpadding="5" frame="hsides" rules="groups">
<thead>
<tr>
<td valign="top" align="left">Occlusion</td>
<td valign="top" align="center">Method</td>
<td valign="top" align="center">mAP (%)</td>
<td valign="top" align="center">Rank-1 (%)</td>
<td valign="top" align="center">Rank-5 (%)</td>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Left part</td>
<td valign="top" align="center">Vgg16</td>
<td valign="top" align="center">3.2</td>
<td valign="top" align="center">1.8</td>
<td valign="top" align="center">8.8</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">Resnet50</td>
<td valign="top" align="center">39.1</td>
<td valign="top" align="center">33.3</td>
<td valign="top" align="center">52.6</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">PCB-2</td>
<td valign="top" align="center">97.7</td>
<td valign="top" align="center">98.2</td>
<td valign="top" align="center">100</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">PFFM</td>
<td valign="top" align="center">93.2</td>
<td valign="top" align="center">93.0</td>
<td valign="top" align="center">98.2</td>
</tr>
<tr>
<td valign="top" align="left">Top part</td>
<td valign="top" align="center">Vgg16</td>
<td valign="top" align="center">3.2</td>
<td valign="top" align="center">1.8</td>
<td valign="top" align="center">8.8</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">Resnet50</td>
<td valign="top" align="center">16.4</td>
<td valign="top" align="center">8.8</td>
<td valign="top" align="center">14</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">PCB-2</td>
<td valign="top" align="center">62.2</td>
<td valign="top" align="center">49.1</td>
<td valign="top" align="center">80.7</td>
</tr>
<tr>
<td/>
<td valign="top" align="center">PFFM</td>
<td valign="top" align="center">71.8</td>
<td valign="top" align="center">75.4</td>
<td valign="top" align="center">87.7</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p><italic>PFFM uses Resnet50 as the backbone.</italic></p></fn>
</table-wrap-foot>
</table-wrap>
<p>For each backbone, the features of <italic>P. trituberculatus</italic> samples with the same ID were closer together, while those with different IDs were farther apart.</p>
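<p>For reference, the mAP and rank-<italic>k</italic> metrics used above can be computed from query&#x2013;gallery Euclidean distances; the sketch below is a simplified illustration that omits evaluation details (e.g., junk-image filtering) which the article does not describe.</p>

```python
# Simplified mAP and rank-k computation for ReID: rank the gallery by
# Euclidean distance to each query, then score matches by ID.
import numpy as np

def evaluate(q_feats, q_ids, g_feats, g_ids, k=1):
    aps, hits = [], 0
    for qf, qid in zip(q_feats, q_ids):
        dists = np.linalg.norm(g_feats - qf, axis=1)
        order = np.argsort(dists)                   # nearest gallery first
        matches = (g_ids[order] == qid)
        hits += int(matches[:k].any())              # rank-k hit for this query
        ranks = np.flatnonzero(matches)             # positions of true matches
        precisions = (np.arange(len(ranks)) + 1) / (ranks + 1)
        aps.append(precisions.mean())               # average precision
    return float(np.mean(aps)), hits / len(q_ids)   # (mAP, rank-k accuracy)

g_feats = np.array([[0.0], [1.0], [5.0]])
g_ids = np.array([7, 7, 8])
q_feats = np.array([[0.1]])
q_ids = np.array([7])
mAP, rank1 = evaluate(q_feats, q_ids, g_feats, g_ids, k=1)
print(mAP, rank1)   # 1.0 1.0
```

<p>In this toy case both true matches rank ahead of the distractor, so mAP and rank-1 are both 1.0; occluding queries, as in the experiments above, degrades these scores.</p>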
</sec>
<sec id="S4" sec-type="discussion">
<title>Discussion</title>
<sec id="S4.SS1">
<title>Effect of Pyramidal Feature Fusion Model</title>
<list list-type="simple">
<list-item><p><bold>(1) The benefit of PFFM</bold></p>
</list-item>
</list>
<p>The purpose of this experiment was to verify the effectiveness of PFFM. The experiment used three backbones (i.e., Resnet18, Resnet50, and Resnet101) to extract the original image features and formed six comparison groups: Resnet18, Resnet18-pyramid, Resnet50, Resnet50-pyramid, Resnet101, and Resnet101-pyramid. The Resnet18-pyramid, Resnet50-pyramid, and Resnet101-pyramid groups adopted the proposed PFFM as the ReID model. To verify the robustness of each method, we occluded the left half of the images in the query set and ran the six comparison methods on the retained right half to find the ReID target in the gallery set. The experimental results are shown in <xref ref-type="table" rid="T3">Table 3</xref>. The methods using PFFM (i.e., Resnet18-pyramid, Resnet50-pyramid, and Resnet101-pyramid) performed better than those not using it. The mAP and rank-1 of Resnet18-pyramid increased by 39.6 and 19.3%, respectively, compared with Resnet18. Among the comparison methods, those using PFFM performed best on mAP and rank-1.</p>
<p>The Resnet18, Resnet50, and Resnet101 models mainly used global features of <italic>P. trituberculatus</italic> for ReID. When the query set was occluded, a large global feature deviation would occur, which led to a decrease in mAP, rank-1, and rank-5. The proposed PFFM utilized local features to compensate for global features, to alleviate the global feature deviation problem, and to strengthen its robustness. This experiment also showed that PFFM with Resnet101 as backbone performed better than other comparison methods.</p>
<list list-type="simple">
<list-item><p><bold>(2) Comparison of part division strategy</bold></p>
</list-item>
</list>
<p>To determine the optimal division strategy of PFFM, this experiment compared two division strategies, i.e., grid 2 &#x00D7; 3 and grid 2 &#x00D7; 4. Three backbones (i.e., Resnet18, Resnet50, and Resnet101) were used to extract the original image features, and the results obtained with grid 2 &#x00D7; 3 and grid 2 &#x00D7; 4 are shown in <xref ref-type="table" rid="T4">Table 4</xref>. The model with grid 2 &#x00D7; 4 had more parameters than the model with grid 2 &#x00D7; 3, and in terms of accuracy, the model with grid 2 &#x00D7; 3 also had an advantage. With grid 2 &#x00D7; 4, the divided parts were too small to maintain the continuity of local features, which could affect ReID accuracy and increase the burden of model training. Based on this experiment, the optimal division of PFFM was grid 2 &#x00D7; 3.</p>
</sec>
<sec id="S4.SS2">
<title><italic>Portunus trituberculatus</italic> Feature Analysis</title>
<p>To verify the compatibility of the PFFM, this experiment used different backbones with PFFM to analyze the feature distances of <italic>P. trituberculatus</italic>. The feature distances were calculated both among samples with the same ID and among samples with different IDs. <xref ref-type="fig" rid="F8">Figure 8</xref> shows the feature distance distributions of <italic>P. trituberculatus</italic> obtained with Resnet18, Resnet50, and Resnet101. In <xref ref-type="fig" rid="F8">Figure 8</xref>, the <italic>x</italic>-axis indicates the <italic>P. trituberculatus</italic> IDs, and the <italic>y</italic>-axis shows the distances between <italic>P. trituberculatus</italic> samples. The dark scatter points represent the feature distances among samples with the same ID, and the light scatter points represent the feature distances among samples with different IDs. The dark scatter points are mainly distributed at the bottom, indicating that the distances among samples with the same ID were small, whereas the distances among samples with different IDs were greater and fluctuated widely. Thus, samples with the same ID were closer, while samples with different IDs were farther apart. In addition, the PFFM provided a high discriminative ability, especially when features were aggregated from the fixed parts. The discriminative features reflected the specificity of each <italic>P. trituberculatus</italic>, making them well suited for <italic>P. trituberculatus</italic> ReID. Based on this experiment, the PFFM achieved better results on crab-back-211 with various backbones, so the proposed PFFM had better compatibility for <italic>P. trituberculatus</italic> ReID.</p>
</sec>
<sec id="S4.SS3">
<title>Comparison With the State of the Art</title>
<p>This experiment compared PFFM with other methods on crab-back-211; <xref ref-type="table" rid="T5">Table 5</xref> shows the comparison results. We occluded the left or top half of the images in the query set to test the robustness of these methods, using occlusion to simulate the worse conditions encountered in practice. Vgg16, Resnet50, and PCB were selected as comparison methods. For ReID models, the backbone, such as Vgg16 or Resnet50, was the preceding process used to generate the original features; the ReID models then extracted <italic>P. trituberculatus</italic> features from these original features, and the Euclidean distance between the extracted features was used to identify <italic>P. trituberculatus</italic> individuals. For example, PCB was a ReID model that divided the original feature into two horizontal blocks for training and inference. Our study proposed PFFM using Resnet50 as the backbone. As seen from <xref ref-type="table" rid="T5">Table 5</xref>, the mAPs of Vgg16 and Resnet50 were much lower than those of PCB and PFFM, which implied that models using a part-based strategy had better robustness. Therefore, a method that only considered global features could not accurately identify <italic>P. trituberculatus</italic> with an incomplete carapace. The mAP of PFFM was 9.6% higher than that of PCB in the upper-half occlusion case, while the rank-1 was 26.3% higher and the rank-5 was 7% higher. Therefore, the proposed PFFM had better robustness and could effectively identify <italic>P. trituberculatus</italic> individuals with an incomplete carapace.</p>
</sec>
</sec>
<sec id="S5" sec-type="conclusion">
<title>Conclusion</title>
<p>In this study, a part-based PFFM model for <italic>P. trituberculatus</italic> ReID was designed. The model divided and merged the original features obtained by the backbone, extracting global and multilevel local features of <italic>P. trituberculatus</italic>. The proposed PFFM utilized local features to compensate for global features, alleviating the global feature deviation problem and strengthening robustness. In the model training process, ID loss and triplet loss were adopted to minimize the feature distance between <italic>P. trituberculatus</italic> individuals with the same ID and maximize the feature distance between individuals with different IDs. The experimental results showed that the PFFM performed better when Resnet50 was used as the backbone, and the best division strategy was grid 2 &#x00D7; 3. The PFFM maintained a high ReID accuracy for <italic>P. trituberculatus</italic> in the incomplete carapace case. However, we discussed only adult <italic>P. trituberculatus</italic> ReID. Morphological changes of the carapace characteristics may occur over the growth cycle of <italic>P. trituberculatus</italic>, and the proposed PFFM method is sensitive to the shape of the carapace; therefore, the PFFM method is currently insufficient for <italic>P. trituberculatus</italic> ReID across the growth cycle. In future research, the preparation of adequate training data is essential, and the application of PFFM to other animal ReID scenarios can be investigated.</p>
</sec>
<sec id="S6" sec-type="data-availability">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="S7">
<title>Author Contributions</title>
<p>KZ: methodology, data curation, visualization, and writing&#x2014;original draft. YX and CS: conceptualization, writing&#x2014;review and editing, project administration, and funding acquisition. ZX: resources, and review and editing. ZR: resources. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec id="conf1" sec-type="COI-statement">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec id="pudiscl1" sec-type="disclaimer">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</body>
<back>
<sec id="S8" sec-type="funding-information">
<title>Funding</title>
<p>This study was supported by the National Key Research and Development Program of China (Project No. 2019YFD0901000), Special Research Funding from the Marine Biotechnology and Marine Engineering Discipline Group in Ningbo University (No. 422004582), the National Natural Science Foundation of China (Grant Nos. 41776164 and 31972783), the Natural Science Foundation of Ningbo (2019A610424), 2025 Technological Innovation for Ningbo (2019B10010), the National Key R&#x0026;D Program of China (2018YFD0901304), the Ministry of Agriculture of China and China Agriculture Research System (No. CARS48), K. C. Wong Magna Fund in Ningbo University and the Scientific Research Foundation of Graduate School of Ningbo University (IF2020145), the Natural Science Foundation of Zhejiang Province (Grant No. LY22F020001), and the 3315 Plan Foundation of Ningbo (Grant No. 2019B-18-G).</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bak</surname> <given-names>S.</given-names></name> <name><surname>Corvee</surname> <given-names>E.</given-names></name> <name><surname>Bremond</surname> <given-names>F.</given-names></name> <name><surname>Thonnat</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). &#x201C;<article-title>Person re-identification using haar-based and dcd-based signature</article-title>,&#x201D; in <source><italic>Proceedings of the 2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance</italic></source> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>). <pub-id pub-id-type="doi">10.1109/AVSS.2010.68</pub-id></citation></ref>
<ref id="B2"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bao</surname> <given-names>C.</given-names></name> <name><surname>Cai</surname> <given-names>Q.</given-names></name> <name><surname>Ying</surname> <given-names>X.</given-names></name> <name><surname>Zhu</surname> <given-names>Y.</given-names></name> <name><surname>Ding</surname> <given-names>Y.</given-names></name> <name><surname>Murk</surname> <given-names>T. A. J.</given-names></name></person-group> (<year>2021</year>). <article-title>Health risk assessment of arsenic and some heavy metals in the edible crab (<italic>Portunus trituberculatus</italic>) collected from Hangzhou Bay, China.</article-title> <source><italic>Mar. Pollut. Bull.</italic></source> <volume>173(Pt A)</volume>:<issue>113007</issue>. <pub-id pub-id-type="doi">10.1016/j.marpolbul.2021.113007</pub-id> <pub-id pub-id-type="pmid">34607129</pub-id></citation></ref>
<ref id="B3"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barath Kumar</surname> <given-names>S.</given-names></name> <name><surname>Padhi</surname> <given-names>R. K.</given-names></name> <name><surname>Satpathy</surname> <given-names>K. K.</given-names></name></person-group> (<year>2019</year>). <article-title>Trace metal distribution in crab organs and human health risk assessment on consumption of crabs collected from coastal water of South East coast of India.</article-title> <source><italic>Mar. Pollut. Bull.</italic></source> <volume>141</volume> <fpage>273</fpage>&#x2013;<lpage>282</lpage>. <pub-id pub-id-type="doi">10.1016/j.marpolbul.2019.02.022</pub-id> <pub-id pub-id-type="pmid">30955735</pub-id></citation></ref>
<ref id="B4"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dalal</surname> <given-names>N.</given-names></name> <name><surname>Triggs</surname> <given-names>B.</given-names></name></person-group> (<year>2005</year>). &#x201C;<article-title>Histograms of oriented gradients for human detection</article-title>,&#x201D; in <source><italic>Proceedings of the 2005 IEEE Computer Society Conference On Computer Vision And Pattern Recognition (CVPR&#x2019;05)</italic></source>, <volume>Vol. 1</volume> (<publisher-loc>Piscataway, NJ</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>886</fpage>&#x2013;<lpage>893</lpage>.</citation></ref>
<ref id="B5"><citation citation-type="journal"><collab>FAO</collab> (<year>2020</year>). <source><italic>The State of World Fisheries and Aquaculture 2020: Sustainability in action.</italic></source> <publisher-loc>Rome</publisher-loc>: <publisher-name>FAO</publisher-name>.</citation></ref>
<ref id="B6"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ferreira</surname> <given-names>A. C.</given-names></name> <name><surname>Silva</surname> <given-names>L. R.</given-names></name> <name><surname>Renna</surname> <given-names>F.</given-names></name> <name><surname>Brandl</surname> <given-names>H. B.</given-names></name> <name><surname>Renoult</surname> <given-names>J. P.</given-names></name> <name><surname>Farine</surname> <given-names>D. R.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>Deep learning-based methods for individual recognition in small birds.</article-title> <source><italic>Methods Ecol. Evol.</italic></source> <volume>11</volume> <fpage>1072</fpage>&#x2013;<lpage>1085</lpage>. <pub-id pub-id-type="doi">10.1111/2041-210X.13436</pub-id></citation></ref>
<ref id="B7"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fu</surname> <given-names>Y.</given-names></name> <name><surname>Wei</surname> <given-names>Y.</given-names></name> <name><surname>Zhou</surname> <given-names>Y.</given-names></name> <name><surname>Shi</surname> <given-names>H.</given-names></name> <name><surname>Huang</surname> <given-names>G.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name><etal/></person-group> (<year>2019</year>). <article-title>Horizontal pyramid matching for person re-identification.</article-title> <source><italic>Proc. AAAI Conf. Artif. Intell.</italic></source> <volume>33</volume> <fpage>8295</fpage>&#x2013;<lpage>8302</lpage>. <pub-id pub-id-type="doi">10.1609/aaai.v33i01.33018295</pub-id></citation></ref>
<ref id="B8"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hansen</surname> <given-names>M. F.</given-names></name> <name><surname>Smith</surname> <given-names>M. L.</given-names></name> <name><surname>Smith</surname> <given-names>L. N.</given-names></name> <name><surname>Salter</surname> <given-names>M. G.</given-names></name> <name><surname>Baxter</surname> <given-names>E. M.</given-names></name> <name><surname>Farish</surname> <given-names>M.</given-names></name><etal/></person-group> (<year>2018</year>). <article-title>Towards on-farm pig face recognition using convolutional neural networks.</article-title> <source><italic>Comput. Ind.</italic></source> <volume>98</volume> <fpage>145</fpage>&#x2013;<lpage>152</lpage>. <pub-id pub-id-type="doi">10.1016/j.compind.2018.02.016</pub-id></citation></ref>
<ref id="B9"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>K.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Ren</surname> <given-names>S.</given-names></name> <name><surname>Sun</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). &#x201C;<article-title>Deep residual learning for image recognition</article-title>,&#x201D; in <source><italic>Proceedings of the IEEE Conference On Computer Vision And Pattern Recognition</italic></source>, <publisher-loc>Las Vegas, NV</publisher-loc>, <fpage>770</fpage>&#x2013;<lpage>778</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2016.90</pub-id></citation></ref>
<ref id="B10"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>J&#x00FC;ngling</surname> <given-names>K.</given-names></name> <name><surname>Bodensteiner</surname> <given-names>C.</given-names></name> <name><surname>Arens</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). &#x201C;<article-title>Person re-identification in multi-camera networks</article-title>,&#x201D; in <source><italic>Proceedings of the CVPR 2011 Workshops</italic></source> (<publisher-loc>Colorado Springs, CO</publisher-loc>: <publisher-name>IEEE</publisher-name>). <pub-id pub-id-type="doi">10.1109/CVPRW.2011.5981771</pub-id></citation></ref>
<ref id="B11"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Korschens</surname> <given-names>M.</given-names></name> <name><surname>Denzler</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Elpephants: a fine-grained dataset for elephant re-identification</article-title>,&#x201D; in <source><italic>Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops</italic></source>, <publisher-loc>Seoul</publisher-loc>. <pub-id pub-id-type="doi">10.1109/ICCVW.2019.00035</pub-id></citation></ref>
<ref id="B12"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Krizhevsky</surname> <given-names>A.</given-names></name> <name><surname>Sutskever</surname> <given-names>I.</given-names></name> <name><surname>Hinton</surname> <given-names>G. E.</given-names></name></person-group> (<year>2012</year>). <article-title>ImageNet classification with deep convolutional neural networks.</article-title> <source><italic>Adv. Neural Inf. Process. Syst.</italic></source> <volume>25</volume> <fpage>1097</fpage>&#x2013;<lpage>1105</lpage>.</citation></ref>
<ref id="B13"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Landt</surname> <given-names>J.</given-names></name></person-group> (<year>2005</year>). <article-title>The history of RFID.</article-title> <source><italic>IEEE Potentials</italic></source> <volume>24</volume> <fpage>8</fpage>&#x2013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1109/mp.2005.1549751</pub-id></citation></ref>
<ref id="B14"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>C.</given-names></name> <name><surname>Zhang</surname> <given-names>R.</given-names></name> <name><surname>Guo</surname> <given-names>L.</given-names></name></person-group> (<year>2019</year>). &#x201C;<article-title>Part-pose guided amur tiger re-identification</article-title>,&#x201D; in <source><italic>Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops</italic></source>, <publisher-loc>Seoul</publisher-loc>. <pub-id pub-id-type="doi">10.1109/ICCVW.2019.00042</pub-id></citation></ref>
<ref id="B15"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Marsot</surname> <given-names>M.</given-names></name> <name><surname>Mei</surname> <given-names>J.</given-names></name> <name><surname>Shan</surname> <given-names>X.</given-names></name> <name><surname>Ye</surname> <given-names>L.</given-names></name> <name><surname>Feng</surname> <given-names>P.</given-names></name> <name><surname>Yan</surname> <given-names>X.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>An adaptive pig face recognition approach using Convolutional Neural Networks.</article-title> <source><italic>Comput. Electron. Agric.</italic></source> <volume>173</volume>:<issue>105386</issue>. <pub-id pub-id-type="doi">10.1016/j.compag.2020.105386</pub-id></citation></ref>
<ref id="B16"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ojala</surname> <given-names>T.</given-names></name> <name><surname>Pietikainen</surname> <given-names>M.</given-names></name> <name><surname>Maenpaa</surname> <given-names>T.</given-names></name></person-group> (<year>2002</year>). <article-title>Multiresolution gray-scale and rotation invariant texture classification with local binary patterns.</article-title> <source><italic>IEEE Trans. Pattern Anal. Machine Intell.</italic></source> <volume>24</volume> <fpage>971</fpage>&#x2013;<lpage>987</lpage>. <pub-id pub-id-type="doi">10.1109/tpami.2002.1017623</pub-id></citation></ref>
<ref id="B17"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oreifej</surname> <given-names>O.</given-names></name> <name><surname>Mehran</surname> <given-names>R.</given-names></name> <name><surname>Shah</surname> <given-names>M.</given-names></name></person-group> (<year>2010</year>). &#x201C;<article-title>Human identity recognition in aerial images</article-title>,&#x201D; in <source><italic>Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition</italic></source> (<publisher-loc>San Francisco, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>). <pub-id pub-id-type="doi">10.1109/CVPR.2010.5540147</pub-id></citation></ref>
<ref id="B18"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Redmon</surname> <given-names>J.</given-names></name> <name><surname>Divvala</surname> <given-names>S.</given-names></name> <name><surname>Girshick</surname> <given-names>R.</given-names></name> <name><surname>Farhadi</surname> <given-names>A.</given-names></name></person-group> (<year>2016</year>). &#x201C;<article-title>You only look once: unified, real-time object detection</article-title>,&#x201D; in <source><italic>Proceedings of the IEEE Conference On Computer Vision And Pattern Recognition</italic></source>, <publisher-loc>Las Vegas, NV</publisher-loc>, <fpage>779</fpage>&#x2013;<lpage>788</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2016.91</pub-id></citation></ref>
<ref id="B19"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schroff</surname> <given-names>F.</given-names></name> <name><surname>Kalenichenko</surname> <given-names>D.</given-names></name> <name><surname>Philbin</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). &#x201C;<article-title>Facenet: a unified embedding for face recognition and clustering</article-title>,&#x201D; in <source><italic>Proceedings of the IEEE Conference On Computer Vision And Pattern Recognition</italic></source>, <publisher-loc>Boston, MA</publisher-loc>. <pub-id pub-id-type="doi">10.1109/CVPR.2015.7298682</pub-id></citation></ref>
<ref id="B20"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shukla</surname> <given-names>A.</given-names></name> <name><surname>Singh Cheema</surname> <given-names>G.</given-names></name> <name><surname>Gao</surname> <given-names>P.</given-names></name> <name><surname>Onda</surname> <given-names>S.</given-names></name> <name><surname>Anshumaan</surname> <given-names>D.</given-names></name> <name><surname>Anand</surname> <given-names>S.</given-names></name><etal/></person-group> (<year>2019</year>). &#x201C;<article-title>A hybrid approach to tiger re-identification</article-title>,&#x201D; in <source><italic>Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops</italic></source>, <publisher-loc>Seoul</publisher-loc>. <pub-id pub-id-type="doi">10.1109/ICCVW.2019.00039</pub-id></citation></ref>
<ref id="B21"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sun</surname> <given-names>Y.</given-names></name> <name><surname>Zheng</surname> <given-names>L.</given-names></name> <name><surname>Yang</surname> <given-names>Y.</given-names></name> <name><surname>Tian</surname> <given-names>Q.</given-names></name> <name><surname>Wang</surname> <given-names>S.</given-names></name></person-group> (<year>2018</year>). &#x201C;<article-title>Beyond part models: person retrieval with refined part pooling (and a strong convolutional baseline)</article-title>,&#x201D; in <source><italic>Proceedings of the European Conference On Computer Vision (ECCV)</italic></source> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>).</citation></ref>
<ref id="B22"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Violino</surname> <given-names>S.</given-names></name> <name><surname>Antonucci</surname> <given-names>F.</given-names></name> <name><surname>Pallottino</surname> <given-names>F.</given-names></name> <name><surname>Cecchini</surname> <given-names>C.</given-names></name> <name><surname>Figorilli</surname> <given-names>S.</given-names></name> <name><surname>Costa</surname> <given-names>C.</given-names></name></person-group> (<year>2019</year>). <article-title>Food traceability: a term map analysis basic review.</article-title> <source><italic>Eur. Food Res. Technol.</italic></source> <volume>245</volume> <fpage>2089</fpage>&#x2013;<lpage>2099</lpage>. <pub-id pub-id-type="doi">10.1007/s00217-019-03321-0</pub-id></citation></ref>
<ref id="B23"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vo</surname> <given-names>S. A.</given-names></name> <name><surname>Scanlan</surname> <given-names>J.</given-names></name> <name><surname>Turner</surname> <given-names>P.</given-names></name> <name><surname>Ollington</surname> <given-names>R.</given-names></name></person-group> (<year>2020</year>). <article-title>Convolutional neural networks for individual identification in the southern rock lobster supply chain.</article-title> <source><italic>Food Control</italic></source> <volume>118</volume>:<issue>107419</issue>. <pub-id pub-id-type="doi">10.1016/j.foodcont.2020.107419</pub-id></citation></ref>
<ref id="B24"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>G.</given-names></name> <name><surname>Yuan</surname> <given-names>Y.</given-names></name> <name><surname>Chen</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Zhou</surname> <given-names>X.</given-names></name></person-group> (<year>2018</year>). &#x201C;<article-title>Learning discriminative features with multiple granularities for person re-identification</article-title>,&#x201D; in <source><italic>Proceedings of the 26th ACM International Conference On Multimedia</italic></source>, <publisher-loc>Seoul</publisher-loc>. <pub-id pub-id-type="doi">10.1145/3240508.3240552</pub-id></citation></ref>
<ref id="B25"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>D.</given-names></name> <name><surname>Xin</surname> <given-names>C.</given-names></name> <name><surname>Wang</surname> <given-names>L.</given-names></name> <name><surname>Ren</surname> <given-names>X.</given-names></name> <name><surname>Guo</surname> <given-names>M.</given-names></name><etal/></person-group> (<year>2021</year>). <article-title>An analysis of the heavy element distribution in edible tissues of the swimming crab (<italic>Portunus trituberculatus</italic>) from Shandong Province, China and its human consumption risk.</article-title> <source><italic>Mar. Pollut. Bull.</italic></source> <volume>169</volume>:<issue>112473</issue>. <pub-id pub-id-type="doi">10.1016/j.marpolbul.2021.112473</pub-id> <pub-id pub-id-type="pmid">34022561</pub-id></citation></ref>
<ref id="B26"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>D.</given-names></name> <name><surname>Peng</surname> <given-names>X.</given-names></name> <name><surname>Ji</surname> <given-names>C.</given-names></name> <name><surname>Li</surname> <given-names>F.</given-names></name> <name><surname>Wu</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>Metal pollution and its biological effects in swimming crab <italic>Portunus trituberculatus</italic> by NMR-based metabolomics.</article-title> <source><italic>Mar. Pollut. Bull.</italic></source> <volume>157</volume>:<issue>111307</issue>. <pub-id pub-id-type="doi">10.1016/j.marpolbul.2020.111307</pub-id> <pub-id pub-id-type="pmid">32469745</pub-id></citation></ref>
<ref id="B27"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>F.</given-names></name> <name><surname>Deng</surname> <given-names>C.</given-names></name> <name><surname>Sun</surname> <given-names>X.</given-names></name> <name><surname>Jiang</surname> <given-names>X.</given-names></name> <name><surname>Guo</surname> <given-names>X.</given-names></name> <name><surname>Yu</surname> <given-names>Z.</given-names></name><etal/></person-group> (<year>2019</year>). &#x201C;<article-title>Pyramidal person re-identification via multi-loss dynamic training</article-title>,&#x201D; in <source><italic>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</italic></source>, <publisher-loc>Long Beach, CA</publisher-loc>. <pub-id pub-id-type="doi">10.1109/CVPR.2019.00871</pub-id></citation></ref>
<ref id="B28"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>L.</given-names></name> <name><surname>Shen</surname> <given-names>L.</given-names></name> <name><surname>Tian</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>S.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Tian</surname> <given-names>Q.</given-names></name></person-group> (<year>2015</year>). &#x201C;<article-title>Scalable person re-identification: a benchmark</article-title>,&#x201D; in <source><italic>Proceedings of the IEEE International Conference On Computer Vision</italic></source>, <publisher-loc>Santiago</publisher-loc>, <fpage>1116</fpage>&#x2013;<lpage>1124</lpage>. <pub-id pub-id-type="doi">10.1109/ICCV.2015.133</pub-id></citation></ref>
<ref id="B29"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>W.-S.</given-names></name> <name><surname>Gong</surname> <given-names>S.</given-names></name> <name><surname>Xiang</surname> <given-names>T.</given-names></name></person-group> (<year>2011</year>). &#x201C;<article-title>Person re-identification by probabilistic relative distance comparison</article-title>,&#x201D; in <source><italic>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011</italic></source> (<publisher-loc>Colorado Springs, CO</publisher-loc>: <publisher-name>IEEE</publisher-name>). <pub-id pub-id-type="doi">10.1109/CVPR.2011.5995598</pub-id></citation></ref>
<ref id="B30"><citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>W.-S.</given-names></name> <name><surname>Gong</surname> <given-names>S.</given-names></name> <name><surname>Xiang</surname> <given-names>T.</given-names></name></person-group> (<year>2012</year>). <article-title>Reidentification by relative distance comparison.</article-title> <source><italic>IEEE Trans. Pattern Anal. Machine Intell.</italic></source> <volume>35</volume> <fpage>653</fpage>&#x2013;<lpage>668</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2012.138</pub-id> <pub-id pub-id-type="pmid">22732661</pub-id></citation></ref>
</ref-list>
</back>
</article>
