CORRECTION article

Front. Mar. Sci., 21 September 2022

Sec. Marine Megafauna

Volume 9 - 2022 | https://doi.org/10.3389/fmars.2022.998145

Corrigendum: Rise of the machines: Best practices and experimental evaluation of computer-assisted dorsal fin image matching systems for bottlenose dolphins

  • 1. Chicago Zoological Society’s Sarasota Dolphin Research Program, c/o Mote Marine Laboratory, Sarasota, FL, United States

  • 2. Duke University Marine Laboratory, Nicholas School of the Environment, Duke University, Beaufort, NC, United States

  • 3. Wild Me, Portland, OR, United States

  • 4. Valdez AI Consulting, Laramie, WY, United States


Error in Figure/Table

In the published article, the images for Figure 6 and Figure 7 were transposed: the image for Figure 6 was labeled as Figure 7, and the image for Figure 7 was labeled as Figure 6. In addition, the caption of Figure 6 referenced values that were not applicable to that figure. The corrected Figure 6 and Figure 7 and their captions appear below.

Figure 6

The percentage of images correctly matched by each algorithm and combination of algorithms (Flukebook algorithms only) and their cumulative rank position for the ideal tests in the (A) one-to-many annotations comparisons and (B) the one-to-many names comparisons; as well as the percentage of images of varying image quality and fin distinctiveness correctly matched by the independent algorithms for the ideal tests in the (C) one-to-many annotations comparisons and (D) the one-to-many names comparisons. For reference, Q1 = excellent quality image, Q2 = average quality image, D1 = very distinctive fin, and D2 = average amount of distinctive features on fin (Urian et al., 1999; Urian et al., 2014).

Figure 7

The percentage of images correctly matched by each algorithm and combination of algorithms (Flukebook algorithms only) and their cumulative rank position for the equal matchability tests in the (A) one-to-many annotations comparisons and (B) the one-to-many names comparisons; as well as the percentage of images of varying image quality and fin distinctiveness correctly matched by the independent algorithms for the equal matchability tests in the (C) one-to-many annotations comparisons and (D) the one-to-many names comparisons. For reference, Q1 = excellent quality image, Q2 = average quality image, Q3 = poor quality image, D1 = very distinctive fin, D2 = average amount of distinctive features on fin, D3 = low distinctiveness, and D4 = not distinct fin (Urian et al., 1999; Urian et al., 2014).

In the published article, there was an error in Table 1. Several of the percentages of images in the first-ranked position for the ideal tests in the one-to-many names comparisons were incorrect. The corrected Table 1 and its caption appear below.

Table 1

Comprehensive Test (N = 604)

| Algorithm(s) evaluated | One-to-many annotations: top-X | One-to-many annotations: first position | One-to-many names: top-X | One-to-many names: first position |
| --- | --- | --- | --- | --- |
| finFindR R Application | 86.09% (top 50) | 69.21% | 90.07% (top 50) | 71.03% |
| Flukebook - finFindR | 78.31% (top 50) | 66.39% | 79.14% (top 50) | 52.32% |
| Flukebook - CurvRank | 82.95% (top 50) | 70.20% | 82.28% (top 50) | 70.03% |
| Flukebook - CurvRank v2 | 88.08% (top 50) | 72.85% | 88.58% (top 50) | 75.17% |
| Flukebook - finFindR + CurvRank | 89.57% (top 50) | 77.98% | 89.57% (top 50) | 73.51% |
| Flukebook - finFindR + CurvRank v2 | 91.56% (top 50) | 79.64% | 92.05% (top 50) | 76.99% |
| Flukebook - CurvRank + CurvRank v2 | 90.56% (top 50) | 79.14% | 89.74% (top 50) | 78.81% |
| Flukebook - CurvRank + CurvRank v2 + finFindR | 92.55% (top 50) | 81.62% | 92.38% (top 50) | 79.80% |

Ideal Test (N = 186)

| Algorithm(s) evaluated | One-to-many annotations: top-X | One-to-many annotations: first position | One-to-many names: top-X | One-to-many names: first position |
| --- | --- | --- | --- | --- |
| finFindR R Application | 98.92% (top 50) | 91.94% | 98.92% (top 50) | 91.94% |
| Flukebook - finFindR | 88.71% (top 50) | 83.87% | 89.25% (top 50) | 70.43% |
| Flukebook - CurvRank | 98.92% (top 50) | 95.70% | 100.00% (top 47) | 96.77% |
| Flukebook - CurvRank v2 | 99.46% (top 50) | 93.01% | 99.46% (top 50) | 96.77% |
| Flukebook - finFindR + CurvRank | 100.00% (top 40) | 96.77% | 100.00% (top 17) | 96.77% |
| Flukebook - finFindR + CurvRank v2 | 99.46% (top 50) | 96.77% | 99.46% (top 11) | 97.31% |
| Flukebook - CurvRank + CurvRank v2 | 100.00% (top 33) | 97.85% | 100.00% (top 17) | 98.39% |
| Flukebook - CurvRank + CurvRank v2 + finFindR | 100.00% (top 33) | 98.39% | 100.00% (top 17) | 98.39% |

Equal Matchability Test (N = 2,485)

| Algorithm(s) evaluated | One-to-many annotations: top-X | One-to-many annotations: first position | One-to-many names: top-X | One-to-many names: first position |
| --- | --- | --- | --- | --- |
| finFindR R Application | 81.88% (top 49) | 54.10% | 81.88% (top 49) | 54.10% |
| Flukebook - finFindR | 71.67% (top 49) | 55.09% | 71.71% (top 49) | 17.46% |
| Flukebook - CurvRank | 76.41% (top 49) | 60.95% | 76.23% (top 49) | 61.15% |
| Flukebook - CurvRank v2 | 84.32% (top 49) | 64.08% | 84.90% (top 49) | 64.74% |
| Flukebook - finFindR + CurvRank | 88.06% (top 49) | 71.99% | 87.78% (top 49) | 63.25% |
| Flukebook - finFindR + CurvRank v2 | 91.21% (top 49) | 74.27% | 91.31% (top 49) | 66.28% |
| Flukebook - CurvRank + CurvRank v2 | 88.84% (top 49) | 72.98% | 88.85% (top 49) | 73.19% |
| Flukebook - CurvRank + CurvRank v2 + finFindR | 93.01% (top 49) | 78.43% | 92.80% (top 49) | 73.92% |

The percentage of correct matches within the top-X ranked positions and the first position for each dataset comparison test (i.e., comprehensive, ideal, and equal matchability tests for the one-to-many annotations and one-to-many names comparisons) and each algorithm evaluated (the finFindR R application, and the CurvRank, CurvRank v2, and finFindR algorithms and their combinations integrated into Flukebook).

Note that the comprehensive and ideal tests evaluated the top-50 ranked positions, while the equal matchability tests evaluated the top-49 ranked positions.

Text Correction

In the published article, there was an error in the text. In the fourth sentence of the first paragraph of the Discussion, the authors incorrectly refer to the wrong Figure panels.

A correction has been made to the Discussion, paragraph one. This sentence previously stated:

“For example, match success was over 98.92% in the top 50-ranked positions for the finFindR R application, and the CurvRank and CurvRank v2 algorithms within Flukebook in both the one-to-many annotations comparisons and the one-to-many names comparisons of Q1, Q2 and D1, D2 images (Table 1, Figures 7A, B)”.

The corrected sentence appears below:

“For example, match success was over 98.92% in the top 50-ranked positions for the finFindR R application, and the CurvRank and CurvRank v2 algorithms within Flukebook in both the one-to-many annotations comparisons and the one-to-many names comparisons of Q1, Q2 and D1, D2 images (Table 1, Figures 6A, B)”.

The authors apologize for these errors and state that this does not change the scientific conclusions of the article in any way. The original article has been updated.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.


Keywords

photographic-identification, photo-id, computer vision, finFindR, CurvRank, Flukebook, bottlenose dolphin, Tursiops truncatus

Citation

Tyson Moore RB, Urian KW, Allen JB, Cush C, Parham JR, Blount D, Holmberg J, Thompson JW and Wells RS (2022) Corrigendum: Rise of the machines: Best practices and experimental evaluation of computer-assisted dorsal fin image matching systems for bottlenose dolphins. Front. Mar. Sci. 9:998145. doi: 10.3389/fmars.2022.998145

Received

19 July 2022

Accepted

30 August 2022

Published

21 September 2022

Volume

9 - 2022

Edited and reviewed by

Lars Bejder, University of Hawaii at Manoa, United States


*Correspondence: Reny B. Tyson Moore,

This article was submitted to Marine Megafauna, a section of the journal Frontiers in Marine Science

