AUTHOR=Robinson, W. Douglas; Hallman, Tyler A.; Hutchinson, Rebecca A.
TITLE=Benchmark Bird Surveys Help Quantify Counting Accuracy in a Citizen-Science Database
JOURNAL=Frontiers in Ecology and Evolution
VOLUME=9
YEAR=2021
URL=https://www.frontiersin.org/journals/ecology-and-evolution/articles/10.3389/fevo.2021.568278
DOI=10.3389/fevo.2021.568278
ISSN=2296-701X
ABSTRACT=The growth of biodiversity data sets generated by citizen scientists continues to accelerate. Yet error, bias, and noise remain serious concerns for analysts. Counts of birds contributed to eBird, the world's largest online biodiversity database, present a potentially useful resource for tracking trends in species' abundances over time and space. We quantified counting errors in a sample of 1406 eBird checklists by comparing numbers contributed by birders (N=246) who visited a popular birding location in Oregon, USA, with numbers generated by a professional ornithologist engaged in a long-term study creating benchmark (reference) measurements of daily waterbird counts. We focused on waterbirds, which are easily visible at this site. We evaluated potential predictors of count differences, including characteristics of the contributed checklists, of each species, and of time of day and year. Count differences were biased toward undercounts, with 76% of counts falling below the daily benchmark value. When only checklists that actually reported a species known to be present were included, median count errors were -29.1% (range: 0 to -42.8%; N=20 species). Model sets revealed an important influence of each species' reference count, which varied seasonally as waterbird numbers fluctuated, and of the percentage of species known to be present each day that was included on each checklist. That is, checklists indicating a more thorough survey of the species richness at the site also had, on average, lower counting errors. However, even on checklists with the most thorough species lists, counts were biased low and exceptionally variable in their accuracy. To improve the utility of such bird count data, we suggest three strategies: (1) assess additional options for analytically determining how to select checklists that have the highest probability of including less biased count data, and explore options for correcting bias during the analysis stage; (2) add options for users to provide additional information in checklists, such as tagging checklists where they focused on obtaining accurate counts; and (3) explore opportunities to effectively calibrate citizen-science bird count data by establishing a formalized network of marquee sites where dedicated observers regularly contribute carefully collected benchmark data.
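NOTE: The abstract reports count errors as signed percent differences from the daily benchmark count (negative values indicate undercounts) and summarizes them per species by the median. The short Python sketch below illustrates that calculation under those stated assumptions; the function name, the example checklist counts, and the benchmark value of 140 are hypothetical and are not the authors' code or data.

    # Illustrative sketch, not from the paper: signed percent count error of a
    # checklist count relative to a daily benchmark (reference) count.
    from statistics import median

    def percent_count_error(checklist_count: int, benchmark_count: int) -> float:
        """Percent difference from the benchmark; negative means undercount."""
        if benchmark_count <= 0:
            raise ValueError("benchmark count must be positive")
        return 100.0 * (checklist_count - benchmark_count) / benchmark_count

    # Hypothetical example: three checklists reporting a species whose
    # benchmark count for the day is 140 individuals.
    errors = [percent_count_error(c, 140) for c in (90, 110, 150)]
    print([round(e, 1) for e in errors])  # [-35.7, -21.4, 7.1]
    print(round(median(errors), 1))       # -21.4, the median error across checklists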