ORIGINAL RESEARCH article
Front. Vet. Sci.
Sec. One Health
A Systematic Audit of Transparency and Validation Disclosure in Commercial Veterinary Artificial Intelligence
Provisionally accepted
David Brundage
Department of Surgical Sciences, School of Veterinary Medicine, University of Wisconsin-Madison, Madison, United States
Objective: To systematically identify the commercial market for clinical artificial intelligence (AI) products in veterinary medicine and to audit their public documentation for transparency using a standardized, evidence-based instrument.

Methods: A cross-sectional systematic audit of commercial AI tools was conducted via a multi-channel search. Inclusion criteria required commercially available products with explicit AI claims and clinical functionality; administrative and direct-to-consumer tools were excluded. Publicly available documentation was archived and evaluated using a 25-point framework adapted from U.S. Food and Drug Administration (FDA) and Good Machine Learning Practice (GMLP) guidelines to assess data provenance, validation, safety, and usability.

Results: Seventy-one AI products available in the North American market were included, comprising Generative & Ambient (n = 47), Diagnostic Imaging (n = 19), and Specialized tools (n = 5). The mean unweighted transparency score across the cohort was 6.4%. Notably, 63.3% (n = 45) of vendors failed to disclose a single metric. Diagnostic Imaging tools achieved a higher mean risk-weighted transparency score (13.1%) than Generative & Ambient tools (1.8%). While 36.8% of imaging vendors provided peer-reviewed or internal validation evidence, only 2.1% of generative vendors did so. Only one vendor (1.4%) disclosed training data signalment (species, breed, age, sex) or subgroup performance.

Conclusions: The commercial veterinary AI market operates with systemic opacity. This audit reveals a significant 'Transparency Gap': the sophisticated clinical capabilities marketed to veterinarians far exceed the publicly available evidence required to validate them. A further divide separates maturing imaging applications from largely unvalidated generative tools. The near-universal failure to report training demographics renders independent assessment of algorithmic bias impossible.

Clinical Relevance: Veterinarians currently bear the legal and ethical burden of validating AI tools without access to the necessary performance data. The implementation of standardized transparency frameworks is urgently required to support evidence-based product selection and to prevent patient harm from unvalidated technologies.
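To make the scoring described in Methods concrete, the minimal Python sketch below shows how an unweighted and a risk-weighted transparency score could be computed from a checklist of framework items. The FrameworkItem structure, the example item names, and the weights are hypothetical illustrations for exposition only; the audit's actual instrument and weighting scheme are not reproduced here.

    from dataclasses import dataclass

    @dataclass
    class FrameworkItem:
        name: str        # checklist criterion, e.g., "training data signalment disclosed"
        weight: float    # hypothetical risk weight; the audit's real weights are not public
        disclosed: bool  # whether the vendor's public documentation satisfies the item

    def transparency_scores(items: list[FrameworkItem]) -> tuple[float, float]:
        """Return (unweighted %, risk-weighted %) for one vendor.

        Unweighted: share of checklist items the vendor discloses.
        Risk-weighted: disclosed weight as a share of total weight,
        so high-risk items dominate the score.
        """
        unweighted = 100.0 * sum(i.disclosed for i in items) / len(items)
        total_weight = sum(i.weight for i in items)
        weighted = 100.0 * sum(i.weight for i in items if i.disclosed) / total_weight
        return unweighted, weighted

    # Hypothetical vendor disclosing 2 of 3 illustrative items
    items = [
        FrameworkItem("validation evidence published", 3.0, True),
        FrameworkItem("training data signalment", 3.0, False),
        FrameworkItem("intended-use statement", 1.0, True),
    ]
    u, w = transparency_scores(items)
    print(f"unweighted={u:.1f}%, risk-weighted={w:.1f}%")  # unweighted=66.7%, risk-weighted=57.1%

Under this kind of scheme, a vendor that omits only high-weight items (such as validation evidence) scores lower on the risk-weighted measure than on the unweighted one, which is consistent with the divergence the abstract reports between the two scores.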
Keywords: Clinical Decision Support Systems, Diagnostic imaging AI, generative AI (GenAI), Good Machine Learning Practice, Veterinary Artificial Intelligence
Received: 04 Dec 2025; Accepted: 05 Feb 2026.
Copyright: © 2026 Brundage. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: David Brundage
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.