ORIGINAL RESEARCH article

Front. Virtual Real.

Sec. Augmented Reality

Volume 6 - 2025 | doi: 10.3389/frvir.2025.1652074

This article is part of the Research Topic: Enabling the Medical Extended Reality ecosystem - Advancements in Technology, Applications and Regulatory Science.

A critical appraisal of computer vision in orthodontics

Provisionally accepted
Elie W. Amm1, Melih Motro1, Marc Fisher2, Jeffery Potts3, Christian El Amm4*, Suhair Maqusi4
  • 1Boston University, Boston, United States
  • 2Stanford University, Stanford, United States
  • 3The University of Oklahoma, Norman, United States
  • 4The University of Oklahoma Health Sciences, Oklahoma City, United States

The final, formatted version of the article will be published soon.

Objective: To evaluate the precision of a computer vision (CV) and augmented reality (AR) pipeline for orthodontic applications, specifically in direct bonding and temporary anchorage device (TAD) placement, by quantifying system accuracy in six degrees of freedom (6DOF) pose estimation.

Methods: A custom keypoint detection model (YOLOv8n-pose) was trained using over 1.5 million synthetic images and a supplemental manually annotated dataset. Thirty anatomical landmarks were defined across maxillary and mandibular arches to maximize geometric reliability and visual detectability. The system was deployed on a Microsoft HoloLens 2 headset and tested using a fixed typodont setup at 55 cm. Pose estimation was performed in "camera space" using Perspective-n-Point (PnP) methods and transformed into "world space" via AR spatial tracking. Thirty-four poses were collected and analyzed. Errors in planar and depth estimation were modeled and experimentally measured.

Results: Rotational precision remained below 1°, and planar pose precision was sub-millimetric (X: 0.46 mm, Y: 0.30 mm), except for depth (Z), which showed a standard deviation of 5.01 mm. These findings aligned with theoretical predictions based on stereo vision and time-of-flight sensor limitations. Integration of headset and object pose led to increased Y-axis variability, possibly due to compounded spatial tracking error. Sub-pixel accuracy of keypoint detection was achieved, confirming high performance of the trained detector.

Conclusion: The proposed CV-AR system demonstrated high precision in planar pose estimation, enabling potential use in clinical orthodontics for tasks such as TAD placement and bracket positioning. Depth estimation remains the primary limitation, suggesting the need for sensor fusion or multi-angle views. The system supports real-time deployment on mobile platforms and serves as a foundational tool for further clinical validation and AR-guided procedures in dentistry.
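The Methods describe estimating the typodont pose in camera space with PnP and then transforming it into world space using the headset's spatial tracking. As an illustration only (not the authors' implementation), that second step can be sketched as a composition of rigid transforms; all function names and values here are hypothetical:

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def object_pose_in_world(T_world_cam, R_cam_obj, t_cam_obj):
    """Compose the headset camera's world pose (from AR spatial tracking)
    with the PnP result (object pose in the camera frame)."""
    return T_world_cam @ pose_to_matrix(R_cam_obj, t_cam_obj)

# Toy example: headset at the world origin looking down +Z,
# typodont detected 0.55 m in front of the camera (the 55 cm test distance).
T_world_cam = np.eye(4)
R_cam_obj = np.eye(3)
t_cam_obj = np.array([0.0, 0.0, 0.55])

T_world_obj = object_pose_in_world(T_world_cam, R_cam_obj, t_cam_obj)
print(T_world_obj[:3, 3])  # object position in world space
```

Under this toy headset pose, the world-space position simply equals the camera-space translation; with a real tracked headset pose, errors in both transforms compound, which is consistent with the increased Y-axis variability the abstract reports after integrating headset and object pose.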

Keywords: augmented reality, Extended Reality (VR/AR/MR), Computer Vision, Orthodontics, temporary anchorage device (TAD), Pose estimation

Received: 23 Jun 2025; Accepted: 29 Aug 2025.

Copyright: © 2025 Amm, Motro, Fisher, Potts, El Amm and Maqusi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Christian El Amm, The University of Oklahoma Health Sciences, Oklahoma City, United States

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.