AUTHOR=Tumpa Tasmia Rahman, Gregor Jens, Acuff Shelley N., Osborne Dustin R.
TITLE=Deep learning based registration for head motion correction in positron emission tomography as a strategy for improved image quantification
JOURNAL=Frontiers in Physics
VOLUME=11
YEAR=2023
URL=https://www.frontiersin.org/journals/physics/articles/10.3389/fphy.2023.1123315
DOI=10.3389/fphy.2023.1123315
ISSN=2296-424X
ABSTRACT=Objectives: Positron emission tomography (PET) is affected by various kinds of patient movement. Frame-by-frame image registration is one of the most widely practiced motion correction techniques. Deep learning has shown a remarkable ability to register images quickly and accurately once trained. This paper studies the feasibility of using a deep learning framework to correct 3D PET image volumes for head motion in routine PET imaging in order to improve quantification.
Materials & Methods: A neural network was trained on 3D PET image volumes in an unsupervised manner to predict the transformation parameters required to perform image registration. A multi-step convolutional neural network (CNN) was combined with a spatial transform layer, and pairs of target and source images were used as input to the network. A single image volume was reconstructed for each static frame; the image reconstructed from the first static frame served as the target image for registration. The transformation parameters predicted by the CNN could be used not only for frame-by-frame image-based motion correction but also for raw listmode PET data correction via individual line-of-response repositioning. Performance and quantitative results of the deep learning technique were compared against standard registration tools using regions of interest and Dice indices.
Results: Application of the algorithm to clinical data showed good performance with respect to both registration accuracy and processing time.
The neural network yielded a mean Dice index of ~0.87, comparable to the ANTs algorithm, while running ~3x faster on a multi-core CPU and ~20x faster on a GPU. SUV analysis showed that quantitative results were 30-60% higher in the motion-corrected images, with the neural network performing better than, or close to, ANTs.
Conclusion: The aim of this work was to study the quantitative impact of a data-driven deep learning motion correction technique for PET data and to assess its performance. The results showed that the technique is capable of producing high-quality registrations that compensate for patient motion occurring during a scan and improve quantitative accuracy.
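The abstract describes a CNN that predicts rigid transformation parameters, which are then applied either to reconstructed image frames or directly to listmode line-of-response (LOR) endpoints, with Dice indices used to score registration quality. The two downstream steps can be sketched in NumPy as follows; the six-parameter layout, function names, and units are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rigid_matrix(params):
    """Build a 4x4 rigid transform from 6 parameters:
    rotations (rx, ry, rz, radians) and translations (tx, ty, tz, mm).
    This parameter layout is an assumption for illustration; the paper's
    network predicts an equivalent set of rigid parameters."""
    rx, ry, rz, tx, ty, tz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx          # combined rotation
    M[:3, 3] = [tx, ty, tz]           # translation
    return M

def reposition_lors(endpoints, params):
    """Apply a predicted rigid motion to listmode LOR endpoints
    (an N x 2 x 3 array of mm coordinates), as in the abstract's
    line-of-response repositioning step."""
    M = rigid_matrix(params)
    pts = endpoints.reshape(-1, 3)
    pts_h = np.c_[pts, np.ones(len(pts))]     # homogeneous coordinates
    return (pts_h @ M.T)[:, :3].reshape(endpoints.shape)

def dice_index(mask_a, mask_b):
    """Dice overlap between two binary masks, the metric used in the
    paper to compare registration quality."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```

In this sketch the CNN's role is reduced to supplying `params`; in the paper those parameters come from the multi-step CNN with a spatial transform layer, trained unsupervised on target/source image pairs.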