AUTHOR=Liu Zhenzhen, Zhou Jin Peng, Weinberger Kilian Q.
TITLE=Leveraging diffusion models for unsupervised out-of-distribution detection on image manifold
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=7
YEAR=2024
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1255566
DOI=10.3389/frai.2024.1255566
ISSN=2624-8212
ABSTRACT=Most machine learning models expect that the data distributions at training time and test time are identical. If this condition is not met, algorithms can exhibit unexpected behaviors. This motivates the task of out-of-distribution detection. In the image domain, one hypothesis is that images lie on manifolds characterized by latent properties such as color, position, and shape. We can leverage this assumption to test whether a data point belongs to the training manifold. Recent advances in generative modeling show that diffusion models have a strong ability to learn a mapping onto the image manifold corresponding to a training dataset. Diffusion models involve a forward process that corrupts an image by iteratively adding noise, and learn the reverse manifold mapping of iteratively removing noise. The latter gives them the capability to generate new plausible images from noise, or to reconstruct images after corruption. We propose the use of pretrained diffusion models to identify images that are not from the training distribution. Concretely, we corrupt and denoise an image, which the diffusion model fails to do successfully if the image is out-of-distribution. We show through extensive experiments that our method achieves consistent and strong performance on a variety of image datasets.
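The corrupt-and-denoise procedure described in the abstract can be sketched with a pretrained denoising diffusion model. The Python sketch below uses the HuggingFace diffusers library; the checkpoint google/ddpm-cifar10-32, the corruption timestep t_corrupt=400, and the pixel-wise L2 reconstruction score are illustrative assumptions, not the authors' released implementation, whose exact corruption scheme and distance metric may differ.

import torch
from diffusers import DDPMPipeline

# Illustrative pretrained model; in practice, use a DDPM trained on the
# in-distribution (training) dataset.
device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DDPMPipeline.from_pretrained("google/ddpm-cifar10-32").to(device)
unet, scheduler = pipe.unet, pipe.scheduler
scheduler.set_timesteps(scheduler.config.num_train_timesteps)

@torch.no_grad()
def ood_score(x: torch.Tensor, t_corrupt: int = 400) -> float:
    """Corrupt x to timestep t_corrupt, denoise back, return reconstruction error.

    x: a (1, 3, 32, 32) image tensor on `device`, scaled to [-1, 1].
    t_corrupt: an assumed corruption level; more noise makes reconstruction harder.
    """
    # Forward process: jump directly to the noised sample at timestep t_corrupt.
    noise = torch.randn_like(x)
    t = torch.tensor([t_corrupt], device=x.device)
    x_t = scheduler.add_noise(x, noise, t)
    # Reverse process: iteratively denoise from t_corrupt back to timestep 0.
    for t_step in scheduler.timesteps:
        if t_step > t_corrupt:
            continue
        eps = unet(x_t, t_step).sample
        x_t = scheduler.step(eps, t_step, x_t).prev_sample
    # The model maps in-distribution inputs back near their originals; an
    # out-of-distribution input is pulled elsewhere on the learned manifold,
    # so its reconstruction error should be large.
    return torch.mean((x_t - x) ** 2).item()

A detector then thresholds or ranks test images by this score. The choice of distance metric and corruption level is a key design decision, and the pixel-wise L2 used here is only a stand-in.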