ORIGINAL RESEARCH article
Front. Artif. Intell.
Sec. Machine Learning and Artificial Intelligence
Volume 8 - 2025 | doi: 10.3389/frai.2025.1621025
This article is part of the Research Topic: Advances and Challenges in AI-Driven Visual Intelligence: Bridging Theory and Practice
Research on the robustness of the open-world test-time training model
Provisionally accepted. Chongqing Normal University, Chongqing, China
Generalizing deep learning models to unknown target domain distributions with low latency has motivated research into test-time training/adaptation (TTT/TTA). Applying TTT/TTA in an open-world environment, however, is challenging, mainly because it is difficult to distinguish strong out-of-distribution (OOD) samples from regular weak OOD samples. In response to this challenge, several open-world test-time training (OWTTT) approaches have emerged. Despite their strong performance, these methods remain exposed to test-time poisoning attacks, which differ substantially from conventional poisoning attacks mounted during the training of ML models (i.e., the adversary cannot intervene in the training process). In this paper, we design a test-time poisoning method and evaluate it on OWTTT models. Specifically, because the model's gradients change as it adapts during testing, we design a single-step query-based data poisoning method that dynamically updates perturbations and feeds them into the OWTTT model. The experimental results show that the OWTTT method is vulnerable to such attacks. Our results demonstrate that OWTTT algorithms lacking a rigorous security assessment are unsuitable for deployment in real-life scenarios. We therefore advocate integrating defenses against test-time poisoning attacks into the design of open-world test-time training methods.
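To make the attack setting concrete, the following is a minimal sketch of a single-step, query-based test-time poisoning loop. It is not the authors' exact method: it assumes the attacker can query the deployed OWTTT model's logits at each test step, that gradients with respect to the input are available through that query, and that the poisoning objective is prediction-entropy maximization; the names owttt_model, test_stream, and adapt are hypothetical placeholders.

```python
# Minimal sketch of a single-step, query-based test-time poisoning attack.
# Assumptions (not from the paper): the attacker can query the adapting
# model's logits, input gradients are accessible, and the poisoning
# objective is entropy maximization. `owttt_model` / `test_stream` are
# hypothetical placeholders.
import torch
import torch.nn.functional as F

def single_step_poison(model, x, epsilon=8 / 255):
    """Craft one FGSM-style perturbation against the model's current state."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)                      # query the (adapting) model
    probs = F.softmax(logits, dim=1)
    # Push predictions toward maximum uncertainty so that the model's
    # test-time updates are driven by misleading statistics.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    entropy.backward()
    # A single gradient-sign step, recomputed at every query because the
    # model's parameters (and hence its gradients) drift as it adapts.
    x_adv = x + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Hypothetical usage: interleave poisoned batches into the test stream.
# for x, _ in test_stream:
#     x_poison = single_step_poison(owttt_model, x)
#     owttt_model.adapt(x_poison)   # the OWTTT update consumes poisoned data
```

Because the perturbation is recomputed from a fresh query at each step, the attack tracks the model as it adapts, which is the property the single-step query design exploits.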
Keywords: adversarial attacks, test-time poisoning, robustness, open-world learning, test-time training/adaptation
Received: 30 Apr 2025; Accepted: 15 Jul 2025.
Copyright: © 2025 Pi and Pi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Shu Pi, Chongqing Normal University, Chongqing, China
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.