AUTHOR=Chu Shiyong , Chen Yuwei TITLE=Weaponizing cognitive bias in autonomous systems: a framework for black-box inference attacks JOURNAL=Frontiers in Artificial Intelligence VOLUME=Volume 8 - 2025 YEAR=2025 URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1623573 DOI=10.3389/frai.2025.1623573 ISSN=2624-8212 ABSTRACT=Autonomous systems operating in high-dimensional environments increasingly rely on prioritization heuristics to allocate attention and assess risk, yet these mechanisms can introduce cognitive biases such as salience, spatial framing, and temporal familiarity that influence decision-making without altering the input or accessing internal states. This study presents Priority Inversion via Operational Reasoning (PRIOR), a black-box, non-perturbative diagnostic framework that employs structurally biased but semantically neutral scenario cues to probe inference-level vulnerabilities without modifying pixel-level, statistical, or surface semantic properties. Given the limited accessibility of embodied vision-based systems, we evaluate PRIOR using large language models (LLMs) as abstract reasoning proxies to simulate cognitive prioritization in constrained textual surveillance scenarios inspired by Unmanned Aerial Vehicle (UAV) operations. Controlled experiments demonstrate that minimal structural cues can consistently induce priority inversions across multiple models, and joint analysis of model justifications and confidence estimates reveals systematic distortions in inferred threat relevance even when inputs are symmetrical. These findings expose the fragility of inference-level reasoning in black-box systems and motivate the development of evaluation strategies that extend beyond output correctness to interrogate internal prioritization logic, with implications for dynamic, embodied, and visually grounded agents in real-world deployments.