REVIEW article

Front. Artif. Intell.

Sec. Machine Learning and Artificial Intelligence

This article is part of the Research Topic: Causal AI: Integrating Causality and Machine Learning for Robust Intelligent Systems

The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence

Provisionally accepted
Hector Zenil1*, Jesper Nils Tegner2*, Felipe S. Abrahão3, Alexander Lavin4, Vipin Kumar5, Jeremy Graham Frey6, Adrian Weller7, Larisa Soldatova8, Alan R. Bundy9, Nicholas R. Jennings10, Takashi Ikegami11, Lawrence Hunter12, Sašo Džeroski13, Andrew Briggs14, Frederick D. Gregory15, Carla P. Gomes16, Jon Rowe17, James Allen Evans18,19, Hiroaki Kitano20, Ross King21
  • 1Computational Medicine, Karolinska Institutet (KI), Solna, Sweden
  • 2King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
  • 3Universidade Estadual de Campinas, Campinas, Brazil
  • 4Simulation Institute, New York, United States
  • 5University of Minnesota Twin Cities, Minneapolis, United States
  • 6University of Southampton, Southampton, United Kingdom
  • 7University of Cambridge Department of Engineering, Cambridge, United Kingdom
  • 8Goldsmiths University of London, London, United Kingdom
  • 9The University of Edinburgh, Edinburgh, United Kingdom
  • 10Loughborough University, Loughborough, United Kingdom
  • 11Kabushiki Kaisha Riken, Kumagaya, Japan
  • 12University of Colorado Anschutz Medical Campus School of Medicine, Aurora, United States
  • 13Institut Jozef Stefan, Ljubljana, Slovenia
  • 14University of Oxford, Oxford, United Kingdom
  • 15US Army Combat Capabilities Development Command Army Research Laboratory, Adelphi, United States
  • 16Cornell University, Ithaca, United States
  • 17University of Birmingham, Birmingham, United Kingdom
  • 18The University of Chicago Chicago Center for Contemporary Theory, Chicago, United States
  • 19The University of Chicago Department of Sociology, Chicago, United States
  • 20The Systems Biology Institute, Tokyo, Japan
  • 21University of Cambridge Department of Chemical Engineering and Biotechnology, Cambridge, United Kingdom

The final, formatted version of the article will be published soon.

Artificial intelligence is approaching the point at which it can complete the scientific cycle, from hypothesis generation to experimental design and validation, within a closed loop that requires little human intervention. Yet the loop is not fully autonomous: humans still curate data, set hyperparameters, adjudicate interpretability, and decide what counts as a satisfactory explanation. As models scale, they begin to explore regions of hypothesis and solution space that are inaccessible to human reasoning because they are too intricate or alien to our intuitions. Scientists may soon rely on AI strategies they do not fully understand, trusting goals and empirical pay-offs rather than derivations. This prospect forces a choice about how much control to relinquish in order to accelerate discovery while keeping outputs human-relevant. The answer cannot be a blanket policy to deploy LLMs or any single paradigm everywhere. It demands principled matching of methods to domains, hybrid causal and neurosymbolic scaffolds around generative models, and governance that preserves plurality and counters recursive bias. Otherwise, recursive training and uncritical reuse risk model collapse in AI and an epistemic collapse in science, as statistical inertia amplifies flaws and narrows the scope of investigation. We argue for graded autonomy in AI-conducted science: systems that can close the loop at machine speed while remaining anchored to human priorities, verifiable mechanisms, and domain-appropriate forms of understanding.
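To make the closed-loop idea concrete, the following is a minimal, self-contained Python sketch of the cycle the abstract describes: hypothesis generation, experimental design, execution, and validation, with a periodic human checkpoint implementing graded autonomy. This is an illustration under toy assumptions, not the authors' system; every name here (propose_hypothesis, run_experiment, human_review, review_every, and the linear-coefficient "experiment") is a hypothetical placeholder.

```python
import random

# Toy closed-loop discovery sketch (illustrative only, not the authors' system).
# Task: recover an unknown linear coefficient from noisy experiments, pausing
# for a human checkpoint every few iterations (graded autonomy).

TRUE_COEFF = 2.5   # ground truth, hidden from the machine "scientist"
NOISE = 0.3        # experimental noise level

def propose_hypothesis(estimate):
    """Hypothesis generation: candidate coefficient near the current best."""
    return estimate + random.gauss(0.0, 0.5)

def run_experiment(x):
    """Simulated experiment: noisy measurement of TRUE_COEFF * x."""
    return TRUE_COEFF * x + random.gauss(0.0, NOISE)

def score(hypothesis, x, observation):
    """Validation: squared prediction error; lower is better."""
    return (hypothesis * x - observation) ** 2

def human_review(step, estimate, error):
    """Stand-in for the human checkpoint: inspect state, could veto or redirect."""
    print(f"[review @ step {step}] estimate={estimate:.3f} error={error:.4f}")

def closed_loop(iterations=30, review_every=10):
    estimate, best_error = 0.0, float("inf")
    for step in range(1, iterations + 1):
        hyp = propose_hypothesis(estimate)   # hypothesis generation
        x = random.uniform(1.0, 2.0)         # experimental design
        obs = run_experiment(x)              # execution
        err = score(hyp, x, obs)             # validation
        if err < best_error:                 # keep the better model
            estimate, best_error = hyp, err
        if step % review_every == 0:         # graded-autonomy gate
            human_review(step, estimate, best_error)
    return estimate

if __name__ == "__main__":
    random.seed(0)
    print(f"final estimate: {closed_loop():.3f} (true value {TRUE_COEFF})")
```

In this sketch, review_every is the knob for graded autonomy: lowering it brings the human into the loop more often, while raising it lets the system run longer at machine speed between checkpoints.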

Keywords: AI4Science, AI-conducted science, closed-loop discovery, cognitive collapse, domain–method alignment, epistemic singularity, graded autonomy, human–machine collaboration, hypothesis generation, interpretability, model collapse, neurosymbolic scaffolds, time complexity of AI + human

Received: 02 Aug 2025; Accepted: 26 Jan 2026.

Copyright: © 2026 Zenil, Tegner, Abrahão, Lavin, Kumar, Frey, Weller, Soldatova, Bundy, Jennings, Ikegami, Hunter, Džeroski, Briggs, Gregory, Gomes, Rowe, Evans, Kitano and King. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Hector Zenil
Jesper Nils Tegner

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.