SYSTEMATIC REVIEW article

Front. Artif. Intell.

Sec. Medicine and Public Health

Volume 8 - 2025 | doi: 10.3389/frai.2025.1595291

This article is part of the Research Topic: Artificial Intelligence-based Multimodal Imaging and Multi-omics in Medical Research.

Visible neural networks for multi-omics integration: a critical review

Provisionally accepted
David Selby1*, Rashika Jakhmola1,2, Maximilian Sprang3, Gerrit Großmann1, Hind Raki1,4, Niloofar Maani1,5, Daria Pavliuk1,5, Jan Ewald6, Sebastian Vollmer7*
  • 1German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
  • 2Technical University of Munich, Munich, Bavaria, Germany
  • 3University Medical Centre, Johannes Gutenberg University Mainz, Mainz, Rhineland-Palatinate, Germany
  • 4Mohammed VI Polytechnic University, Ben Guerir, Morocco
  • 5Technical University of Kaiserslautern, Kaiserslautern, Rhineland-Palatinate, Germany
  • 6ScaDS.AI Leipzig, Leipzig, Saxony, Germany
  • 7University of Kaiserslautern-Landau (RPTU), Kaiserslautern, Germany

The final, formatted version of the article will be published soon.

Biomarker discovery and drug response prediction are central to personalized medicine, driving demand for predictive models that also offer biological insights. Biologically informed neural networks (BINNs), also known as visible neural networks (VNNs), have recently emerged as an approach to this goal. BINNs or VNNs are neural networks whose inter-layer connections are constrained according to prior knowledge from gene ontologies or pathway databases. These sparse models enhance interpretability by embedding prior knowledge into their architecture, ideally reducing the space of learnable functions to those that are biologically meaningful. In this systematic review, the first of its kind, we identify 86 recent papers implementing such models and highlight key trends in architectural design decisions, data sources and evaluation methods. Growth in the popularity of the approach appears to be hampered by a lack of standardized terminology, tools and benchmarks.
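To illustrate the general idea of constraining inter-layer connections with prior knowledge (a minimal sketch, not any specific architecture surveyed in the review), the following PyTorch snippet masks the weights of a linear layer with a fixed gene-to-pathway connectivity matrix; the MaskedLinear class, the mask values and the pathway assignments are hypothetical examples, not taken from the reviewed papers.

# Illustrative sketch: a linear layer whose weights are element-wise masked
# by a fixed binary connectivity matrix (1 = biologically plausible edge).
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, mask: torch.Tensor):
        super().__init__()
        out_features, in_features = mask.shape
        self.linear = nn.Linear(in_features, out_features)
        # Register the mask as a buffer so it is saved with the model
        # but never updated by the optimizer.
        self.register_buffer("mask", mask.float())

    def forward(self, x):
        # Zero out connections that are not supported by prior knowledge.
        return nn.functional.linear(x, self.linear.weight * self.mask,
                                    self.linear.bias)

# Toy example: 4 genes feeding 2 pathways, with edges taken from a
# hypothetical ontology annotation.
mask = torch.tensor([[1, 1, 0, 0],   # pathway A <- gene 1, gene 2
                     [0, 0, 1, 1]])  # pathway B <- gene 3, gene 4
layer = MaskedLinear(mask)
genes = torch.randn(8, 4)            # batch of 8 samples
pathway_activity = layer(genes)      # shape: (8, 2)

Stacking such masked layers (genes to pathways, pathways to higher-level biological processes) yields the sparse, interpretable architectures that the review groups under the BINN/VNN umbrella.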

Keywords: multi-omics integration, deep learning, explainable AI, machine learning, interpretable models, gene regulatory networks, pathways, neural networks

Received: 17 Mar 2025; Accepted: 26 May 2025.

Copyright: © 2025 Selby, Jakhmola, Sprang, Großmann, Raki, Maani, Pavliuk, Ewald and Vollmer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
David Selby, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
Sebastian Vollmer, University of Kaiserslautern-Landau (RPTU), Kaiserslautern, Germany

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.