ORIGINAL RESEARCH article
Front. Comput. Neurosci.
Volume 19 - 2025 | doi: 10.3389/fncom.2025.1628115
This article is part of the Research Topic "Towards Sustainable AI: Energy and Data Efficiency in Biological and Artificial Intelligence".
AN AI METHODOLOGY TO REDUCE TRAINING INTENSITY, ERROR RATES, AND SIZE OF NEURAL NETWORKS - Thaddeus J. Kobylarz, Ph.D.
Provisionally accepted. Nokia Bell Laboratories, Murray Hill, United States
Abstract - Massive computing systems are required to train neural networks. The prodigious amount of energy consumed makes the creation of AI applications a significant source of pollution. Despite the enormous training effort, neural network error rates limit their use in medical applications, because errors can lead to intolerable morbidity and mortality. Two factors contribute to the excessive training requirements and high error rates: an iterative reinforcement process (tuning) that does not guarantee convergence, and the deployment of neuron models capable of realizing only linearly separable switching functions. Tuning procedures require tens of thousands of training iterations. In addition, linearly separable neuron models have severely limited capability, which leads to large neural networks. For seven inputs, the ratio of total possible switching functions to linearly separable switching functions is approximately 41 octillion. Addressed here is the creation of neuron models for the application of disease diagnosis. Algorithms are described that perform direct neuron creation. This results in far fewer training steps than current AI systems require. The design algorithms result in neurons that do not manufacture errors (hallucinations). The algorithms utilize a template to create neuron models capable of performing any type of switching function. The algorithms show that a neuron model capable of performing both linearly and nonlinearly separable switching functions is vastly superior to the neuron models currently in use. Included examples illustrate use of the template for determining disease diagnoses (outputs) from symptoms (inputs). The examples show convergence with a single training iteration.
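The abstract's combinatorial claim can be checked directly, and its template idea can be illustrated with a toy model. The Python sketch below assumes the standard count of linearly separable (threshold) functions of seven Boolean variables, 8,378,070,864 (OEIS A000609), and compares it against all 2^(2^7) = 2^128 switching functions of seven inputs; the quotient is roughly 4.1 x 10^28, i.e. about 41 octillion. The LookupNeuron class is a hypothetical illustration of a table-driven neuron, not the article's actual template design.

    # Sanity check of the "41 octillion" ratio quoted in the abstract, plus a
    # toy neuron that handles non-linearly separable functions.
    # Assumptions (not from the article): the threshold-function count
    # 8,378,070,864 is the standard value for 7 Boolean variables (OEIS
    # A000609); LookupNeuron is a hypothetical sketch of a table-driven model.

    N = 7
    total = 2 ** (2 ** N)        # all switching functions of N inputs: 2^128
    separable = 8_378_070_864    # threshold functions of 7 variables
    print(f"ratio = {total / separable:.2e}")  # ~4.06e+28, about 41 octillion

    class LookupNeuron:
        """Table-driven neuron: stores the desired output for every input
        pattern, so it realizes any switching function, linearly separable
        or not, after one pass over its truth table."""

        def __init__(self):
            self.table = {}

        def train(self, samples):
            # Single "training iteration": record each (inputs, output) pair.
            for inputs, output in samples:
                self.table[tuple(inputs)] = output

        def __call__(self, inputs):
            return self.table[tuple(inputs)]

    # XOR is the classic non-linearly separable function: no single
    # threshold neuron can realize it, but the lookup model can.
    xor = LookupNeuron()
    xor.train([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)])
    assert [xor(p) for p in ((0, 0), (0, 1), (1, 0), (1, 1))] == [0, 1, 1, 0]
    print("XOR realized after a single training pass")

This toy version trades memory (up to 2^n table entries) for single-pass convergence; the article's template algorithms are presumably more economical, but the example shows why escaping the linear-separability restriction enables single-iteration training.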
Keywords: Non-linearly separable neurons, Far less training, Much smaller neural networks, Far less power for training, No network hallucinations
Received: 13 May 2025; Accepted: 01 Sep 2025.
Copyright: © 2025 Kobylarz. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Thaddeus Kobylarz, Nokia Bell Laboratories, Murray Hill, United States
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.