
ORIGINAL RESEARCH article

Front. Artif. Intell.

Sec. Machine Learning and Artificial Intelligence

Volume 8 - 2025 | doi: 10.3389/frai.2025.1661444

This article is part of the Research Topic: Robust and Secure AI Systems for Learning from Heterogeneous Data

Functional partitioning through competitive learning

Provisionally accepted
Marius Tacke1*, Matthias Busch2, Kevin Linka2, Christian Cyron1, Roland Aydin1*
  • 1Helmholtz-Zentrum Hereon, Geesthacht, Germany
  • 2Technische Universität Hamburg, Hamburg, Germany

The final, formatted version of the article will be published soon.

Datasets often incorporate various functional patterns related to different aspects or regimes, which are typically not equally present throughout the dataset. We propose a novel partitioning algorithm that utilizes competition between models to detect and separate these functional patterns. This competition is induced by multiple models iteratively submitting their predictions for the dataset, with the best prediction for each data point being rewarded with training on that data point. This reward mechanism amplifies each model’s strengths and encourages specialization in different patterns. The specializations can then be translated into a partitioning scheme. The amplification of each model’s strengths inverts the active learning paradigm: while active learning typically focuses the training of models on their weaknesses to minimize the number of required training data points, our concept reinforces the strengths of each model, thus specializing them. We validate our concept on datasets with clearly distinct functional patterns, such as mechanical stress and strain data in a porous structure. Our partitioning algorithm produces valuable insights into the datasets’ structure, which can serve various further applications. As a demonstration of one exemplary usage, we set up modular models consisting of multiple expert models, each learning a single partition, and compare their performance on more than twenty popular regression problems with single models learning all partitions simultaneously. Our results show significant improvements, with up to 56% loss reduction, confirming our algorithm’s utility.
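The competitive loop described in the abstract (models predict, the best prediction per data point wins, and the winner is rewarded with a training step on that point) can be illustrated with a minimal sketch. This is not the authors' implementation: the toy dataset with two overlapping linear patterns, the linear models, and the asymmetric initialization are all assumptions chosen for a small, self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset mixing two functional patterns at every x (illustrative assumption):
# pattern 0: y = 2x + 1, pattern 1: y = -2x + 1, plus a little noise.
n = 400
x = rng.uniform(-1, 1, n)
z = rng.integers(0, 2, n)                       # hidden pattern label
y = np.where(z == 0, 2 * x + 1, -2 * x + 1) + 0.1 * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])            # features [1, x]

# Two competing linear models; asymmetric init breaks the initial tie.
weights = [np.array([0.0, 1.0]), np.array([0.0, -1.0])]
lr = 0.5

for _ in range(200):
    preds = np.stack([X @ w for w in weights])  # shape (models, points)
    errors = (preds - y) ** 2
    winners = errors.argmin(axis=0)             # best model per data point
    for m in range(len(weights)):
        won = winners == m
        if won.any():
            # Reward: the winner takes one gradient step on the points it won,
            # amplifying its strengths and driving specialization.
            residual = X[won] @ weights[m] - y[won]
            weights[m] -= lr * 2 * X[won].T @ residual / won.sum()

partition = winners                             # specialization -> partitioning
winner_mse = errors[winners, np.arange(n)].mean()
```

After training, each model has specialized in one of the two patterns, and the per-point winner assignment serves as the recovered partitioning; ambiguity remains only near x = 0, where the two patterns intersect.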

Keywords: partitioning, clustering, unsupervised learning, machine learning, active learning

Received: 07 Jul 2025; Accepted: 16 Oct 2025.

Copyright: © 2025 Tacke, Busch, Linka, Cyron and Aydin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence:
Marius Tacke, marius.tacke@hereon.de
Roland Aydin, roland.aydin@hereon.de

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.