<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Artificial Intelligence | Machine Learning and Artificial Intelligence section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/artificial-intelligence/sections/machine-learning-and-artificial-intelligence</link>
        <description>RSS Feed for Machine Learning and Artificial Intelligence section in the Frontiers in Artificial Intelligence journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Wed, 15 Apr 2026 14:14:27 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1816684</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1816684</link>
        <title><![CDATA[Editorial: Advancing AI-driven code generation and synthesis: challenges, metrics, and ethical implications]]></title>
        <pubDate>Wed, 15 Apr 2026 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Sumeet Kaur Sehra</author><author>Sukhjit Singh Sehra</author><author>David S. Allison</author><author>Jaiteg Singh</author>
        <description></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1751148</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1751148</link>
        <title><![CDATA[CompoundDenseNet: a novel approach for accurate recognition of Bangla handwritten compound characters]]></title>
        <pubDate>Wed, 15 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Nazia Alfaz</author><author>Talha Bin Sarwar</author><author>Fahmid Al Farid</author><author>Md Saef Ullah Miah</author><author>Sadia Afrin</author><author>Shakila Rahman</author><author>Jia Uddin</author><author>Hezerul bin Abdul Karim</author>
        <description><![CDATA[Bangla, one of the most widely spoken languages in the world, presents major challenges in handwritten character recognition because of its complex compound characters with intricate shapes, diverse writing styles, and structural similarities. These features make Bangla a representative example of complex scripts that remain difficult for conventional Optical Character Recognition (OCR) systems. This study focuses on improving the recognition of Bangla handwritten compound characters using a modified DenseNet architecture named CompoundDenseNet. The architecture enhances feature extraction and reuse to better capture the visual variations and fine structural details that existing models often struggle to handle. Its performance was evaluated on three benchmark datasets (BanglaLekha Isolated, Ekush, and CMATERdb), achieving recognition accuracies of 98.5%, 98%, and 96.2%, respectively, surpassing previously reported methods. Misclassification analysis using a confusion matrix revealed that the Adam optimizer produced the most stable and accurate results, with faster convergence than the other optimizers tested. While the results demonstrate significant progress, the study also highlights the need for larger and more diverse datasets. Overall, CompoundDenseNet advances Bangla handwritten compound character recognition and has the potential to enhance real-world applications such as education, legal documentation, and digital accessibility in Bangla language technologies.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1800302</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1800302</link>
        <title><![CDATA[Delegated agency and moral responsibility in artificial intelligence]]></title>
        <pubDate>Wed, 15 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Petar Radanliev</author>
        <description><![CDATA[Introduction: Artificial intelligence ethics is often framed as a response to unprecedented technical autonomy, with risks attributed to recent advances in machine learning and scale. This framing overlooks a recurring ethical structure: the delegation of moral authority to artificial agents. Ethical failures associated with AI are best understood as governance failures rooted in human design choices and accountability arrangements, even where opacity and limited control complicate responsibility attribution. Methods: A qualitative, interdisciplinary approach integrates historical–thematic analysis, comparative interpretation of technological artifacts, and visual–conceptual synthesis. Mythological figures (Talos, the Golem, Pygmalion), early mechanical automata, and foundational computational systems are analyzed as conceptual models of delegated artificial agency rather than technological precursors. Results: Across historical contexts, artificial agents exhibit consistent structural features: bounded autonomy, delegated authority, explicit override mechanisms, and dependence on human oversight. These features directly correspond to contemporary AI ethics concerns, including alignment failures, responsibility gaps, human-in-the-loop control, and system interruptibility. Discussion: The analysis establishes that ethical risk in AI arises from the displacement of human responsibility rather than from machine autonomy. By situating AI within a longer history of artificial agency, the study provides a normative framework that locates moral responsibility unambiguously in human actors and institutions, with direct implications for AI governance and accountability.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1792860</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1792860</link>
        <title><![CDATA[Heart disease prediction using rough neutrosophic sets and dual-attention neural networks: RNS-OptiDANet]]></title>
        <pubDate>Mon, 13 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>T. Ashika</author><author>G. Hannah Grace</author>
        <description><![CDATA[Introduction: Heart disease is a major global health problem that highlights the need for effective and accurate prediction methods. Methods: This paper presents RNS-OptiDANet, a hybrid framework that combines rough set theory (RST), rough neutrosophic sets (RNS), and an optimized dual-attention neural network (OptiDANet) to predict heart disease. For feature selection, the QuickReduct method with the discernibility matrix (RST QRDM) was used. The features selected by RST were converted to RNS representations to handle uncertainty in the classification process. The OptiDANet model implements dual attention mechanisms, a Channel Attention Module (CAM) and a Soft Attention Mechanism (SAM), to highlight relevant patterns while reducing noise. Hyperparameter tuning with Optuna improved the performance of the framework and helped avoid overfitting. Finally, classification is conducted using a Random Forest (RF) model. Results: Experimental results demonstrate strong performance in terms of accuracy, precision, recall, and F1-score across datasets. Discussion: An eXplainable Artificial Intelligence (XAI) module is integrated to provide feature-level interpretability and clinical transparency, while an ablation study validates the contribution of each framework component, confirming the robustness and effectiveness of the proposed hybrid RNS-OptiDANet model.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1777258</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1777258</link>
        <title><![CDATA[Functional stability assessment and adaptation for critical infrastructure facilities]]></title>
        <pubDate>Mon, 13 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Victor Perederyi</author><author>Eugene Borchik</author><author>Viacheslav Zosimov</author><author>Oleksandra Bulgakova</author>
        <description><![CDATA[Introduction: Ensuring functional stability of critical infrastructure facilities (CIFs) under conditions of uncertainty and dynamic threats remains a critical challenge. Existing approaches insufficiently integrate technical, cybersecurity, and human-related factors. Methods: This study proposes an information-cognitive approach based on a hybrid model combining Bayesian Trust Networks and fuzzy logic. The model incorporates expert knowledge and evaluates the mutual influence of information security, cybersecurity, human factors, and vulnerability indicators. The Mamdani algorithm is used for probabilistic estimation under uncertainty. Results: Numerical experiments conducted in the GeNIe environment demonstrate that the proposed model effectively supports decision-making. Scenario analysis shows that adjusting key cybersecurity and vulnerability factors increases the probability of achieving sufficient functional stability above the critical threshold. Discussion: The proposed hybrid framework improves interpretability and adaptability of functional stability assessment. It enables flexible reasoning under uncertainty and supports real-time decision-making for critical infrastructure management. The approach can be applied across different categories of CIFs and extended with additional data-driven components.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1842850</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1842850</link>
        <title><![CDATA[Correction: Evaluation of large language models in generating and optimizing educational materials for neonatal home oxygen therapy]]></title>
        <pubDate>Mon, 13 Apr 2026 00:00:00 GMT</pubDate>
        <category>Correction</category>
        <author>Zhendong Liu</author><author>Xiaoping Yang</author><author>Yu Zhang</author><author>Yujing Xu</author><author>Yue Xiang</author><author>Hongyan Wang</author>
        <description></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1835185</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1835185</link>
        <title><![CDATA[Correction: The association between national culture and AI readiness: a cross-national study]]></title>
        <pubDate>Mon, 13 Apr 2026 00:00:00 GMT</pubDate>
        <category>Correction</category>
        <author>Frontiers Production Office</author>
        <description></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1713747</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1713747</link>
        <title><![CDATA[Explainable neuro-symbolic artificial intelligence for automated interpretation of corneal topography and early keratoconus detection]]></title>
        <pubDate>Mon, 13 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Mini Han Wang</author><author>Shuai Qin</author>
        <description><![CDATA[Background: Early detection of keratoconus is essential for preventing postoperative complications in refractive surgery and preserving long-term visual function. Although artificial intelligence has demonstrated strong potential in ophthalmic image analysis, many existing models operate as black-box systems and provide limited clinical interpretability. Transparent decision support is therefore critical for safe deployment of AI in clinical practice. Methods: We propose an explainable neuro-symbolic framework for automated interpretation of corneal topography reports and refractive surgery eligibility assessment. The proposed system integrates multimodal feature extraction, a symbolic corneal knowledge graph, probabilistic reasoning, and large language model (LLM)–based report generation. Quantitative biometric parameters and corneal curvature maps extracted from IOLMaster 700 reports were processed through a hybrid convolutional neural network–Vision Transformer (CNN–ViT) module to capture spatial corneal morphology. These representations were aligned with a clinically curated knowledge graph encoding relationships between corneal parameters, disease states, and surgical decision criteria. Bayesian probabilistic inference was then applied to estimate disease likelihoods, while an ensemble LLM module generated structured bilingual clinical reports explaining the reasoning process. Results: In a prospective pilot cohort of 20 eyes, the proposed framework demonstrated strong diagnostic performance for early keratoconus detection, achieving an area under the receiver operating characteristic curve (AUC) of approximately 0.95. Sensitivity and specificity remained high across decision thresholds, and the system achieved a balanced F1 score for refractive surgery eligibility classification. Expert evaluation indicated high interpretability and clinical usefulness of the generated reports. The end-to-end pipeline required approximately 95 ± 12 s per case, supporting near–real-time clinical decision support. Conclusion: The proposed neuro-symbolic framework combines deep representation learning, structured medical knowledge, and explainable language-based reporting to provide transparent AI-assisted corneal diagnostics. Although the current results are based on a pilot cohort, the framework demonstrates the potential of integrating neural networks, knowledge graphs, and large language models for interpretable ophthalmic AI systems. Future studies using larger multicenter datasets are required to further validate clinical performance and generalizability.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1763872</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1763872</link>
        <title><![CDATA[Hybrid neutrosophic enhanced MobileNetV2 model for leukemia blood cell classification]]></title>
        <pubDate>Fri, 10 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>V. B. Prahaladhan</author><author>Megha Suhanth</author><author>L. Jani Anbarasi</author><author>T. Ashika</author><author>V. Vishruth Reddy</author><author>R. Anvesh Reddy</author>
        <description><![CDATA[Introduction: Leukemia is a type of cancer that originates in the bone marrow, causing uncontrolled production of abnormal white blood cells that disrupt normal blood function and weaken the immune system. Manual inspection is time-consuming and error-prone, relying heavily on the expertise and experience of medical professionals. Methods: The proposed study presents a hybrid model for classifying leukemia by integrating transfer learning and neutrosophic domain enhancement. Neutrosophic domain transformation splits the RGB image into Truth (T), Falsity (F), and Indeterminacy (I) components to address uncertainty, ambiguity, and poor contrast in blood cell representations. This enables the enhancement of features more directly linked to leukemia identification. The images are enhanced using wavelet sharpening and contrast-limited adaptive histogram equalization (CLAHE) on the T component, total variation minimization (TVM) on the F component, and wavelet shrinkage denoising on the I component. Results: This framework was trained and tested on the Leukemia Blood Cell Image Classification dataset, which includes 3,256 peripheral blood smear (PBS) images across four classes: Benign, Early, Pre, and Pro. A transfer learning architecture based on MobileNetV2 was used for classification, with a 70:15:15 split for training, validation, and testing, respectively. The proposed neutrosophic-enhanced MobileNetV2 model achieved an overall testing accuracy of 98.36% and a macro F1-score of 0.98, demonstrating significant improvement in multi-class leukemia classification. Discussion: The incorporation of the neutrosophic enhancement method significantly improves classifier performance, yielding higher accuracy without increasing computational cost.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1763101</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1763101</link>
        <title><![CDATA[Enhancing age and gender verification in OTT accounts using deep learning techniques]]></title>
        <pubDate>Tue, 07 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>M. Sanjay</author><author>Pillaram Manoj</author><author>S. Graceline Jasmine</author><author>R. Sriganth</author><author>J. L. Febin Daya</author><author>Benin Russel</author>
        <description><![CDATA[Protecting children from inappropriate audio–visual content requires ensuring they are exposed only to age- and gender-appropriate material. This research presents a method for verifying user age on OTT (Over-The-Top) accounts with a tailored convolutional neural network (CNN) model, aimed at limiting access to inappropriate content. The proposed method evaluates the age appropriateness of content and restricts access to material unsuitable for children, addressing the common issue of children using their parents’ accounts. By identifying user age, OTT platforms can serve suitable content to customers, creating a safer and more secure digital environment for all users. This solution not only promotes responsible content consumption among minors but also reduces parental concerns about their children’s OTT usage. The proposed model achieves 91% accuracy for age and gender identification on the UTKFace dataset. A user interface (UI) for age verification is developed using OpenCV and the Flask framework. By addressing the crucial issue of child safety, preventing minors from accessing unsuitable material, and promoting responsible content consumption, the proposed solution establishes a new benchmark for OTT platforms.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1780967</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1780967</link>
        <title><![CDATA[TCMI-F-6D benchmark construction and quantitative assessment of interdisciplinary foundational competencies in traditional Chinese medicine informatics using large language models]]></title>
        <pubDate>Tue, 07 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Zhaohang Teng</author><author>Jing Chang</author><author>Yongxiang Xu</author><author>Hongxing Kan</author><author>Yangting Ou</author><author>Jili Hu</author><author>Zongyun Gu</author><author>Yinfeng Yang</author><author>Jianhua Shu</author>
        <description><![CDATA[Introduction: Traditional Chinese Medicine Informatics (TCMI), as an emerging interdisciplinary field, places high demands on foundational interdisciplinary competency assessment in its talent cultivation and research practices. However, Large Language Models (LLMs) currently lack a suitable quantitative assessment system tailored to the characteristics of TCMI. Methods: To address this gap, this study, grounded in Cognitive Hierarchy Theory and Disciplinary Knowledge Structure Theory, selected six core disciplines closely related to TCMI from the Massive Multitask Language Understanding (MMLU) dataset, constructed an evaluation framework for foundational interdisciplinary competency in TCMI-related scenarios, and established the TCMI-F-6D (TCMI-Foundation-6 Domain) benchmark together with a composite metric system. Three experiments evaluated the models’ baseline capability, learning gains, and performance stability. The experiments comprehensively assessed the competency of 20 LLMs across 8 categories and selected 6 models with weaker overall performance for focused analysis of their interdisciplinary competency characteristics. Results: Among the base models, ChatGLM3-6B performed best in interdisciplinary knowledge integration (43.97%), while DeepSeek-V3.1 achieved the best overall application performance (80.87%) among the chat models. Qwen-14B-Chat also demonstrated stable and predictable learning performance under varying example conditions, with an average learning gain of 5.60% and a 95% confidence interval (CI) of [5.50%, 5.70%]. Discussion: Collectively, this study clarifies the differences in foundational interdisciplinary competency among LLMs in this discipline, providing a quantifiable assessment framework, methodological support, and empirical evidence for TCMI education, research tool selection, and the implementation of a standardized interdisciplinary competency assessment system.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1776546</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1776546</link>
        <title><![CDATA[AI-augmented reliability in CI/CD: a framework for predictive, adaptive, and self-correcting pipelines]]></title>
        <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
        <category>Hypothesis and Theory</category>
        <author>Rohit Dhawan</author><author>Mohit Dhawan</author>
        <description><![CDATA[Modern CI/CD pipelines face a critical challenge: while AI tools accelerate code generation, static pipelines have become the primary bottleneck to delivery velocity. Flaky tests and pipeline noise create a persistent challenge, with reported failure rates of 11–27 percent for test flakiness and 5–16 percent for noise-induced build failures. This forces teams to spend more time investigating false failures than building features. As systems scale across regions and dependencies, these problems compound and threaten the fundamental promise of continuous delivery. We introduce a framework that transforms CI/CD pipelines from deterministic scripts into intelligent, adaptive systems. At its core is the Sense-Analyze-Predict-Act-Learn (SAPAL) loop, which extends classical adaptive models with CI/CD-specific capabilities including flakiness characterization, dependency risk scoring, multi-region awareness, and developer feedback. We operationalize this loop through a five-layer architecture spanning data collection, reliability intelligence, predictive modeling, adaptive execution, and human-AI collaboration. Three novel metrics quantify pipeline intelligence: the Pipeline Health Index measures overall reliability, the Test Stability Score identifies flaky patterns, and Failure Prediction Confidence validates model accuracy. Three scenarios demonstrate application to real CI/CD challenges. Intelligent retry strategies, grounded in empirical studies of flaky test detection and resolution, project a 60 percent reduction in flaky-induced build failures. ML-based test selection techniques from recent literature suggest a 50–80 percent reduction in feedback time. Stability-aware deployment orchestration adapts rollout strategies to regional reliability patterns. These projections synthesize findings from published component studies rather than measurements from a unified framework deployment. By enabling pipelines to learn from executions, predict with calibrated confidence, and adapt to behavior patterns, this framework provides a practical path toward reliable delivery at scale, where intelligence is essential, not optional.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1770922</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1770922</link>
        <title><![CDATA[Benchmark datasets for predictive maintenance challenges in steel manufacturing]]></title>
        <pubDate>Wed, 01 Apr 2026 00:00:00 GMT</pubDate>
        <category>Data Report</category>
        <author>Jakub Jakubowski</author><author>Szymon Bobek</author><author>Grzegorz J. Nalepa</author>
        <description></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1702756</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1702756</link>
        <title><![CDATA[A deep learning-based approach for detecting anomalous behavior in safety-critical spaces]]></title>
        <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Aqib Anees</author><author>Syed Asim Jalal</author><author>Hassan Jalil Hadi</author><author>Naveed Ahmad</author><author>Mohamad Ladan</author>
        <description><![CDATA[Wrong-turn violations in safety-critical spaces such as road roundabouts are a type of traffic violation that can lead to traffic congestion and increase the risk of road crashes. Although many researchers have focused on detecting various traffic violations, wrong-turn violations have not received enough attention. This may be due to a lack of relevant datasets. This study aims to address this gap. We developed a deep learning–based approach to detect wrong-turn traffic violations at roundabouts. The proposed system captures video from strategically placed cameras at roundabouts, which is then fed into an artificial intelligence (AI) model capable of detecting vehicles committing wrong-turn violations in real time. For this purpose, we utilized the popular You Only Look Once (YOLO) algorithm. Due to the absence of an existing dataset for this specific type of violation, we created our own. Images were collected and annotated from local roundabouts in Peshawar, Pakistan. The YOLO model was trained on this dataset and evaluated using standard performance metrics, including accuracy and recall. The results suggest that the proposed approach has strong potential for refinement and real-world implementation.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1819135</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1819135</link>
        <title><![CDATA[Correction: LSML-SF: a lightweight stacked ML approach for spreading factor allocation in mobile IoT LoRaWAN networks]]></title>
        <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
        <category>Correction</category>
        <author>Arshad Farhad</author><author>Muhammad Ali Lodhi</author><author>Farhan Nisar</author><author>Hassan Jalil Hadi</author><author>Naveed Ahmad</author><author>Mohamad Ladan</author>
        <description></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1754000</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1754000</link>
        <title><![CDATA[A federated multimodal deep learning framework for brain tumor classification using MRI]]></title>
        <pubDate>Mon, 30 Mar 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>K. Lakshmi Vasanthi</author><author>J. Sree Darshne</author><author>Pattabiraman Venkatasubbu</author><author>Parvathi Ramasubramanian</author>
        <description><![CDATA[Introduction: Brain tumor classification using MRI plays a critical role in early diagnosis and treatment planning. However, traditional centralized approaches require sharing sensitive medical data, which raises serious privacy concerns. Additionally, the distribution of data across multiple hospitals limits effective model training and utilization. Therefore, there is a strong need for privacy-preserving and distributed learning methods that ensure both security and accuracy in classification. Methods: In this work, a federated learning framework is proposed to enable collaborative model training without sharing raw data. To improve efficiency, a layer skipping mechanism is applied, which reduces communication cost during training. The FedPropSAG aggregation method is used to enhance convergence and overall model performance. Furthermore, Differential Privacy (DP) and Secure Aggregation (SA) techniques are incorporated to ensure data privacy and secure communication. Results: The proposed model achieves high classification accuracy across distributed datasets, demonstrating its effectiveness. The communication cost is significantly reduced due to the implementation of the layer skipping mechanism. The model performs well even under non-IID data distributions, which are common in real-world scenarios. Importantly, the integration of privacy-preserving techniques does not degrade the overall model performance. Discussion: The proposed approach provides an efficient and scalable solution for distributed medical data analysis. It ensures patient data privacy while still enabling collaborative learning across multiple institutions. The reduction in communication overhead makes the framework suitable for practical deployment in healthcare systems. Overall, the model successfully balances accuracy, efficiency, and privacy, making it a strong candidate for real-world applications.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1799522</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1799522</link>
        <title><![CDATA[AI algorithms and IoT platforms for anomaly and failure prediction in industrial machinery—systematic review]]></title>
        <pubDate>Thu, 26 Mar 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Mario Esteban Marín Vásquez</author><author>Juan Carlos Blandón Andrade</author><author>Alonso Toro Lazo</author><author>Jesús Alfonso López Sotelo</author>
        <description><![CDATA[Predictive Maintenance (PdM) focuses on anticipating potential failures in industrial machines by monitoring key parameters. Artificial Intelligence (AI) provides algorithms that can be used for this purpose. The specialized literature notes that some companies need to adopt more proactive and predictive strategies in the management of industrial maintenance. This study conducts a Systematic Literature Review (SLR) on Artificial Intelligence algorithms and on software and IoT platforms used for anomaly and failure prediction. The method includes six main phases: (i) defining the research questions; (ii) conducting a search process; (iii) establishing exclusion and inclusion criteria; (iv) performing a quality assessment of studies; (v) collecting data; and (vi) analyzing the data. The findings show that the main AI techniques for PdM are classified as: (i) Machine Learning-based methods; (ii) neural network-based methods; and (iii) knowledge transfer-based methods. Nine software and IoT technologies were identified that support maintenance operations. Additionally, it is discussed how Machine Learning and Deep Learning algorithms perform well in fault classification, prediction, Remaining Useful Life (RUL) estimation, and diagnostic tasks. They can also be applied in earlier stages, such as data preprocessing and feature extraction. Finally, it is shown that knowledge transfer can improve AI algorithms when sudden changes occur in data and their relationships. In conclusion, the AI technologies identified can significantly contribute to predicting failures in industrial machinery.]]></description>
        </item>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1794271</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1794271</link>
        <title><![CDATA[Techniques for mitigating overfitting in machine learning: a comprehensive review, taxonomy, and practical guide]]></title>
        <pubdate>2026-03-24T00:00:00Z</pubdate>
        <category>Review</category>
        <author>Alexander Pearson Sheppert</author>
        <description><![CDATA[Introduction: Overfitting remains a persistent barrier to reliable machine learning, especially in modern overparameterized deep models. Methods: We conducted a narrative review synthesizing approximately 95 core studies (1943–2026) identified through structured searches of IEEE Xplore, ACM Digital Library, arXiv, Google Scholar, and Semantic Scholar, extracting mechanisms, assumptions, and empirical evidence for prominent methods for mitigating overfitting. Results: We organize techniques into a unified five-family taxonomy (parameter-, training-, data-, ensemble-, and objective-based) and provide a practical decision framework that maps data regimes, model families, and real-world scenarios to actionable regularization strategies. Conclusion: Overfitting mitigation benefits from coordinated choices in data, model capacity, optimization, and evaluation. Our taxonomy and decision framework help practitioners select complementary interventions and avoid common pitfalls such as leakage and over-regularization.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1777913</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1777913</link>
        <title><![CDATA[YOLOv11-Lite architecture for wildlife detection from drone images]]></title>
        <pubdate>2026-03-24T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Sherly Alphonse</author><author>Sahaya Beni Prathiba</author><author>Abhiram Sharma</author>
        <description><![CDATA[Introduction: Drones equipped with cameras are helpful in wildlife tracking. Deep learning has great potential for detecting wildlife, but is constrained by the challenge of detecting tiny objects, especially from higher altitudes. Methods: These limitations are addressed by an enhanced You Only Look Once 11 (YOLOv11-Lite) model. YOLOv11-Lite is a lightweight, edge-friendly variant of YOLOv11 that reduces computational complexity while maintaining high detection accuracy. Standard Convolution + Batch Normalization + SiLU (CBS) blocks are replaced with Depthwise-CBS units, which reduce the number of parameters and FLOPs. The enhanced version employs a Spatial Reasoning-Enhanced Coordinate Attention-based Simple Attention Module (CA-SimAM) for improved feature representation, Dynamic Sampling (DySample) for adaptive sampling, and a bounding-box IoU for accurate localization. The C2 block with the Parallel Split Attention (C2PSA) module is also replaced with a Ghost-ELAN block, as it enables ghost feature generation and multi-branch ELAN aggregation, achieving good performance with fewer computations. Results: The multiscale detection head aids in detecting smaller animals. The enhanced model achieves an mAP@0.5 of 98.5% and an mAP@0.5:0.95 of 94.7% on the WAID dataset. Discussion: The performance of the model is assessed through comparative tests, which demonstrate the superiority of the enhanced YOLOv11-Lite model over existing algorithms. The proposed approach supports UAV-based wildlife monitoring and improves detection performance and generalization under real-world conditions.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1772418</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1772418</link>
        <title><![CDATA[Toward LLM-aware software effort estimation: a conceptual framework]]></title>
        <pubdate>2026-03-23T00:00:00Z</pubdate>
        <category>Conceptual Analysis</category>
        <author>Feisal Alaswad</author><author>Eswaran Poovammal</author><author>Batoul Aljaddouh</author>
        <description><![CDATA[Software effort estimation has traditionally been grounded in the assumption that development cost is primarily driven by human labor, approximated through proxies such as code size, functional complexity, or perceived task difficulty. The increasing adoption of large language models (LLMs) as software development assistants challenges this assumption by automating substantial portions of reasoning, coding, and refactoring. In LLM-assisted workflows, effort increasingly shifts toward interaction management, validation, correction, and integration, leading to growing misalignment between established estimation techniques—such as COCOMO, Function Points, and Story Points—and actual development cost. This paper argues that the limitations of existing estimation models in LLM-mediated development are structural rather than parametric when core development activities are delegated to automated reasoning systems. Through conceptual analysis supported by exploratory observations, we illustrate systematic mismatches between traditional effort estimates and LLM-assisted task execution, particularly in agile environments that rely on Story Points. To address this gap, we introduce a unified conceptual foundation for LLM-aware software effort estimation. We reconceptualize effort as Hybrid Intelligence Effort, emerging from the interaction between LLM cognitive complexity and human oversight effort. We further identify five core dimensions governing effort in LLM-assisted development: LLM reasoning complexity, context and information completeness, code transformation impact, iterative reasoning cycles, and human oversight effort. These dimensions capture cost drivers that are largely absent from conventional estimation theory. Rather than proposing a parametric estimation model, this work establishes a theoretical foundation for future empirical calibration and data-driven approaches. By redefining what constitutes effort in the presence of LLMs, the paper contributes a conceptual basis for estimation models aligned with contemporary AI-augmented software engineering practices.]]></description>
      </item>
      </channel>
    </rss>