<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Artificial Intelligence | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/artificial-intelligence</link>
        <description>RSS Feed for Frontiers in Artificial Intelligence | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Wed, 22 Apr 2026 01:18:55 GMT</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1802559</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1802559</link>
        <title><![CDATA[Intelligent neonatal healthcare: a systematic review of machine learning architectures integrating the internet of medical things and blockchain]]></title>
        <pubDate>Tue, 21 Apr 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Sarlinraj Madhalaimuthu</author><author>R. Sujatha</author>
        <description><![CDATA[Background and motivation: Neonatal healthcare involves managing extreme physiological vulnerability, rapid disease progression, and time-critical decision-making within Neonatal Intensive Care Units (NICUs). Recent developments in blockchain, the Internet of Medical Things (IoMT), and Machine Learning (ML) have created new opportunities for enhanced health-data governance, continuous physiological monitoring, and risk prevention. However, previous studies have primarily evaluated these technologies in isolation or in adult healthcare settings, with little consideration of their combined relevance to neonatal care. Methods: In this study, a PRISMA-guided systematic review was conducted to examine intelligent neonatal healthcare systems that integrate ML, IoMT, and blockchain technologies. A systematic search of major scientific databases identified 122 records, of which 76 studies satisfied predefined inclusion criteria and were included in the qualitative synthesis. The selected studies were analyzed according to clinical areas of application, system architecture, evaluation practices, and implementation limitations specific to neonatal contexts. Synthesis of current evidence and identified research gaps: The review indicates that ML-based approaches are the most mature, particularly for early disease detection, mortality risk prediction, and clinical decision support. IoMT-based systems are used primarily for continuous and remote physiological monitoring, whereas blockchain-based solutions remain largely conceptual or prototype-stage, focusing on data integrity, access control, and trust. Fully integrated ML–IoMT–blockchain systems designed specifically for neonates remain scarce, and problems with interoperability, scalability, clinical validation, and AI lifecycle governance persist. Taken together, this review consolidates disparate evidence, identifies important research gaps, and outlines future research directions aligned with Sustainable Development Goal 3 toward secure, trustworthy, and clinically useful intelligent neonatal healthcare systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1801342</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1801342</link>
        <title><![CDATA[A novel diffuse liver nodule detector via integrating semantic edge features and probabilistic uncertainty modeling]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Lei Tian</author><author>Xiang Liu</author><author>Yunyu Shi</author><author>Yu Ji</author><author>Shuohong Wang</author>
        <description><![CDATA[Introduction: Ultrasound image segmentation of diffuse liver fibrosis nodules confronts three critical challenges: boundary ambiguity caused by gradual tissue transitions, texture heterogeneity arising from fibrotic variations, and inadequate uncertainty quantification, which manifests as overconfident misclassifications at fibrotic nodules. Methods: To address these challenges, this paper proposes the Edge-Semantics Probabilistic Dirichlet Network (ESPD-Net), which integrates Dirichlet evidential theory into diffuse lesion segmentation for nodule detection. ESPD-Net employs three synergistic components: (1) the Semantic-Probabilistic Dual Path Fusion (SPDF) bottleneck constructs parallel semantic and probabilistic pathways to capture local morphological and global distribution features; (2) the Dirichlet Evidential Guided Decoder (DEGD) reformulates segmentation as second-order probabilistic modeling under evidential theory, guiding the adaptive feature decoding process by outputting calibrated uncertainty distributions; (3) guided by these localized high-uncertainty regions, the Dirichlet Boundary Aware Refinement (DBAR) module executes targeted geometric corrections to precisely align ambiguous lesion boundaries. Results: Evaluations on murine and clinical datasets demonstrate that ESPD-Net significantly outperforms state-of-the-art methods. Specifically, it achieves a Dice of 0.855 (+0.043) and an IoU of 0.747 (+0.057). Furthermore, the model effectively minimizes calibration and boundary errors, reducing ECE to 3.85% and HD95 to 3.25. Discussion: These findings demonstrate that the proposed ESPD-Net effectively addresses the challenges of diffuse lesion segmentation, confirming its clinical potential for computer-aided diagnosis.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1766576</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1766576</link>
        <title><![CDATA[Brief report: Artificial intelligence meets small cell lung cancer—integrating clinicopathological and whole-slide image data for prognostic prediction in SCLC]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Pedro Rocha</author><author>Joan Gibert</author><author>Silvía Menendez</author><author>Raúl del Rey-Vergara</author><author>Albert Iñañez</author><author>Laura Masfarré</author><author>Nil Navarro</author><author>Alejandro Ríos-Hoyo</author><author>Sandra Perez</author><author>Álvaro Taus</author><author>Mario Giner</author><author>Ana Rovira</author><author>Luis León-Mateos</author><author>Dolores Isla</author><author>Luis Paz-Ares</author><author>Jon Zugazagoitia</author><author>Cristina Martí Blanco</author><author>Rosario García-Campelo</author><author>Alberto Moreno-Vega</author><author>Ángel Callejo</author><author>Federico Rojo</author><author>Ignacio Sanchéz</author><author>Edurne Arriola</author>
        <description><![CDATA[Introduction: Small-cell lung cancer (SCLC) represents a unique clinical challenge characterized by its aggressive nature, poor prognosis, and limited therapeutic options. Upfront prediction of survival outcomes in this disease could impact patient care by refining risk stratification and thus personalizing treatment strategies. Here, we investigate the utility of a deep learning (DL) model using digital pathology to predict outcomes of patients diagnosed with SCLC. Methods: We built a random forest (RF) model using clinical data and a DL-based model using whole-slide images (WSIs) as inputs from a total of 307 patients diagnosed with SCLC, comprising a training set of 263 patients and a validation set of 44 patients who participated in the CANTABRICO phase IIIB clinical trial. Model performance was assessed using the area under the receiver operating characteristic curve (AUC) with 5-fold cross-validation to minimize the bias and variance of the performance estimates. We report the mean and 95% confidence interval of the AUC values across the folds. Results: In the training set, the RF model achieved an AUC of 0.728 (95% CI: 0.662–0.792) for long-term overall survival (LT_OS) prediction, while the combined RF and DL model achieved an AUC of 0.744 (95% CI: 0.680–0.807). For long-term progression-free survival (LT_PFS) prediction, the RF model achieved an AUC of 0.689 (95% CI: 0.625–0.753), whereas the combined model achieved an AUC of 0.704 (95% CI: 0.640–0.767). Application of the combined RF and DL model to the validation cohort yielded an AUC for LT_OS of 0.604 (95% CI: 0.582–0.626) and an AUC for LT_PFS of 0.690 (95% CI: 0.643–0.738), indicating potential clinical applicability. Conclusion: Our results showcase the feasibility of integrating clinicopathological data with WSIs through a deep learning model to predict outcomes in patients with SCLC. This approach holds promise for helping physicians personalize treatment strategies that better suit individual patient needs.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1767465</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1767465</link>
        <title><![CDATA[Urban tourist volume forecasting using internet search trends and deep learning methods]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Chenglin Song</author><author>Zhiming Wang</author>
        <description><![CDATA[Accurate forecasting of tourist arrivals in major urban destinations is critical for optimizing tourism resource allocation and formulating data-driven marketing strategies. To address this need, this study presents a novel prediction framework that integrates deep learning methodologies with online search behavior data. Specifically, we propose the DTN (Dynamic Tourism Network) model, which combines Disentangled Shape and Time series Normalization (Dish-TS) with Temporal Convolutional Networks (TCN), and utilizes Baidu Index data as a key indicator of online search trends to predict tourist arrivals in Sanya, China. Empirical validation across multiple evaluation metrics demonstrates that the DTN model consistently surpasses conventional deep learning approaches, achieving statistically significant improvements in predictive accuracy for tourist volume estimation. This advancement provides a robust analytical foundation for real‑time tourism demand forecasting in destination management systems. Notably, the proposed method has been evaluated only on a popular urban tourist destination with pronounced seasonality and available Baidu Index data; its applicability to other destination types or regions where different search engines dominate therefore requires further validation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1756665</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1756665</link>
        <title><![CDATA[Experience using artificial intelligence in the digital transformation of education: benefits and challenges]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Arturs Medveckis</author><author>Tamara Pigozne</author><author>Rita Birzina</author><author>Ivita Pelnena</author>
        <description><![CDATA[Aim: The aim of the research is to analyse teachers’ and adult educators’ experiences of the application of artificial intelligence in the digital transformation of education. Methods: A descriptive cross-sectional study design was chosen. The quantitative data collection instrument was the questionnaire “International Survey on Artificial Intelligence in Higher Education, Training and Adult Learning”, developed by the Singapore Institute of Adult Education within the framework of the 3rd network “Professionalization of Adult Educators in ASEM Countries” of the Asia-Europe Lifelong Learning and Education Research (ASEM LLL Hub). Secondary data were processed using descriptive statistics (mean, median, mode, standard deviation) and the Mann–Whitney test to determine the statistical significance of differences between two independent target groups (teachers and adult educators). Results: The research sample consisted of representatives of educational institutions of Latvia: 34 teachers and 83 adult educators. The descriptive statistics and Mann–Whitney test results show that teachers and adult educators give similar assessments of the impact of AI application on performance, work efficiency and productivity, decision-making, problem-solving, awareness-formation and interdisciplinary concept-application skills, mental health, learning outcomes, and challenges related to AI use (p ≥ 0.05). Teachers, compared with adult educators, have a higher opinion of the application of artificial intelligence for work purposes, whereas adult educators rate more highly the impact of AI on the development of learners’ awareness-formation skills, learners’ employment and work performance, and physical and social health (p ≤ 0.05). Conclusion: AI is a new global reality that opens up new paths for knowledge acquisition, while the social environment also faces new challenges. AI tools can be used by a wide range of users who have prior knowledge and skills in constantly changing IT applications. Despite the inertia of the education system and the length of bureaucratized decision-making, proactive action is needed to balance technological development with the acquisition of new knowledge and skills grounded in high moral standards at all levels of education, involving high-tech implementers and cooperating with educational staff, scientists, and other social partners.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1735157</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1735157</link>
        <title><![CDATA[Maize yield prediction using machine learning: a systematic literature review]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate>
        <category>Systematic Review</category>
        <author>Jabulani Nyengere</author><author>Frank Tchuwa</author><author>Harineck Mayamiko Tholo</author><author>Lucius Malalu</author><author>Allena Laura Njala</author><author>Petros Kachulu</author><author>Rodney Maganga</author><author>Brenda Matewere</author><author>Lackson Jamu</author><author>Clement Nyirenda</author><author>Jones Kanjira</author><author>Macdonald Chabwera</author><author>Patson Nalivata</author><author>Weston Mwase</author><author>Agness Mwangwela</author>
        <description><![CDATA[Introduction: Accurate maize yield prediction is critical for food security planning, particularly in sub-Saharan Africa, where maize is essential to national economies and livelihoods. This systematic review assesses the use of machine learning (ML) techniques in maize yield estimation, focusing on the methodologies, predictor variables, and results in peer-reviewed studies. Methods: The review followed the PRISMA 2021 guidelines, synthesizing 81 peer-reviewed studies published between 2014 and 2025. The analysis examined the ML algorithms, predictor variables, evaluation metrics, and methodological gaps identified in these studies. Results: The review found a significant increase in publications after 2021, reflecting growing confidence in the application of ML for agronomic decision support. Random Forest (49.4%), XGBoost (16.1%), and Support Vector Machines (12.4%) were the most common algorithms, with hybrid deep-learning frameworks showing superior performance. Environmental variables, remote-sensing indices, and soil properties were the most frequently used predictors. RMSE and R2 were the primary evaluation metrics. Discussion: The findings underscore the challenges of data scarcity, limited interpretability, and geographical imbalance in the research, with Africa contributing less than 25% of the studies. There is a need for open-access agricultural data systems, hybrid explainable AI frameworks, and capacity building in computational agronomy to improve the effectiveness of ML applications in maize yield prediction.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1808575</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1808575</link>
        <title><![CDATA[Commentary: Artificial intelligence and precision medicine: a pilot study predicting optimal ceftaroline dosage for pediatric patients]]></title>
        <pubDate>Mon, 20 Apr 2026 00:00:00 GMT</pubDate>
        <category>General Commentary</category>
        <author>Hassan Nawaz Tahir</author><author>Ehtisham Haider</author><author>Shahnila Javed</author><author>Mursala Tahir</author><author>Yousaf Ali</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1793305</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1793305</link>
        <title><![CDATA[Enhancing segmentation fairness through curriculum learning and progressive loss: a centralized and federated perspective on radiograph analysis]]></title>
        <pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ehsan E. Alam</author><author>Nickolas Littlefield</author><author>Arash Shaban-Nejad</author><author>Hamidreza Moradi</author>
        <description><![CDATA[Background: Bias in medical image segmentation can lead to unequal performance across demographic subgroups, raising concerns about fairness and reliability in clinical AI systems. While deep learning models have achieved high segmentation accuracy, ensuring equitable performance across race and gender remains a significant challenge, particularly in privacy-sensitive healthcare environments. Methods: This study investigates fairness-aware medical image segmentation for hip and knee radiographs using deep learning models evaluated in both centralized and Federated Learning (FL) settings. We introduce Curriculum Learning (CL) strategies and Progressive Loss (PL) functions to regulate sample difficulty during training. In addition, we propose two novel fairness-oriented federated learning algorithms, Federated Intersection over Union (FedIoU) and Federated Intersection over Union with Outlier Analysis (FedIoUoutlier). Experiments are conducted using multiple segmentation backbones and simulated multi-site data partitions derived from the Osteoarthritis Initiative dataset. Model performance is evaluated using Intersection over Union (IoU), IoU standard deviation, Skewed Error Ratio (SER), and Min-Max Disparity across race and gender subgroups. Statistical significance was verified using paired t-tests to compare per-sample IoU performance against baseline configurations. Results: Across both hip and knee segmentation tasks, curriculum learning and progressive loss strategies consistently improved segmentation accuracy and reduced demographic performance disparities in centralized training. In federated settings, fairness-aware aggregation further enhanced performance. Notably, FedIoUoutlier combined with balanced curriculum learning and tiered progressive loss achieved the highest mean IoU while yielding the lowest SER and Min-Max Disparity, indicating improved fairness without sacrificing accuracy. In several configurations, federated models matched or exceeded the performance of optimized centralized models, with statistically significant improvements in per-sample IoU over baseline configurations. Conclusion: The results demonstrate that structured training strategies and fairness-aware federated aggregation can jointly improve accuracy, stability, and demographic fairness in medical image segmentation. By integrating curriculum learning, progressive loss, and novel FL algorithms, this work provides a practical pathway toward equitable and privacy-preserving AI systems for medical imaging.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1760137</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1760137</link>
        <title><![CDATA[Machine learning based approach to intrusion detection in internet of things environments]]></title>
        <pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Oluwatoyin Esther Akinbowale</author><author>Adebola Tajudeen Adesina</author><author>Mulatu Fekadu Zerihun</author><author>Polly Mashigo</author>
        <description><![CDATA[The growing security requirements of Internet of Things (IoT) networks, where heterogeneous networks and resource-constrained devices present an exponentially increased attack surface, were addressed using a machine learning based intrusion detection system. Open-source secondary quantitative IoT intrusion traffic data was obtained and used to train machine learning models. The dataset comprises over one million labeled flow records covering 34 kinds of attacks and benign traffic. First, extensive preprocessing, including handling of missing values, feature encoding, scaling, and redundancy removal, was carried out, followed by the training of three supervised machine learning (ML) classifiers, namely Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM), to differentiate the different types of intrusions. The performance of the ML models was evaluated using accuracy, precision, recall, and F1-score. The Decision Tree model was the most outstanding in terms of overall accuracy (99.36%), with respectable performance in prevalent attack classes, closely followed by Random Forest (99.27%), while SVM lagged behind with an accuracy of 80.08% due to computational constraints in handling massive amounts of data. Inter-arrival time and total packet size were identified through feature-importance analysis as the most significant discriminators of malicious behavior. In conclusion, tree-based models, and specifically Decision Trees, offer an extremely effective and interpretable solution for real-time IoT intrusion detection; future avenues include handling class imbalance and examining lightweight, ensemble, and deep-learning approaches for robust detection of rare and unknown threats. This study contributes to cybersecurity through the identification and classification of intrusions in IoT devices for proper mitigation.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1770342</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1770342</link>
        <title><![CDATA[FSD-Net: underwater object detection based on frequency and spatial domain feature enhancement]]></title>
        <pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Chao Zhang</author><author>Shuang Wu</author><author>Baohua Huang</author><author>Binchen Zhao</author><author>Fengqi Cui</author><author>Xingkun Li</author>
        <description><![CDATA[Background: Complex underwater visual conditions cause severe missed and false detections in conventional object detection models, hindering reliable autonomous underwater exploration. This work addresses these key performance limitations. Methods: We propose FSD-Net, a novel underwater detection model with two core enhancement modules. The Frequency Attention Convolution Module reduces missed detections via frequency-domain spatial feature preservation, and the Multi-dimensional Feature Enhancement Module suppresses false detections via enhanced semantic fusion. Experiments include ablation studies and state-of-the-art method comparisons on the UTDAC2020 and Brackish datasets. Results: FSD-Net achieves state-of-the-art performance on both tested datasets. On the UTDAC2020 dataset, it reaches 85.7% AP50 and 82.5% F1-score, with a 3.8% AP50 improvement over the baseline model. On the Brackish dataset, it achieves 98.1% AP50 and 97.0% F1-score, with a 3.9% AP50 improvement over the baseline. The model outperforms all compared mainstream detection algorithms, and ablation studies validate the effectiveness of both proposed modules. Conclusion: FSD-Net's joint frequency-spatial enhancement strategy effectively mitigates underwater image degradation challenges, providing a robust detection solution for autonomous underwater exploration. The proposed dual-module design offers a practical reference for detection model optimization in complex visual environments, with future work focused on lightweight model optimization.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1771431</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1771431</link>
        <title><![CDATA[Fusing appearance and vein morphology using dual-branch deep networks for accurate medicinal plant identification]]></title>
        <pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Chembon Rajeendran Karthik</author><author>Parthiban Maheswari Adithya</author><author>Naveen Nidadavolu</author><author>Ananthakrishnan Balasundaram</author><author>Ayesha Shaik</author>
        <description><![CDATA[Accurate identification of medicinal plants from leaf images is essential for pharmacognosy, biodiversity conservation, and agricultural decisions. However, accurate identification of medicinal leaves still poses a significant challenge in real-world conditions due to high similarity between species, variability within classes, uneven lighting, background clutter, partial views, and occlusions. Existing RGB-based deep models often overfit to color-texture cues that vary with environmental conditions, whereas venation-based (skeleton) methods provide anatomically stable morphology but inherently suppress the critical appearance information needed to distinguish visually similar species. In this study, we introduce a novel dual-branch deep learning framework that explicitly separates and preserves appearance and venation learning using two independent pre-trained feature extractors, instead of relying on traditional fusion methods that combine the modalities at the input level or compress both cues into a single fused image stream. Specifically, MobileNetV2 is used to capture global appearance descriptors (texture, pigmentation, and shape), while DenseNet121 learns fine-grained vascular topology from skeletonized vein representations; the resulting embeddings are then combined via late feature-level fusion to form a unified discriminative representation that minimizes modality interference and maximizes complementarity. To further improve robustness and reduce bias introduced by dataset imbalance, we integrate a class-frequency-aware augmentation strategy that adaptively strengthens minority-class transformations while preserving majority-class fidelity, alongside transfer learning, class weighting, and regularization. The proposed approach is trained and evaluated on a curated dataset of 14,344 paired RGB-skeleton images spanning seven medicinal plant species. It is rigorously benchmarked against RGB-only, skeleton-only, and fused-image baselines. Experimental results show that the proposed dual-branch model achieves 97% overall accuracy with high precision, recall, and F1-score, demonstrating that structured dual-stream learning of appearance and vein morphology provides a solution for medicinal plant recognition with the potential for robust performance in changing, real-world settings.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1828869</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1828869</link>
        <title><![CDATA[Editorial: Computational intelligence for multimodal biomedical data fusion]]></title>
        <pubDate>Fri, 17 Apr 2026 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Moolchand Sharma</author><author>Umesh Gupta</author><author>Oana Geman</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1794923</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1794923</link>
        <title><![CDATA[Automatic recognition of dynamic signs of Mexican sign language using deep learning]]></title>
        <pubDate>Thu, 16 Apr 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Jesús Antonio Navarrete-López</author><author>Michelle Sainos-Vizuett</author><author>Irvin Hussein Lopez-Nava</author>
        <description><![CDATA[Introduction: Over four million individuals in Mexico face communication barriers due to hearing impairments. Sign language serves as an essential communication tool within the deaf community; however, automatic translation between sign and oral languages remains a significant challenge. This study proposes an approach for recognizing dynamic gestures from Mexican Sign Language (LSM) to support the development of assistive communication technologies. Methods: In collaboration with expert interpreters, an LSM corpus comprising 121 signs was developed, including a specialized lexicon focused on medical emergencies and accident scenarios. A standardized video acquisition protocol was implemented with both expert and non-expert participants. The proposed methodology consists of skeletal keypoint extraction using MediaPipe, data augmentation through frame sampling, and dataset normalization. Multiple deep learning architectures were evaluated, including ResNet, Simple RNN, LSTM, Bidirectional LSTM (BiLSTM), Gated Recurrent Units (GRU), a Transformer encoder, and a hybrid ResNet–Transformer model. Results: Among the evaluated models, the ResNet architecture achieved the best performance, obtaining an F1-score of 0.948 under subject-independent evaluation, with an average inference time of 0.468 seconds. Hyperparameter optimization analysis indicated that performance improvements were primarily driven by training dynamics and regularization strategies rather than increases in architectural depth. Discussion: The results demonstrate the effectiveness of deep learning–based approaches for dynamic LSM gesture recognition and highlight the importance of optimization strategies for robust generalization. This work contributes toward LSM-to-Spanish translation systems and provides a foundation for advancing data-driven sign language recognition technologies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1745928</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1745928</link>
        <title><![CDATA[A structured framework for effective and responsible generative artificial intelligence chatbot prompt engineering throughout the scientific process: a comprehensive guide for the health and medical researcher]]></title>
        <pubDate>Thu, 16 Apr 2026 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Jeremy Y. Ng</author>
        <description><![CDATA[Generative artificial intelligence (GenAI) chatbots powered by large language models (LLMs) are becoming increasingly integrated into health and medical research workflows, offering researchers new tools to enhance efficiency, support innovation, and assist with knowledge translation. Although their use in health and medical research is expanding rapidly, the practical application of these tools across the broader health and medical research landscape remains complex and evolving. Health and medical researchers often engage with complex study designs, theoretical frameworks, and population needs, all of which require thoughtful, effective and responsible use when involving AI tools. This 10-chapter guide serves as a practical, evidence-informed resource for health and medical researchers to engage effectively and responsibly with GenAI chatbots through the practice of prompt engineering, the design of clear, structured, and purposeful prompts that guide GenAI chatbot outputs. It presents strategies to improve prompt quality and adapt GenAI chatbot interactions to the varied methodological and disciplinary contexts found across health and medical research. The article outlines a structured framework for how GenAI chatbots can be applied throughout the research cycle, including research question development, study design, literature searching, querying for appropriate reporting guidelines and appraisal tools, quantitative and qualitative data analysis, writing and dissemination, and implementation. AI-generated content should be treated as a preliminary draft and must always be reviewed, verified against credible sources, and aligned with disciplinary standards. Risks such as hallucinated content, embedded biases, and ethical challenges are addressed, particularly in sensitive or high-stakes settings. Transparency in AI use and researcher accountability are essential. 
While GenAI chatbots have the potential to expand access to research support and foster innovation, they cannot replace critical thinking, methodological rigour, or contextual understanding. Instead, they should augment, not replace, human expertise. This guide encourages effective and responsible use of GenAI chatbots and supports their thoughtful integration into the health and medical research process.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1809142</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1809142</link>
        <title><![CDATA[MedChat: a fully offline multimodal AI system for privacy-preserving clinical anamnesis]]></title>
        <pubdate>2026-04-16T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Jan Benedikt Ruhland</author><author>Doguhan Bahcivan</author><author>Jan-Peter Sowa</author><author>Ali Canbay</author><author>Dominik Heider</author>
        <description><![CDATA[Recent advances in large language models have made it possible to achieve high conversational performance with substantially reduced computational demands, enabling practical on-site deployment in clinical environments. Such progress allows for local integration of AI systems that uphold strict data protection and patient privacy requirements, yet their secure implementation in medicine necessitates careful consideration of ethical, regulatory, and technical constraints. In this study, we introduce MedChat, a locally deployable virtual physician framework that integrates an LLM-based medical chatbot with a diffusion-driven avatar for automated and structured anamnesis. The chatbot was fine-tuned on a corpus of LLM-generated medical dialogues derived from publicly available symptom-disease datasets, enabling scalable and privacy-preserving training. A secure and isolated database interface was implemented to ensure complete separation between patient data and the model’s inference process. The avatar component was realized through a conditional diffusion model operating in latent space, trained on researcher video datasets and synchronized with mel-frequency audio features for realistic speech and facial animation. We demonstrate that the complete multimodal pipeline can operate fully offline on consumer-grade hardware while maintaining interactive response times (average latency: 2.9 ± 0.3 s) and stable system performance. Preliminary evaluation of generated dialogue indicates high linguistic coherence, supporting its suitability for structured anamnesis tasks. MedChat provides a privacy-preserving, resource-efficient, and multimodal solution for clinical data collection. While clinical validation is ongoing, the presented framework establishes a foundation for secure, locally deployable AI-assisted anamnesis in real-world healthcare settings.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1806691</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1806691</link>
        <title><![CDATA[Applications of artificial intelligence in postoperative surveillance and management of esophageal squamous cell carcinoma]]></title>
        <pubdate>2026-04-15T00:00:00Z</pubdate>
        <category>Mini Review</category>
        <author>Kexun Li</author><author>Zilong Qian</author><author>Jie Mao</author><author>Simiao Lu</author><author>Jianzhe Zhang</author><author>Yongtao Han</author><author>Xuefeng Leng</author>
        <description><![CDATA[Esophageal squamous cell carcinoma (ESCC) carries high risks of postoperative recurrence, complications, and prolonged nutritional and functional recovery, while conventional follow-up (scheduled visits with imaging, endoscopy, and laboratory testing) is often limited by delays and resource constraints. This review summarizes recent applications of artificial intelligence (AI) across perioperative ESCC care, with emphasis on postoperative surveillance and management. PubMed/MEDLINE and other databases were searched (inception–2025) for English-language studies applying machine learning, deep learning, radiomics, natural language processing (NLP), and digital health algorithms to postoperative monitoring, recurrence prediction, complication warning, and remote follow-up. Evidence indicates that AI-enabled multimodal models integrating electronic health records, imaging radiomics, and biomarkers can predict major complications (e.g., anastomotic leak and pneumonia) with improved timeliness, enabling earlier intervention compared with symptom-triggered workflows. Imaging-driven radiomics combined with machine learning demonstrates robust performance for recurrence risk and recurrence-pattern prediction, supporting refined risk stratification beyond TNM staging and informing individualized surveillance intensity and adjuvant decision-making. Explainable approaches (e.g., SHAP) enhance clinical interpretability by identifying key predictors such as nutritional and inflammatory indices. Intelligent follow-up systems incorporating NLP, wearable sensors, and electronic patient-reported outcomes (ePROs) facilitate closed-loop monitoring, improve early issue detection, and strengthen patient–clinician communication.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1800302</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1800302</link>
        <title><![CDATA[Delegated agency and moral responsibility in artificial intelligence]]></title>
        <pubdate>2026-04-15T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Petar Radanliev</author>
        <description><![CDATA[Introduction: Artificial intelligence ethics is often framed as a response to unprecedented technical autonomy, with risks attributed to recent advances in machine learning and scale. This framing overlooks a recurring ethical structure: the delegation of moral authority to artificial agents. Ethical failures associated with AI are best understood as governance failures rooted in human design choices and accountability arrangements, even where opacity and limited control complicate responsibility attribution. Methods: A qualitative, interdisciplinary approach integrates historical–thematic analysis, comparative interpretation of technological artifacts, and visual–conceptual synthesis. Mythological figures (Talos, the Golem, Pygmalion), early mechanical automata, and foundational computational systems are analyzed as conceptual models of delegated artificial agency rather than technological precursors. Results: Across historical contexts, artificial agents exhibit consistent structural features: bounded autonomy, delegated authority, explicit override mechanisms, and dependence on human oversight. These features directly correspond to contemporary AI ethics concerns, including alignment failures, responsibility gaps, human-in-the-loop control, and system interruptibility. Discussion: The analysis establishes that ethical risk in AI arises from the displacement of human responsibility rather than from machine autonomy. By situating AI within a longer history of artificial agency, the study provides a normative framework that locates moral responsibility unambiguously in human actors and institutions, with direct implications for AI governance and accountability.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1812408</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1812408</link>
        <title><![CDATA[Legal and ethical reflections on the use of artificial intelligence in the diagnosis and treatment of cancer: who assumes responsibility?]]></title>
        <pubdate>2026-04-15T00:00:00Z</pubdate>
        <category>Perspective</category>
        <author>Virgiliu-Mihail Prunoiu</author><author>Ovidiu Juverdeanu</author><author>Codruta Cosma</author><author>Simion Laurentiu</author><author>Victor Strâmbu</author><author>Adrian Radu Petru</author><author>Mihai Stana</author><author>Mircea-Nicolae Brătucu</author>
        <description><![CDATA[Artificial intelligence (AI) offers multiple advantages, such as improved diagnostic accuracy, a reduced workload for doctors, and lower hospitalization costs, and it is becoming increasingly widespread, studied, and applied in medicine. AI is already used in image recognition, has haptic perception, and can manipulate instruments; thus, surgical robots will likely be driven by AI, and machine learning (ML)-based systems are expected to emerge in the near future. The use of AI and the study of the specialty literature raise ethical and legal questions for which there is no unanimous answer yet. Medical liability (malpractice) for AI-related errors and harm to the patient prompts legal reflection on this topic. AI diagnostic algorithms raise questions about the risks of using AI in the diagnosis and treatment of cancer (especially in rare cases) and about the information provided to the patient, all of which have moral and legal implications, as well as an impact on the empathic doctor–patient relationship. Indeed, the use of AI in the medical field has triggered a revolution in the doctor–patient relationship, but it also carries potential medico-legal consequences. The current legal framework regulating medical liability when AI is applied is inadequate and requires urgent measures, because there is no specific, uniform legislation governing the liability of the various parties involved in applying AI, or of its end-users. Consequently, greater attention should be paid to the risks of applying AI, to the necessity of regulating its safe use, and to maintaining patient safety standards by continuously adapting and updating the system.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1751148</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1751148</link>
        <title><![CDATA[CompoundDenseNet: a novel approach for accurate recognition of Bangla handwritten compound characters]]></title>
        <pubdate>2026-04-15T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Nazia Alfaz</author><author>Talha Bin Sarwar</author><author>Fahmid Al Farid</author><author>Md Saef Ullah Miah</author><author>Sadia Afrin</author><author>Shakila Rahman</author><author>Jia Uddin</author><author>Hezerul bin Abdul Karim</author>
        <description><![CDATA[Bangla, one of the most widely spoken languages in the world, presents major challenges in handwritten character recognition because of its complex compound characters with intricate shapes, diverse writing styles, and structural similarities. These features make Bangla a representative example of the complex scripts that remain difficult for conventional Optical Character Recognition (OCR) systems. This study focuses on improving the recognition of Bangla handwritten compound characters using a modified DenseNet architecture named CompoundDenseNet. The architecture enhances feature extraction and reuse to better capture the visual variations and fine structural details that existing models often struggle to handle. Its performance was evaluated on three benchmark datasets, BanglaLekha Isolated, Ekush, and CMATERdb, achieving recognition accuracies of 98.5%, 98%, and 96.2%, respectively, surpassing previously reported methods. Misclassification analysis using a confusion matrix revealed that the Adam optimizer produced the most stable and accurate results, with faster convergence than the other optimizers tested. While the results demonstrate significant progress, the study also highlights the need for larger and more diverse datasets. Overall, CompoundDenseNet advances Bangla handwritten compound character recognition and has the potential to enhance real-world applications such as education, legal documentation, and digital accessibility in Bangla language technologies.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frai.2026.1816684</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frai.2026.1816684</link>
        <title><![CDATA[Editorial: Advancing AI-driven code generation and synthesis: challenges, metrics, and ethical implications]]></title>
        <pubdate>2026-04-15T00:00:00Z</pubdate>
        <category>Editorial</category>
        <author>Sumeet Kaur Sehra</author><author>Sukhjit Singh Sehra</author><author>David S. Allison</author><author>Jaiteg Singh</author>
        <description></description>
      </item>
      </channel>
    </rss>