<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Computer Science | Human-Media Interaction section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/computer-science/sections/human-media-interaction</link>
        <description>RSS Feed for Human-Media Interaction section in the Frontiers in Computer Science journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Sat, 09 May 2026 17:35:26 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1758333</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1758333</link>
        <title><![CDATA[Node-Sampling: adaptive multi-agent optimization in medical education]]></title>
        <pubDate>Thu, 23 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Lilly Marie Düsterbeck</author><author>Michael Größler</author><author>Graziella Credidio</author><author>Louis Bellmann</author><author>Layla Tabea Riemann</author>
        <description><![CDATA[Introduction: Differences in prior knowledge among incoming medical students pose a persistent challenge for universities. To promote more individualized and equitable preparation, a large language model-based learning platform is being developed at the University Medical Center Hamburg-Eppendorf. A central component of this platform is the automated generation of multiple-choice questions (MCQs) from curated medical materials. However, ensuring their educational quality remains difficult, particularly when relying on smaller, locally deployed language models.
Methods: This study introduces Node-Sampling, a self-optimizing multi-agent approach for improving MCQ quality. The method identifies efficient refinement strategies by modeling agents as an adaptive sequence optimized through the REINFORCE algorithm.
Results: Expert evaluations showed that Node-Sampling significantly enhanced the quality of question stems compared to a fixed baseline. Importantly, Node-Sampling achieved this performance with an effective three-agent configuration, requiring only 33% of the original resources. Results for answer options were less consistent.
Discussion: The results highlight the potential of adaptive multi-agent optimization to strengthen automated question refinement. Node-Sampling therefore presents a sustainable and promising approach to improving MCQ quality and supports more effective, personalized preparation for medical students.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1772813</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1772813</link>
        <title><![CDATA[Trust rises, attention falls: divergent effects of exposure and education in driving automation]]></title>
        <pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Hanna Chouchane</author><author>Yuki Sakamura</author><author>Kenji Sato</author><author>Genya Abe</author><author>Makoto Itoh</author>
        <description><![CDATA[Introduction: Drivers supervising Level 2 automation must maintain situation awareness while the system controls steering and speed. Miscalibrated trust can contribute to overreliance and lapses in monitoring, whereas insufficient trust leads to disuse. Prolonged supervision is associated with increased mind-wandering, which can slow reactions to critical events. This study tested whether brief educational interventions affect trust, attention, and takeover readiness during Level 2 driving. Our focus on brief interventions reflects the short, time-constrained onboarding that drivers typically receive when adopting driving automation systems.
Methods: Fifty-five licensed drivers with no prior hands-on experience of Level 2 automation completed a 15-min automated highway drive. Participants received either minimal instruction (Basic), capability-focused education (Knowledge-based), or limitation-focused education (Rule-based). Trust was measured at four time points; additional measures captured self-reported mind-wandering, gaze behavior, and takeover reaction time.
Results: Trust increased significantly over time in all groups, and educational framing did not alter this trajectory. Capability-focused education enhanced monitoring of the human-machine interface on two false discovery rate-corrected metrics and produced faster takeover reactions than limitation-focused education (no difference vs. Basic). Across participants, greater trust growth correlated with higher mind-wandering, while more structured gaze was associated with lower mind-wandering.
Discussion: Overall, trust formation appeared to be primarily associated with direct experience of system performance, whereas targeted education refined what drivers monitored and how quickly they responded. These findings clarify the distinct roles of experience and brief education in supervising automation and have implications for driver training, human-machine interface design, and gaze-based monitoring.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1805171</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1805171</link>
        <title><![CDATA[Systematic review: inclusivity and sustainability in educational spaces through technology]]></title>
        <pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate>
        <category>Systematic Review</category>
        <author>Sebastian Auquilla Clavijo</author><author>Darwin Chuqui-Calle</author><author>Gabriel Cabrera-Coraisaca</author><author>Fabián López-Morocho</author><author>Andrea Paulina Rodríguez Zúñiga</author><author>Priscila Cedillo</author>
        <description><![CDATA[Introduction: This systematic review investigates the role of technology in advancing inclusivity and sustainability in educational spaces.
Methods: A structured methodology was applied to analyze 20 studies published between 2015 and 2025, sourced from IEEE Xplore, the ACM Digital Library, and ScienceDirect.
Results: The findings reveal that technologies such as artificial intelligence (AI), the Internet of Things (IoT), virtual and augmented reality (VR/AR), adaptive platforms, and the Edu-Metaverse enhance inclusion by personalizing learning, overcoming physical and cognitive barriers, and improving access for disadvantaged communities. Sustainability is supported through smart infrastructure, neuroarchitecture principles, and alignment with Sustainable Development Goals (SDGs) 4, 10, and 17. However, limited integration between neuroarchitecture, sustainability, and inclusion was identified, along with a lack of long-term impact assessment.
Discussion: The results highlight the potential of technology to transform educational spaces into more inclusive and sustainable environments. Nevertheless, challenges related to scalability, equitable access, and interdisciplinary integration remain. Future research should focus on developing holistic frameworks and culturally adaptive solutions to bridge these gaps.
Systematic review registration: https://osf.io/f5rxq]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1821454</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1821454</link>
        <title><![CDATA[From pictorial space to tactile form: a comparative evaluation of AI-based 2.5D reconstruction from modern artwork paintings]]></title>
        <pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Rocco Furferi</author><author>Lapo Governi</author><author>Yary Volpe</author><author>Michaela Servi</author><author>Francesco Buonamici</author>
        <description><![CDATA[Introduction: The translation of paintings into tactile 2.5D models (i.e., bas-reliefs) represents a significant advancement in improving accessibility for blind and visually impaired individuals. However, reconstructing spatial structure from a single painted image without explicit perspective is inherently ill-posed, particularly in modern and contemporary artworks where perspective, illumination, and geometry deviate from physical realism.
Methods: This study presents a comparative evaluation of three AI-based reconstruction paradigms: Monocular Depth Estimation, Large Language Models, and Large Reconstruction Models. These approaches are applied to a selected corpus of photographic, realist, and abstract artworks from the CSAC collection (Parma, Italy). An assessment framework is introduced, combining expert-based qualitative evaluation by art historians, formal geometric verification (including integrability and topological consistency), and manufacturability analysis conducted by additive manufacturing specialists.
Results: The results indicate that Large Language Model-based methods generate semantically rich and perceptually plausible bas-reliefs but lack geometric integrability and topological robustness. Monocular Depth Models perform well in capturing depth hierarchies but tend to oversmooth fine details. Large Reconstruction Models demonstrate strong structural coherence and fabrication readiness, though they often struggle with stylistic reinterpretation.
Discussion: These findings highlight the trade-offs among current AI-based reconstruction approaches for tactile bas-relief generation. While each paradigm excels in specific aspects, none achieves a complete balance among perceptual fidelity, geometric soundness, and manufacturability. Future work should focus on hybrid strategies that integrate semantic understanding with geometric consistency to better support accessible cultural heritage applications.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1774796</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1774796</link>
        <title><![CDATA[Graph-based multimodal affect recognition in children using prototypical networks]]></title>
        <pubDate>Thu, 16 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Kavita Choudhary</author><author>Gend Lal Prajapati</author>
        <description><![CDATA[Introduction: Although physiological signals such as heart rate, perspiration, and facial muscle activity are recognized as markers of emotional events, precisely classifying affective states from these data remains a significant challenge. Addressing this issue is fundamental for developing advanced human-computer interaction and assistive technologies. While emotion recognition in adults has been extensively studied, it is less understood in children, necessitating focused research.
Methods: This study introduces a multimodal framework tailored to emotion recognition in children. We used prototypical networks to learn discriminative embeddings from each physiological modality. These embeddings were then used to construct an adaptive k-nearest-neighbors (KNN) graph that models the interrelationships among affective conditions across the modalities. A graph neural network (GNN) leverages this structural representation for the final classification, improving performance by capturing the intrinsic relational context.
Results: Our proposed framework improved classification performance by 8%–10% compared to single-modality baselines and existing fusion approaches, achieving an overall accuracy of 83%.
Discussion: These results show that multimodal fusion and graph-based learning can accurately capture the complex interplay of biological signals in children, providing a more accurate approach to pediatric affective computing.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1773479</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1773479</link>
        <title><![CDATA[Eudaimonic HCI: a research agenda for designing technologies that support purpose, growth, and meaning]]></title>
        <pubDate>Thu, 09 Apr 2026 00:00:00 +0000</pubDate>
        <category>Conceptual Analysis</category>
        <author>Khaled Tarmissi</author><author>Amine Marref</author>
        <description><![CDATA[The field of Human-Computer Interaction (HCI) has increasingly turned its attention to “digital wellbeing,” yet the discourse remains narrowly focused. A significant portion of current research concentrates on mitigating the negative effects of technology—such as addiction, anxiety, and the harms of excessive screen time—or on a limited set of wellbeing domains, primarily social connection and physical health. This paper identifies a critical research gap: the need to move beyond a fragmented, nascent focus on eudaimonic wellbeing toward a systematic research agenda. Eudaimonia encompasses deeper aspects of human flourishing such as purpose, personal growth, reflection, and meaning. Through an extensive literature review, this paper confirms that while pioneering efforts exist, these eudaimonic domains remain significantly under-researched within mainstream HCI. In response, this paper proposes a new research agenda aimed at establishing “Eudaimonic HCI” as a critical sub-field. It articulates key open research questions concerning measurement, design patterns, human-AI collaboration, and the specific needs of vulnerable populations, aiming to unify and build upon current foundational work. Finally, it introduces a preliminary design framework to guide the creation of technologies that move beyond optimizing for engagement and instead aim to actively support users in living more meaningful and fulfilling lives.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1779096</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1779096</link>
        <title><![CDATA[Scalable multi-metric association rule learning for explainable book recommendations]]></title>
        <pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Adel Hidri</author><author>Suleiman Ali AlSaif</author><author>Eman AlShehri</author><author>Minyar Sassi Hidri</author>
        <description><![CDATA[Digital reading platforms have grown rapidly, increasing information overload and highlighting the need for efficient and transparent recommendation systems. This study presents a scalable hybrid framework that combines multi-metric association rule learning (ARL) with intelligent filtering strategies to provide clear, high-quality book recommendations at scale. Unlike traditional ARL-based recommenders that depend on a single metric or small datasets, our approach combines support, confidence, and lift measures to identify strong behavioral patterns while maintaining computational efficiency. The framework uses data-reduction strategies that select active users and high-impact items, transforming a sparse rating matrix into a dense, computationally tractable representation. Extensive experiments on a real-world dataset demonstrated that our method significantly outperforms collaborative filtering, neural models, and rule-mining baselines in precision, recall, and normalized discounted cumulative gain (NDCG). The resulting rules are inherently interpretable, enabling clear explanations for recommendations, which is a critical feature of modern personalized systems. This study demonstrates that ARL remains viable when designed with modern scalability constraints in mind, providing an explainable, efficient solution for digital libraries, online platforms, and large-scale recommender systems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1746674</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1746674</link>
        <title><![CDATA[Beyond the looking glass: multimodal LLM-based depth-sensing for spatial behavior modeling in media architecture]]></title>
        <pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Zhikun Wu</author><author>Ava Fatah gen. Schieck</author>
        <description><![CDATA[Large media façades are reshaping interactions in buildings and public spaces into immersive environments, yet empirical knowledge of how pedestrians behave inside these media spaces is still limited. This study introduces a fully automated pipeline for in-the-wild behavior analysis that integrates a stereo-depth camera, an object detection model with a multi-target tracking algorithm, and GPT-4o with visual reasoning. Deployed at London's immersive media building Now Arcade, the system captured 2 h of depth-enhanced video and produced more than six hundred anonymised visitor trajectories without any manual annotation. It reliably identified three recurrent behaviors: passing-by, lingering, and shooting (photographing or filming). To reveal where these actions occur, we propose Behavior Instance Density (BiD) heat-maps that project frame-level behavior instances onto a floor-plan grid of 0.5 m × 0.5 m squares. A comparative BiD study of 2-h content loops with static high-contrast imagery and dynamic low-contrast animation shows clear content-driven behavior differences. Static saturated graphics encourage longer stays and more filming at the building's entrance and exit thresholds, while dynamic darker visuals maintain a predominantly transit-oriented flow through the corridor. The proposed pipeline uses a compact, cost-effective sensing setup, safeguards privacy by discarding raw images after processing, and can be scaled for long-term or multi-site deployments. The resulting behavioral insights offer concrete guidance for media-architecture design and lay the groundwork for responsive façades that can update their digital content in real time according to observed human engagement.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1754308</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1754308</link>
        <title><![CDATA[Digital dating abuse: a Grounded Theory study]]></title>
        <pubDate>Tue, 07 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Tiago Rocha-Silva</author><author>Conceição Nogueira</author><author>Liliana Rodrigues</author>
        <description><![CDATA[Introduction: Over the past decade, research on digital dating abuse (DDA) has expanded considerably, resulting in the development of multiple constructs and measurement instruments. Despite this progress, a key theoretical question remains unresolved: how should the behavioral multidimensionality of DDA be conceptualized? Moreover, little research has examined how DDA manifests in long-distance romantic relationships (LDRRs), where partners rely almost exclusively on information and communication technologies to interact and maintain their relationship.
Methods: In response to calls for more in-depth qualitative inquiry, we employed a constructivist Grounded Theory approach to develop a model accounting for the behavioral multidimensionality of DDA. Specifically, we collected and analyzed 1,434 online posts published on Reddit (r/LongDistance) between January 2021 and June 2022, in which individuals described their experiences as perpetrators and/or victims of DDA.
Results: Findings indicate that DDA can be conceptualized as a multidimensional behavioral phenomenon encompassing two overarching dimensions: covert DDA and overt DDA. Covert DDA includes behaviors such as major changes in communication, deception, and passive control, which may be normalized within romantic relationships yet can function as precursors to more explicit forms of abuse. Overt DDA encompasses active control, hostility, and sexual coercion. The analysis revealed a continuum between covert and overt forms of DDA.
Discussion: This study contributes to the literature by extending conceptualizations of DDA to the context of LDRRs and by emphasizing the analytical and clinical relevance of covert abusive behaviors.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1780814</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1780814</link>
        <title><![CDATA[Digital anxiety: psychological effects of social media on women and a human-centered AI framework for mental health support]]></title>
        <pubDate>Thu, 02 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Aizhan Nazyrova</author><author>Muslim Sergaziyev</author><author>Assel Omarbekova</author><author>Latifa Sautbayeva</author><author>Zhandos Akimjan</author><author>Zhanar Lamasheva</author>
        <description><![CDATA[This article examines the psychological effects of social media use and explores gender-related differences, with particular attention to issues reported by women. The analysis is informed by social comparison theory and self-determination theory to explain how digital environments influence behavior and self-perception. The study focuses on psychological outcomes such as anxiety, depressive symptoms, body image dissatisfaction, and patterns of compulsive platform use. In parallel, social media platforms generate extensive behavioral data that may support the identification of mental health risks. From a computational perspective, artificial intelligence (AI) methods, including content analysis, sentiment analysis, and machine learning classification, are examined as tools for early screening of psychological distress within digital environments. A hybrid methodological approach is applied to integrate psychological analysis with data-driven AI techniques. The results indicate that social media use is associated with higher levels of self-reported psychological vulnerability among women, while AI-based methods demonstrate the capacity to detect mental health-related signals in digital data. From a computer science perspective, the study contributes to human-centered and responsible artificial intelligence by proposing an interdisciplinary computational framework that links multimodal digital data with psychologically grounded constructs. The article concludes by outlining possible applications of AI in digital well-being initiatives and discussing ethical considerations related to privacy, autonomy, and transparency.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1652980</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1652980</link>
        <title><![CDATA[Explainable AI digital twin framework for early lung disease detection]]></title>
        <pubDate>Wed, 01 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Akey Sungheetha</author><author>Rajesh Sharma R.</author><author>Oluwasegun Julius Aroba</author>
        <description><![CDATA[Introduction: Digital twin technology creates virtual replicas of physical systems, enabling real-time monitoring and predictive analytics through continuous data synchronization. This study presents an explainable artificial intelligence-enhanced digital twin framework specifically designed for the early detection of chronic lung abnormalities in urban young adults aged 20–35 years.
Methods: Analysis of 4,247 patients from the Delhi metropolitan area revealed a 29.3% prevalence of structural lung damage, including bronchiectasis, emphysema, and fibrosis. The framework integrates multimodal physiological sensors, environmental pollution monitoring, and lifestyle data through advanced fusion algorithms. Mathematical modeling incorporates bronchial resistance Rb = 2.34 ± 0.45 cmH2O/L/s, lung compliance CL = 0.187 ± 0.032 L/cmH2O, and deterioration rate λdet = 0.0156 ± 0.0023 per month from longitudinal monitoring. Blockchain integration ensures data security with hash validation efficiency ηhash = 0.987 and real-time processing latency τresp = 127.3 ± 15.7 ms. Environmental factor integration, including the air quality index AQI = 247 ± 67, enables personalized risk stratification accuracy βrisk = 0.876 ± 0.045.
Results: Core performance metrics demonstrate explainability coefficient ξexp = 0.847 ± 0.023, prediction accuracy αpred = 0.923 ± 0.034, and early detection capability extending tearly = 6.7 ± 1.2 months before clinical symptoms. Validation across 1,847 test subjects achieved sensitivity Searly = 0.891, specificity Spearly = 0.876, and positive predictive value (PPV) = 0.834. Statistical analysis confirmed significant improvements in diagnostic timing (p < 0.001), intervention effectiveness (p < 0.001), and patient outcomes compared to conventional approaches.
Discussion: Clinical implementation demonstrates a 68.4% reduction in diagnostic delays, a 73.6% improvement in intervention timing, and annual healthcare cost savings of ΔC = $2,847 per patient.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1822456</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1822456</link>
        <title><![CDATA[Editorial: AI innovations in education: adaptive learning and beyond]]></title>
        <pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate>
        <category>Editorial</category>
        <author>Luigi Gallo</author><author>Maria Concetta Carruba</author><author>Antonino Ferraro</author><author>Henrik Hautop Lund</author><author>Angelo Rega</author><author>Stefano Triberti</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1800319</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1800319</link>
        <title><![CDATA[When buildings learn how we move: embodied human–building interaction in the age of machine intelligence]]></title>
        <pubDate>Fri, 27 Mar 2026 00:00:00 +0000</pubDate>
        <category>Opinion</category>
        <author>Benoît G. Bardy</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1757509</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1757509</link>
        <title><![CDATA[Interaction design methods for data-intelligent museum exhibitions: an embodied cognition perspective]]></title>
        <pubDate>Wed, 25 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Husheng Pan</author><author>Cuiting Kong</author><author>Lie Zhang</author>
        <description><![CDATA[Exhibitions in the data-intelligent era face practical challenges, such as an overemphasis on form over content and insufficient emotional resonance, as well as the lack of a systematic theoretical framework to guide practice. Addressing these issues, this paper draws on embodied cognition and dynamical systems theory to explore the internal mechanisms of interaction design for museum exhibitions from the perspective of cognitive generation. It identifies four core elements of such interaction design, namely the Body Perception Layer, the Body Action Layer, the Environmental Construction Layer, and the Meaning Construction Layer, and on this basis constructs a cyclic embodied interaction design framework for museums (the PCAE model) that reveals the dynamic flow of information between visitors and the data-intelligent exhibition environment. Using the data-intelligent interactive exhibit “Dialogue With the Master” at the Confucius Museum in China as a case study, the paper validates the feasibility and scientific soundness of the proposed framework. This framework introduces a new embodied, cross-disciplinary theoretical perspective for research on museum interaction design in the data-intelligent era and provides an operational design tool that gives designers a clear pathway for optimizing interactive experiences, offering substantial value for both design practice and theoretical exploration.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1678653</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1678653</link>
        <title><![CDATA[ICT tools for autism spectrum disorder interventions linked with parental involvement in children’s education and support]]></title>
        <pubDate>Fri, 20 Mar 2026 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Sevasti Kapsi</author>
        <description><![CDATA[Parental involvement or engagement is essential for the holistic development of children, especially children with autism spectrum disorder (ASD). Several ASD interventions integrate parental involvement, with positive outcomes. ICT tools support children with ASD in their daily lives by strengthening social-emotional, communication, and educational skills. This literature review examines the relationship between ICTs and parental involvement (PI) for children with ASD. Specifically, it describes the most frequently reported theoretical models of PI and identifies effective interventions that integrate ICTs with parental involvement. Combining powerful interventions could lead to better therapeutic and educational outcomes. Results from studies show that, despite methodological limitations, ICTs may support parental engagement in ASD interventions, helping both children with ASD and their parents. Future research could test new synthesized protocols for effectiveness in ASD treatment.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1768435</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1768435</link>
        <title><![CDATA[Lay belief about AI and its decision-making]]></title>
        <pubDate>Tue, 17 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Suhas Vijayakumar</author><author>W. Yuna Yang</author><author>David DeFranza</author>
        <description><![CDATA[This paper examines people’s lay beliefs about the mind of an artificial intelligence (AI) as a decision-making agent and how these beliefs shape an individual’s own decision-making style in response. Two studies confirm that people consistently judge AI as rational and reason-driven in its decision-making, in contrast to viewing humans as emotionally driven. In a subsequent study, participants engaged in an economic ultimatum game. When participants thought they were interacting with an AI (vs. a human) competitor, they adopted a more economically rational decision-making style, moving closer to the game-theoretic optimum. This shift in decision-making style was mediated by participants’ belief in the rational nature of AI. The findings suggest that perceptions of AI’s decision-making tendencies can influence the cognitive strategies adopted in response, with potential implications for human-AI interactions.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1686763</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1686763</link>
        <title><![CDATA[The data-driven voice-body in performance: AI voices as materials, mediators, and gifts]]></title>
        <pubdate>2026-03-17T00:00:00Z</pubdate>
        <category>Conceptual Analysis</category>
        <author>Jonathan Chaim Reus</author>
        <description><![CDATA[Data-driven, realistic and identity-bearing AI voice technologies have proliferated in recent years. Voice, a multiply embodied phenomenon situated within and across human bodies in space and time, is deeply disrupted by the disembodying tendencies of AI voice technologies and their processes of data collection and data creation, resulting in the need for a re-evaluation of perceptual, cognitive and cultural factors. This article addresses this need by synthesizing ideas from embodied cognition, voice studies, and material anthropology to analyze real-time, AI-mediated voice as a form of embodied cognition that is an intersubjective, extended, materially and socially distributed phenomenon. Through the case study of the live performance iː ɡoʊ weɪ, this article makes three contributions: (1) it articulates AI-mediated vocal identity as a process of continual reconfiguration across human and machine agencies; (2) it foregrounds audience perception as an active force in stabilising and destabilising emergent voice–body assemblages; and (3) it proposes a speculative ethical framework for vocal data practice grounded in the notion of voice as gift.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1652190</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1652190</link>
        <title><![CDATA[Explainable AI framework for psilocybin depression treatment optimization]]></title>
        <pubdate>2026-03-16T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Akey Sungheetha</author><author>R. Rajesh Sharma</author><author>Oluwasegun Julius Aroba</author><author>Sheila Mahapatra</author><author>P. D. Mahendhiran</author>
        <description><![CDATA[IntroductionThis computational modeling study introduces a novel Explainable Artificial Intelligence (XAI) framework for optimizing single-dose psilocybin treatment protocols through personalized intervention modeling using publicly available mental health datasets. All results presented are derived from novel simulated data and predictive modeling only, not from real-time clinical implementations or actual patient treatments.MethodsThe mathematical optimization model integrates digital twin technologies, multimodal depression detection systems, and Bayesian optimization algorithms to create comprehensive computational patient profiles with temporal resolution processing capabilities at 250 Hz sampling frequency. Validation employed three publicly available datasets: (1) the Psilocybin Precision Functional Mapping dataset from OpenNeuro containing neuroimaging data from 7 participants, (2) the MODMA multimodal mental disorder dataset with 53 participants including electroencephalography and audio signals, and (3) a meta-analytic psilocybin therapy outcomes dataset containing aggregated results from 10 clinical trials. The framework incorporates pharmacokinetic modeling with an absorption rate constant of 0.45 per hour and an elimination rate constant of 0.23 per hour, receptor occupancy dynamics based on a dissociation constant of 6.3 nanomolar, and simulated real-time monitoring protocols processing physiological parameters including heart rate variability, blood pressure measurements, and cortisol levels at a 1 Hz frequency.ResultsThe simulated computational model demonstrates significant improvements in prediction accuracy, reaching 94.7%, and therapeutic transparency, achieving 89.3% explainability scores. Simulated validation demonstrates computational precision of 92.8% in predicting treatment response patterns across diverse patient populations in silico. The proposed computational methodology addresses key challenges in psychedelic-assisted therapy modeling through interpretable artificial intelligence models, achieving 96.2% computational safety index scores and 91.5% algorithmic compliance metrics in simulation environments. Energy-efficient computational architecture achieves 73.4% carbon footprint reduction through optimized algorithm design and sparse matrix representations.DiscussionThis study presents a theoretical computational framework for modeling therapeutic outcomes through simulation and prediction, establishing a foundation for future clinical validation through prospective randomized controlled trials. The framework supports sustainable digital mental healthcare delivery systems compatible with renewable energy infrastructure. All findings represent computational predictions and simulated scenarios requiring extensive clinical validation before any practical application.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1799323</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1799323</link>
        <title><![CDATA[Next-Gen orientation: supporting international students with generative AI NPCs in VR]]></title>
        <pubdate>2026-03-12T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Santiago Berrezueta-Guzman</author><author>Stefan Wagner</author>
        <description><![CDATA[Educational Virtual Reality (VR) provides immersive learning environments, yet most contemporary applications rely on pre-scripted Non-Player Characters (NPCs) that offer limited personalization and rigid interaction paths. This study presents the technical implementation and evaluation of TUMSphere, a VR orientation platform designed to facilitate the academic and cultural transition of international students. We propose a modular architecture that integrates Large Language Models (LLMs) with Unreal Engine via the Conversational AI (Convai) platform, enabling embodied NPCs to provide real-time speech recognition, context-aware dialogue, and autonomous spatial navigation. To validate this approach, a mixed-methods user study (N = 24) was conducted with international students to assess system latency, usability, and pedagogical efficacy. Results demonstrate a high System Usability Scale (SUS) score of 76.4 (SD = 12.5) and robust task completion rates, reaching 100% for spatial navigation and 96% for information retrieval. While technical benchmarking revealed an average end-to-end latency of 2.90s for complex, retrieval-heavy queries, qualitative findings indicate that users find this “latency-presence trade-off” acceptable in exchange for the pedagogical benefits. Crucially, participants reported a significant reduction in social anxiety when practicing language and administrative queries with AI agents compared to human interlocutors. These findings suggest that embodied, generative AI NPCs can serve as a scalable, low-pressure “social sandbox” that effectively redefines student support systems and orientation strategies in higher education.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1812776</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1812776</link>
        <title><![CDATA[Editorial: Enhancing parental involvement in special education through digital technologies]]></title>
        <pubdate>2026-03-10T00:00:00Z</pubdate>
        <category>Editorial</category>
        <author>Irene Chaidi</author><author>Athanasios Drigas</author><author>Charalampos Karagiannidis</author>
        <description></description>
      </item>
      </channel>
    </rss>