<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Computer Science | Theoretical Computer Science section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/computer-science/sections/theoretical-computer-science</link>
        <description>RSS Feed for Theoretical Computer Science section in the Frontiers in Computer Science journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Sat, 02 May 2026 03:03:17 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1748038</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1748038</link>
        <title><![CDATA[Algorithm-based radio labeling for optimal channel assignment in outerplanar graphs]]></title>
        <pubDate>Wed, 29 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Baskar Mari</author><author>Ravi Sankar Jeyaraj</author>
        <description><![CDATA[Introduction: Radio labeling of graphs extends the channel assignment problem by assigning non-negative integers to the vertices of a connected graph G such that |h(u) − h(v)| ≥ diam(G) + 1 − d(u, v) for every pair of distinct vertices u and v. The objective is to minimize the span, leading to the radio number rn(G). Methods: We consider a class of outerplanar graphs with vertex set {u1, v1, x1, …, xn, y1, …, yn} and a structured edge set combining path and matching edges. Analytical bounds and a constructive labeling algorithm are developed. Results: Lower and upper bounds for rn(G) are derived, and the proposed algorithm yields feasible radio labelings with near-optimal span. Discussion: The results highlight that the structure of outerplanar graphs enables efficient labeling strategies, and they provide a basis for extending radio labeling techniques to broader graph classes.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1746591</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1746591</link>
        <title><![CDATA[Toward a novel measure of user trust in XAI systems]]></title>
        <pubDate>Tue, 28 Apr 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Miquel Miró-Nicolau</author><author>Gabriel Moyà-Alcover</author><author>Antoni Jaume-i-Capó</author><author>Manuel González-Hidalgo</author><author>Adel Ghazel</author><author>Maria Gemma Sempere Campello</author><author>Juan Antonio Palmer Sancho</author>
        <description><![CDATA[The increasing reliance on Deep Learning models, combined with their inherent lack of transparency, has spurred the development of a novel field of study known as eXplainable AI (XAI). XAI methods aim to enhance end-users' trust in automated systems by providing insights into the rationale behind their decisions. This paper presents a novel measure of user trust in XAI systems, allowing their refinement. Our proposed metric combines performance metrics and trust indicators from an objective perspective. To validate this novel methodology, we conducted three case studies showing an improvement over the state of the art, with increased sensitivity to different scenarios.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1757450</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1757450</link>
        <title><![CDATA[Semantic foundations for digital twins: the contribution of ontological analysis]]></title>
        <pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Mohammed Elhajj</author>
        <description><![CDATA[Digital Twins (DTs) are revolutionizing industries by enabling real-time simulations, data-driven decision-making, and enhanced operational efficiency. However, their integration and scalability remain challenging due to the complexity of multi-domain systems, heterogeneous data sources, and semantic inconsistencies. This paper proposes an ontology-driven DT framework that leverages the Web Ontology Language (OWL) and Description Logic (DL) to enhance semantic reasoning and data representation and to facilitate system interoperability through standards-aligned semantic mapping. A distributed ontology architecture ensures scalability and adaptability across diverse industrial applications. The results demonstrate a 60% reduction in integration time, a 75% decrease in error rates, and improved decision-making accuracy, highlighting the superiority of the ontology-based approach over traditional DTs. Comparative analysis underscores its effectiveness in addressing interoperability, semantic ambiguity, and system maintenance challenges. The findings emphasize the critical role of ontological analysis in developing self-adaptive, cross-domain DT systems. Future research will explore automated ontology generation, AI-driven semantic reasoning, and user-centric design to further enhance ontology-powered DT ecosystems.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1804000</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1804000</link>
        <title><![CDATA[Correction: OntoTrack: a linked open data based solution to track mutual citation networks in research publications]]></title>
        <pubDate>Thu, 26 Feb 2026 00:00:00 +0000</pubDate>
        <category>Correction</category>
        <author>Mark Daly</author><author>Muhammad Ahtisham Aslam</author><author>Ronja Froelian</author><author>Sonja Schimmler</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1717711</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1717711</link>
        <title><![CDATA[Algorithmic self-repair: frontiers in fault-tolerant computation]]></title>
        <pubDate>Thu, 12 Feb 2026 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Christine Markarian</author><author>Alavikunhu Panthakkan</author>
        <description><![CDATA[How can algorithms continue to function when confronted with faults, noise, and malicious behavior? This question lies at the heart of resilient computation, a challenge addressed by multiple traditions but rarely examined through a unified lens. In this article, we introduce the concept of algorithmic self-repair as a framework for understanding how algorithms detect, mitigate, and recover from failures. We compare five major classes of algorithmic self-repair: (1) self-stabilizing algorithms that guarantee convergence from arbitrary states; (2) self-healing graph algorithms that preserve connectivity under dynamic failures; (3) error-resilient online algorithms that sustain competitiveness despite uncertain or corrupted inputs; (4) redundancy-based and probabilistic repair techniques that achieve robustness through replication or stochastic correction; and (5) Byzantine fault-tolerant algorithms that maintain correctness even in the presence of adversarial participants. By consolidating these approaches into a shared taxonomy, we highlight their guiding principles, strengths, and trade-offs. The result is not merely a survey but a structured foundation and roadmap for advancing resilient computation, positioning algorithmic self-repair as a frontier where fault tolerance becomes a defining design principle of algorithms.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1636097</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1636097</link>
        <title><![CDATA[OntoTrack: a linked open data based solution to track mutual citation networks in research publications]]></title>
        <pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Mark Daly</author><author>Muhammad Ahtisham Aslam</author><author>Ronja Froelian</author><author>Sonja Schimmler</author>
        <description><![CDATA[Context: Scientific publications are vital for a researcher's scientific career. Citations of scientific work by other researchers are considered evidence of scientific and technical strength and of the acceptance of one's scientific contributions. A higher citation count directly raises the “h-index,” which is evidence of a strong scientific profile. Due to its direct impact on scientific profiles, citation manipulation (unnatural citation) is becoming a major concern in academia and industry. Methods: To address this challenge, we present OntoTrack, an ontology-based solution that can be used to detect and identify potential unnatural citations and citation networks within the scientific literature. In this paper, we present the complete architecture of the OntoTrack solution. We also present the OntoTrack data model and ontology with the key attributes and parameters that play an important role in tracking citation networks. The OntoTrack ontology is equipped with a comprehensive set of rules defined using the Semantic Web Rule Language (SWRL). These rules enhance the reasoning capabilities of OntoTrack and facilitate smart identification of unnatural citation indicators. A proof-of-concept dataset is produced as part of this work and used to evaluate the effectiveness and precision of the OntoTrack solution in detecting citation anomalies. Results: We evaluate the OntoTrack solution by defining a comprehensive set of Competency Questions (CQs) and executing them against the OntoTrack SPARQL Protocol and RDF Query Language (SPARQL) endpoint. The results of the evaluations show that OntoTrack can successfully identify various forms of unnatural citation, including self-citations, citation cartels, and citation manipulation among researchers. The results also show that an ontology-based approach provides a sustainable and efficient alternative to traditional machine-learning methods, which often require extensive computational resources. Discussion: The findings suggest that ontology-based systems such as OntoTrack can enhance transparency and integrity in academic research by providing a robust mechanism for monitoring citation practices.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1744088</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1744088</link>
        <title><![CDATA[Correction: Benchmarking quantum annealing with maximum cardinality matching problems]]></title>
        <pubDate>Mon, 24 Nov 2025 00:00:00 +0000</pubDate>
        <category>Correction</category>
        <author>Daniel Vert</author><author>Madita Willsch</author><author>Berat Yenilen</author><author>Renaud Sirdey</author><author>Stéphane Louise</author><author>Kristel Michielsen</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1617597</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1617597</link>
        <title><![CDATA[Deep federated learning: a systematic review of methods, applications, and challenges]]></title>
        <pubDate>Tue, 04 Nov 2025 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Lakshan Cooray</author><author>Janaka Sendanayake</author><author>Pramuditha Vithanaarachchi</author><author>Y. H. P. P. Priyadarshana</author>
        <description><![CDATA[Federated Learning (FL) represents a paradigm shift in machine learning, enabling collaborative model training on decentralized data while preserving user privacy. However, the transition from theory to real-world application is impeded by significant challenges, including high communication costs, statistical and system heterogeneity, and persistent privacy vulnerabilities. These barriers critically limit the performance, scalability, and security of FL systems. This paper provides a systematic review of the state-of-the-art solutions developed to address these fundamental obstacles. The review analyzes core methodological advancements, including advanced model aggregation methods, techniques to enhance communication efficiency, such as model compression and decentralized training, and strategies to combat statistical heterogeneity arising from non-IID data. Furthermore, it delves into emerging paradigms like Federated Meta-Learning and Federated Reinforcement Learning, alongside advanced architectural models such as hierarchical and blockchain-based systems. The practical impact of these advancements is contextualized through a review of key application domains, including healthcare, vehicular networks, and the Internet of Things. A benchmark analysis is presented to assess the practical efficacy of these diverse techniques. In conclusion, this work synthesizes the critical trade-offs inherent in FL systems and highlights key directions for future research, offering a comprehensive guide for researchers and practitioners in this rapidly evolving field.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1678976</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1678976</link>
        <title><![CDATA[Machine learning-based cloud resource allocation algorithms: a comprehensive comparative review]]></title>
        <pubDate>Tue, 21 Oct 2025 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Deep Bodra</author><author>Sushil Khairnar</author>
        <description><![CDATA[Cloud resource allocation has emerged as a major challenge in modern computing environments, with organizations struggling to manage complex, dynamic workloads while optimizing performance and cost efficiency. Traditional heuristic approaches prove inadequate for handling the multi-objective optimization demands of existing cloud infrastructures. This paper presents a comparative analysis of state-of-the-art artificial intelligence and machine learning algorithms for resource allocation. We systematically evaluate 10 algorithms across four categories: Deep Reinforcement Learning approaches, Neural Network architectures, Traditional Machine Learning enhanced methods, and Multi-Agent systems. Analysis of published results demonstrates significant performance improvements across multiple metrics including makespan reduction, cost optimization, and energy efficiency gains compared to traditional methods. The findings reveal that hybrid architectures combining multiple artificial intelligence and machine learning techniques consistently outperform single-method approaches, with edge computing environments showing the highest deployment readiness. Our analysis provides critical insights for both academic researchers and industry practitioners seeking to implement next-generation cloud resource allocation strategies in increasingly complex and dynamic computing environments.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1649354</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1649354</link>
        <title><![CDATA[Left-deep join order selection with higher-order unconstrained binary optimization on quantum computers]]></title>
        <pubDate>Tue, 07 Oct 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Valter Uotila</author>
        <description><![CDATA[Join order optimization is among the most crucial query optimization problems, and its central position is also evident in the new research field where quantum computing is applied to database optimization and data management. In this field, join order optimization is the most studied database problem, typically tackled with a quadratic unconstrained binary optimization model, which is solved using various meta-heuristics, such as quantum and digital annealing, the quantum approximate optimization algorithm, or the variational quantum eigensolver. In this study, we continue developing quantum computing techniques for left-deep join order optimization by presenting three novel quantum optimization algorithms. These algorithms are based on a higher-order unconstrained binary optimization model, which is a generalization of the quadratic model and has not previously been applied to database problems. Theoretically, these optimization problems naturally map to universal quantum computers and quantum annealers. Compared to previous studies, two of our algorithms are the first quantum algorithms to model the join order cost function precisely. We prove theoretical bounds by showing that these two methods encode the same plans as the dynamic programming algorithm with respect to the query graph, which provides the optimal result up to cross products. The third algorithm achieves plans at least as good as those of the greedy algorithm with respect to the query graph. These results establish a meaningful theoretical connection between classical and quantum algorithms for selecting left-deep join orders. To demonstrate the practical usability of our algorithms, we have conducted an extensive experimental evaluation on thousands of clique, cycle, star, tree, and chain query graphs using both quantum and classical solvers.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1693260</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1693260</link>
        <title><![CDATA[Editorial: Realizing quantum utility: grand challenges of secure & trustworthy quantum computing]]></title>
        <pubDate>Mon, 29 Sep 2025 00:00:00 +0000</pubDate>
        <category>Editorial</category>
        <author>Betis Baheri</author><author>Edoardo Giusto</author><author>Shuai Xu</author><author>Kaitlin N. Smith</author><author>Ed Younis</author><author>Phuong Cao</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1520903</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1520903</link>
        <title><![CDATA[Dependable classical-quantum computing systems engineering]]></title>
        <pubDate>Wed, 16 Jul 2025 00:00:00 +0000</pubDate>
        <category>Hypothesis and Theory</category>
        <author>Edoardo Giusto</author><author>Santiago Núñez-Corrales</author><author>Kaitlin N. Smith</author><author>Phuong Cao</author><author>Ed Younis</author><author>Paolo Rech</author><author>Flavio Vella</author><author>Betis Baheri</author><author>Alessandro Cilardo</author><author>Bartolomeo Montrucchio</author><author>Weiwen Jiang</author><author>Shuai Xu</author><author>Samudra Dasgupta</author><author>Ravishankar K. Iyer</author><author>Travis S. Humble</author>
        <description><![CDATA[Increasing evidence suggests quantum computing (QC) complements traditional High-Performance Computing (HPC) by leveraging its unique capabilities, leading to the emergence of a new, hybrid paradigm, QHPC. However, this integration introduces new challenges, with dependability (defined by reproducibility, resiliency, and security and privacy) emerging as a central concern for building trustworthy systems that provide an advantage to users. This paper proposes a framework for dependable QHPC system design, organized around these three pillars. We identify integration challenges, anticipate roadblocks, and highlight productive synergies across QC, HPC, cloud platforms, and network security. Drawing from both classical computing principles and quantum-specific insights, we present a roadmap for co-design that supports robust hybrid architectures. Our approach offers concrete metrics for assessing dependability, provides design guidance for engineers working at the QC-HPC interface, and surfaces new engineering questions around complexity, scale, and fault tolerance. Ultimately, designing for dependability is key to realizing practical, scalable QHPC systems and accelerating the broader quantum ecosystem capable of translating quantum promises into actual application delivery.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1521059</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1521059</link>
        <title><![CDATA[Trusted execution environments for quantum computers]]></title>
        <pubDate>Wed, 18 Jun 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Theodoros Trochatos</author><author>Chuanqi Xu</author><author>Sanjay Deshpande</author><author>Yao Lu</author><author>Yongshan Ding</author><author>Jakub Szefer</author>
        <description><![CDATA[The cloud-based environments in which today's and future quantum computers will operate raise concerns about the security and privacy of users' intellectual property, whether code, data, or both. Without dedicated security protections, quantum circuits submitted to cloud-based quantum computer providers could be accessed by the cloud provider or by malicious insiders working in the cloud provider's data centers. Furthermore, data embedded in these circuits can similarly be accessed, as it is encoded using quantum gates inside the circuit. This study presents various hardware and architecture modifications that could be deployed in today's quantum computers, based on superconducting qubits, to protect both code and data from potentially untrusted quantum computer providers or malicious insiders. Motivated by existing Trusted Execution Environments (TEEs) in classical computers, this study introduces the notion of Quantum Trusted Execution Environments (QTEEs), which leverage trusted hardware to hide or obfuscate quantum circuits executing on a remote, cloud-based quantum computer. This study presents multiple approaches to the design of QTEEs and considers hardware and architecture as well as the system software and operating system support necessary for the realization of QTEEs. Overall, this study presents three hardware architectures, namely QC-TEE, SoteriaQ, and CASQUE, designed to protect users' circuits and data from potential threats originating from malicious quantum computer cloud providers or insider attackers. This study further outlines a roadmap for other possible QTEEs that can be developed in the future, to account for different threat models or to support different types of quantum computer architectures.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1523699</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1523699</link>
        <title><![CDATA[Evaluating large language models: a systematic review of efficiency, applications, and future directions]]></title>
        <pubDate>Tue, 27 May 2025 00:00:00 +0000</pubDate>
        <category>Systematic Review</category>
        <author>Yasmeen Saleh</author><author>Manar Abu Talib</author><author>Qassim Nasir</author><author>Fatima Dakalbab</author>
        <description><![CDATA[Large language models have been applied in several fields, such as medicine, education, finance, and law. They integrate into those fields through their abilities in natural language processing, text generation, question answering, and several other use cases that benefit human interaction and decision-making. Beyond their applications, it is imperative to acknowledge how large language models differ in aspects such as their types, setups, parameters, and performance; this helps us understand how each large language model can be utilized to its fullest extent for maximum benefit. In this systematic literature review, we explore each of these aspects in depth. Finally, we conclude with insights and future directions for advancing the efficiency and applicability of large language models.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1519212</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1519212</link>
        <title><![CDATA[Shallow implementation of quantum fingerprinting with application to quantum finite automata]]></title>
        <pubDate>Fri, 16 May 2025 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Mansur Ziiatdinov</author><author>Aliya Khadieva</author><author>Kamil Khadiev</author>
        <description><![CDATA[Quantum fingerprinting is a technique that maps a classical input word to a quantum state. The obtained quantum state is much shorter than the original word, and its processing uses fewer resources, making it useful in quantum algorithms, communication, and cryptography. One example of quantum fingerprinting is the quantum automata algorithm for the languages MOD_p = {a^(i·p) ∣ i ≥ 0}, where p is a prime number. However, implementing such an automaton on current quantum hardware is not efficient. Quantum fingerprinting maps a word x ∈ {0, 1}^n of length n to a state |ψ(x)〉 of O(log n) qubits using O(n) unitary operations. Computing a quantum fingerprint using all available qubits of current quantum computers is infeasible because of the large number of quantum operations required. To make quantum fingerprinting practical, we should optimize the circuit for depth instead of width, in contrast to previous works. We propose explicit methods of quantum fingerprinting based on tools from additive combinatorics, such as generalized arithmetic progressions (GAPs), and prove that these methods provide circuit depth comparable to a probabilistic method. We also compare our method to prior work on explicit quantum fingerprinting methods. We provide a series of numerical experiments with an implementation of the quantum automaton for the MOD_17 language on noisy simulators of IBMQ quantum devices. We show that the shallow implementation based on GAPs produces results with much smaller computational error than the standard deep circuit implementation, even though on an ideal quantum computational device the opposite situation arises. We conclude that the shallow circuit for the quantum automaton is better suited to near-future quantum computational devices.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1584114</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1584114</link>
        <title><![CDATA[Quantum algorithms and complexity in healthcare applications: a systematic review with machine learning-optimized analysis]]></title>
        <pubDate>Wed, 07 May 2025 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Agostino Marengo</author><author>Vito Santamato</author>
        <description><![CDATA[This paper presents a systematic review of quantum computing approaches to healthcare-related computational problems, with an emphasis on quantum-theoretical foundations and algorithmic complexity. We adopt an optimized machine learning methodology, combining Particle Swarm Optimization (PSO) with Latent Dirichlet Allocation (LDA), to analyze the literature and identify key research themes at the intersection of quantum computing and healthcare. This approach revealed two primary research directions: (1) quantum computing for artificial intelligence in healthcare, and (2) quantum computing for healthcare data security. A total of 63 peer-reviewed studies were analyzed, with 41 categorized under the first direction and 22 under the second. We highlight the theoretical advances underlying these domains, from novel quantum machine learning algorithms for biomedical data to quantum cryptographic protocols for securing medical information. A gradient boosting classifier further validates our taxonomy by reliably distinguishing between the two categories of research, demonstrating the robustness of the identified themes, with an accuracy of 84.2%, a precision of 88.9%, a recall of 84.2%, an F1-score of 84.5%, and an area under the curve of 0.875. Interpretability analysis using Local Interpretable Model-Agnostic Explanations (LIME) exposes distinguishing features of each category (e.g., references to biomedical applications versus blockchain-based security frameworks), offering transparency into the literature-driven categorization, with the latter showing the most significant contributions to topic assignment (ranging from −0.133 to +0.128). Our findings underscore that quantum algorithms offer significant potential to enhance data security, optimize complex diagnostic computations, and provide computational speedups for health informatics. We also identify outstanding challenges, such as the need for scalable quantum algorithms and error-tolerant hardware integration, that must be addressed to translate these theoretical advancements into real-world clinical impact. This study emphasizes the importance of hybrid quantum-classical models and cross-disciplinary research to bridge the gap between cutting-edge quantum computing theory and its practical applications in healthcare.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1557977</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1557977</link>
        <title><![CDATA[Model checking deep neural networks: opportunities and challenges]]></title>
        <pubDate>Fri, 25 Apr 2025 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Zohra Sbai</author>
        <description><![CDATA[Deep neural networks (DNNs) are extensively used in current and future manufacturing, transportation, and healthcare systems. The widespread use of neural networks in highly safety-critical applications has made it necessary to prevent catastrophic issues from arising during prediction. An autonomous car misreading a traffic sign, or an incorrect analysis of medical records, could put human lives in danger. With this awareness, the number of studies related to deep neural network verification has increased dramatically in recent years. In particular, formal guarantees regarding the behavior of a DNN under particular settings are provided by model checking, which is crucial in safety-critical applications where network output errors could have disastrous effects. Model checking is an effective approach for confirming that neural networks perform as planned by comparing them against clearly stated properties. This paper highlights the critical need for model-checking verification of deep neural networks before relying on them in real-world applications, and presents the challenges associated with applying it. It examines state-of-the-art research and draws out the most prominent future directions in the model checking of neural networks.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1504725</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1504725</link>
        <title><![CDATA[Plagiarism types and detection methods: a systematic survey of algorithms in text analysis]]></title>
        <pubDate>Mon, 17 Mar 2025 00:00:00 +0000</pubDate>
        <category>Systematic Review</category>
        <author>Altynbek Amirzhanov</author><author>Cemil Turan</author><author>Alfira Makhmutova</author>
        <description><![CDATA[Plagiarism in academic and creative writing continues to be a significant challenge, driven by the exponential growth of digital content. This paper presents a systematic survey of the various types of plagiarism and the detection algorithms employed in text analysis. We categorize plagiarism into distinct types, including verbatim, paraphrasing, translation, and idea-based plagiarism, discussing the nuances that make detection complex. This survey critically evaluates the existing literature, contrasting traditional methods such as string matching with advanced machine learning, natural language processing, and deep learning approaches. We highlight notable works on cross-language plagiarism detection, source code plagiarism, and intrinsic detection techniques, identifying their contributions and limitations. Additionally, this paper explores emerging challenges, such as the detection of AI-generated content. By synthesizing the current landscape and emphasizing recent advancements, we aim to guide future research directions and enhance the robustness of plagiarism detection systems across various domains.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1504523</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1504523</link>
        <title><![CDATA[Urban sentiment mapping using language and vision models in spatial analysis]]></title>
        <pubdate>2025-03-14T00:00:00Z</pubdate>
        <category>Original Research</category>
        <author>Jayedi Aman</author><author>Timothy C. Matisziw</author>
        <description><![CDATA[Introduction: Understanding how urban environments shape public sentiment is crucial for urban planning. Traditional methods, such as surveys, often fail to capture evolving sentiment dynamics. This study leverages language and vision models to assess the influence of urban features on public emotions across spatial contexts and timeframes. Methods: A two-phase computational framework was developed. First, sentiment inference used a BERT-based model to extract sentiment from geotagged social media posts. Second, urban context inference applied PSPNet and Mask R-CNN to street view imagery to quantify urban design features, including visual enclosure, human scale, and streetscape complexity. The study integrates publicly available data and spatial simulation techniques to examine sentiment-urban form relationships over time. Results: The analysis reveals that greenery and pedestrian-friendly infrastructure positively influence sentiment, while excessive openness and fenced-off areas correlate with negative sentiment. A hotspot analysis highlights shifting sentiment patterns, particularly during societal disruptions like the COVID-19 pandemic. Discussion: Findings emphasize the need to incorporate public sentiment into urban simulations to create inclusive, safe, and resilient environments. The study provides data-driven insights for planners, supporting human-centered design interventions that enhance urban livability.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1528985</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1528985</link>
        <title><![CDATA[Defining quantum-ready primitives for hybrid HPC-QC supercomputing: a case study in Hamiltonian simulation]]></title>
        <pubdate>2025-03-12T00:00:00Z</pubdate>
        <category>Systematic Review</category>
        <author>Andrea Delgado</author><author>Prasanna Date</author>
        <description><![CDATA[As computational demands in scientific applications continue to rise, hybrid high-performance computing (HPC) systems integrating classical and quantum computers (HPC-QC) are emerging as a promising approach to tackling complex computational challenges. One critical area of application is Hamiltonian simulation, a fundamental task in quantum physics and other large-scale scientific domains. This paper investigates strategies for quantum-classical integration to enhance Hamiltonian simulation within hybrid supercomputing environments. By analyzing computational primitives in HPC allocations dedicated to these tasks, we identify key components in Hamiltonian simulation workflows that stand to benefit from quantum acceleration. To this end, we systematically break down the Hamiltonian simulation process into discrete computational phases, highlighting specific primitives that could be effectively offloaded to quantum processors for improved efficiency. Our empirical findings provide insights into system integration, potential offloading techniques, and the challenges of achieving seamless quantum-classical interoperability. We assess the feasibility of quantum-ready primitives within HPC workflows and discuss key barriers such as synchronization, data transfer latency, and algorithmic adaptability. These results contribute to the ongoing development of optimized hybrid solutions, advancing the role of quantum-enhanced computing in scientific research.]]></description>
      </item>
      </channel>
    </rss>