<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Computer Science | Software section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/computer-science/sections/software</link>
        <description>RSS Feed for Software section in the Frontiers in Computer Science journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator,version:1</generator>
        <pubDate>Mon, 27 Apr 2026 04:39:05 GMT</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1845840</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1845840</link>
        <title><![CDATA[Editorial: Software specification and verification: models and tools]]></title>
        <pubDate>Wed, 22 Apr 2026 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Vincenzo Arceri</author><author>Nabendu Chaki</author><author>Agostino Cortesi</author><author>Novarun Deb</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1655377</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1655377</link>
        <title><![CDATA[Whole-value analysis by abstract interpretation]]></title>
        <pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Luca Negrini</author>
        <description><![CDATA[Value analysis is the task of understanding what concrete values a program might compute for each variable or memory region. Historically, research focused mostly on numerical analysis (i.e., value analysis of programs manipulating numeric values), while string analyses have received wider attention in the last two decades. String analyses present a key challenge: reasoning about strings entails reasoning about integer values either used as arguments to string operations (e.g., evaluating a substring) or returned by string operations (e.g., calculating the length of a string). Traditionally, string analyses were formalized with respect to a specific numeric analysis, usually considering constant values or their possible ranges, tailoring definitions, semantic proofs, and implementations to that particular combination, hence hindering the adoption of the analyses in different contexts. This study presents a modular framework to define whole-value analyses (that is, combinations of numeric analyses, string analyses, and possibly other value types computed by a program) by Abstract Interpretation. The framework defines information exchange between the different analyses in the form of abstract constraints, allowing each analysis to operate given only a generic and analysis-independent description of the abstract values computed by other analyses. Adopting such a framework (i) ensures that soundness proofs are still valid when changing the combination of domains used, and (ii) eases implementation and experimentation of different combinations of value analyses, simplifying comparisons between different scientific contributions and augmenting the set of domains an abstract interpreter can use to analyze a program.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1723480</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1723480</link>
        <title><![CDATA[A configurable approach for intra-model inconsistency management in multi-view collaborative modeling]]></title>
        <pubDate>Fri, 16 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Hayder Ali Neamah Alsharuee</author><author>Mohammadreza Sharbaf</author><author>Behrouz Tork Ladani</author>
        <description><![CDATA[Introduction: In the software development life cycle, collaborative modeling through multiple projective views of a single, shared model is a critical activity that enables effective collaboration among experts and stakeholders. Real-time optimistic collaboration in multi-view modeling allows concurrent modifications but often introduces inconsistencies that must be resolved to achieve an integrated and valid model. Existing inconsistency management methods frequently focus on isolated repairs or offer limited alternatives, lacking support for collaborative dynamics and configurable resolution strategies. This study aims to develop a configurable framework for managing intra-model inconsistencies in real-time multi-view collaborative modeling environments. Methods: We propose a novel framework for inconsistency management tailored to multi-view collaborative modeling, based on Model-Driven Engineering (MDE) principles. The framework supports real-time modeling scenarios and enables change propagation according to the online collaboration mode. Key components include a consistency oracle and incremental consistency checking, which together manage the integration of model changes and overlaps. We introduce the COMIM approach, which assists collaborators in handling inconsistencies by considering team interactions, individual ownership, and configurable repair strategies. Results: The framework was evaluated through a case study involving multi-view collaborative modeling sessions. Empirical results demonstrate the feasibility and effectiveness of the COMIM approach in maintaining consistency during concurrent modeling activities. The system performed efficiently for teams of up to seven concurrent users, successfully managing change propagation, detecting inconsistencies incrementally, and supporting configurable resolution aligned with collaborative priorities. Discussion: The proposed framework effectively addresses the complexities of repairing inconsistencies across diverse software models in a collaborative setting. By emphasizing collaborative dynamics, our approach advances traditional inconsistency management methods, which often lack personalization and configurability. Future work may explore scalability to larger teams and adaptation to additional modeling paradigms.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1710121</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1710121</link>
        <title><![CDATA[Enhancing RAPTOR with semantic chunking and adaptive graph clustering]]></title>
        <pubDate>Mon, 12 Jan 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Yan Liu</author><author>Xiaodong Xie</author><author>Xin Wan</author><author>Yi Pan</author><author>Cheng Wang</author>
        <description><![CDATA[Introduction: While Retrieval-Augmented Generation (RAG) enhances language models, its application to long documents is often hampered by simplistic retrieval strategies that fail to capture hierarchical context. Although the RAPTOR framework addresses this through a recursive tree-structured approach, its effectiveness is constrained by semantic fragmentation from fixed-token chunking and a static clustering methodology that is suboptimal for organizing the hierarchy. Methods: In this paper, we propose a comprehensive two-stage enhancement framework to address these limitations. We first employ Semantic Segmentation to generate coherent foundational leaf nodes, and subsequently introduce an Adaptive Graph Clustering (AGC) strategy. This strategy leverages the Leiden algorithm with a novel layer-aware dual-adaptive parameter mechanism to dynamically tailor clustering granularity. Results: Extensive experiments on the narrative QuALITY benchmark and the scientific Qasper dataset demonstrate the robustness and domain generalization of our framework. Our full model achieves a peak accuracy of 65.5% on QuALITY and demonstrates superior semantic validity on Qasper, significantly outperforming the baseline. Comparative ablation studies further reveal that our graph-topological approach outperforms traditional distance-based, density-based, and distribution-based clustering methods. Additionally, our approach constructs a dramatically more compact hierarchy, reducing the number of required summary nodes by up to 76%. Discussion: This work underscores the critical importance of a holistic, semantic-first approach to building more effective and efficient retrieval trees for complex RAG tasks.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1760117</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1760117</link>
        <title><![CDATA[Editorial: Human-centered approaches in modern software engineering]]></title>
        <pubDate>Tue, 06 Jan 2026 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Javed Ali Khan</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1694979</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1694979</link>
        <title><![CDATA[MESIAS: a web-based platform rooted in ethical principles for evaluating trustworthiness in AI projects]]></title>
        <pubDate>Wed, 03 Dec 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Georgina Romani</author><author>Cesar Avendaño</author><author>José Santisteban</author>
        <description><![CDATA[The accelerated growth of artificial intelligence (AI)-based projects has intensified the need for tools to assess their reliability, safety, and ethical alignment. In response to this challenge, the MESIAS initiative was developed. MESIAS is a web-based platform that provides a framework for evaluating AI systems through the lenses of ethical principles and international governance frameworks. The tool features a virtual assistant, adaptive forms, and a monitoring dashboard. The validation process comprised three steps: a preliminary investigation into operational efficiency, expert judgment validation with technological leaders, and a user satisfaction validation with 52 technology professionals. The operational assessment revealed a substantial 41.8% reduction in total assessment time and a 40% reduction in human resources required. Expert validation reflected a general acceptance of 85%. User validation revealed elevated satisfaction levels: 92% for usability, 94% for content, 91% for follow-up, and 95% for overall satisfaction. The study results indicate that the MESIAS strategy is a practical and effective approach to enhancing ethical governance in AI, particularly in public settings, fostering more responsible and informed decision-making processes.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1655469</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1655469</link>
        <title><![CDATA[A dual perspective review on large language models and code verification]]></title>
        <pubDate>Mon, 24 Nov 2025 00:00:00 GMT</pubDate>
        <category>Review</category>
        <author>Greta Dolcetti</author><author>Eleonora Iotti</author>
        <description><![CDATA[Recent advances in Large Language Models (LLMs) have sparked significant interest in their application to code verification and the assessment of LLM-generated code safety. This review examines current research on the intersection of LLMs with software verification, focusing on two main aspects: the use of LLMs as verification tools and the verification of code produced by LLMs. We analyze the emerging approaches for integrating LLMs with traditional static analyzers and formal verification tools, including prompt engineering techniques and combinations with established verification frameworks. The review explores various verification methodologies, from standalone LLM applications to hybrid approaches incorporating traditional verification methods. We examine research addressing the safety assessment of LLM-generated code and investigate frameworks developed for vulnerability detection and repair. Through this analysis, we aim to provide insights into the current state of LLM applications in code verification, identify key challenges in the field, and outline important directions for future research in this rapidly evolving domain.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1643075</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1643075</link>
        <title><![CDATA[Input parameters authentication through dynamic software watermarking]]></title>
        <pubDate>Tue, 11 Nov 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Maikel Lázaro Pérez Gort</author>
        <description><![CDATA[Modern civilization relies on computers and the Internet. Web services and microservices make many processes more accessible, often without users realizing the extent of their dependency. As digitalization spreads, the integrity of input parameters used by programmed methods becomes crucial for generating accurate and reliable outcomes, which are essential for the proper functioning of society. This paper introduces a dynamic software watermarking approach designed to validate the authenticity of input parameters in high-level programming language functions. The proposed approach operates without interfering with software functionalities and is resilient to code optimization, obfuscation, and other transformations. The experimental results demonstrate the robustness of our method, ensuring 100% accuracy in detecting tampering with parameter values across all test cases.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1659785</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1659785</link>
        <title><![CDATA[Correct implementation of agent interaction protocols]]></title>
        <pubDate>Fri, 31 Oct 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Federico Bergenti</author><author>Lavinia Egidi</author><author>Leonardo Galliera</author><author>Paola Giannini</author><author>Stefania Monica</author>
        <description><![CDATA[Coordinating agents that communicate through asynchronous message exchanges to execute interaction protocols presents a complex and pressing challenge. In this article, we address this issue by introducing Multiparty Session Types (MPST) for the formal specification of agent interaction protocols, from which we derive implementations of the corresponding agent systems. Correctness is ensured on one side by the MPST methodology, which derives the local protocols of participants from a global specification by projection, and on the other by translating local types into agents, providing a proof that these agents behave as prescribed by the local protocols of participants. Our agent language is Jadescript, an agent programming language that targets the widely used JADE agent platform. In addition to the theoretical framework, we describe a prototype implementation of the related tools.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1457563</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1457563</link>
        <title><![CDATA[How relevant are personas in open-source software development?]]></title>
        <pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ahmed Chelly</author><author>Salma Hamza</author><author>Javed Ali Khan</author>
        <description><![CDATA[Introduction: Open-source software (OSS) projects, characterized by distributed development and volunteer contributions, face challenges in prioritizing user-centered design and usability. This difficulty arises because these projects are primarily driven by developers who focus on technical contributions. As a result, usability and user experience (UX) considerations are often neglected, leading to software that may not meet the needs of its broad and diverse users. Methods: To address this issue, we explore the potential of using user personas, which are fictional characters representing real user groups, to enhance user-centered design in OSS projects. Personas promote empathy and a deeper understanding of user needs, thereby improving alignment between developers and users. We conducted an experimental study on three OSS projects: Moodle, Lichess, and Audacity. Personas were created for each project and refined based on feedback from industry experts. Results: Developers rated personas highly for credibility (86%), consistency (79%), and friendliness (86%), highlighting their relevance in OSS projects. A follow-up experiment with students confirmed these findings, with consistency (79%) demonstrating personas' role in improving usability and aligning developers with user needs. Discussion: While adoption remains limited due to technical priorities (only 14% of developers and 34% of students found personas useful and expressed willingness to adopt them), personas show significant potential to enhance user-centered design in OSS. Further research is needed to understand developers' reluctance to adopt this technique and explore strategies to integrate personas more effectively into OSS workflows. This study's novelty lies in its empirical exploration of personas within OSS, providing quantitative evidence of their effectiveness in improving usability and user-centered design.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1626456</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1626456</link>
        <title><![CDATA[Measuring agility in software development teams: development and initial validation of the Agile Team Practice Inventory for Software Development (ATPI-SD)]]></title>
        <pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Niklas Retzlaff</author><author>Matthias Spörrle</author>
        <description><![CDATA[Introduction: Agile methodologies are ubiquitous in software development, yet their measurement remains challenging due to a lack of validated instruments. This paper details the development and initial validation of the Agile Team Practice Inventory for Software Development (ATPI-SD), a new questionnaire measuring team-level agility based on core agile values and practices. Methods: Starting from a comprehensive literature review (258 items) and expert consultations (n = 7), five dimensions were initially identified, leading to 67 generated items. Expert feedback refined this to 37 items across 4 dimensions, which were tested in Study 1 (n = 199). Further analysis resulted in a final 20-item scale with four dimensions: Customer Involvement (CI), Team Collaboration (TC), Iterative and Incremental Development Processes (IIDP), and Continuous Development Process Improvement (CDPI). Results: Data from our study (n = 237) showed good internal consistency for the total scale (α = 0.89) and subscales (ranging from 0.69 to 0.84). Confirmatory Factor Analysis indicated a moderate-to-acceptable model fit (e.g., CFI = 0.88, TLI = 0.86). Moderate convergent validity was supported by a significant correlation with a single-item self-rating of team agility (r = 0.404, p < 0.001). Discussion: While suggesting potential for refinement, the ATPI-SD provides a systematically developed and initially validated instrument for researchers and practitioners assessing agility in software development teams.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1596804</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1596804</link>
        <title><![CDATA[An application layer with protocol-based Java smart contract verification]]></title>
        <pubDate>Thu, 04 Sep 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Luca Olivieri</author><author>Fausto Spoto</author><author>Fabio Tagliaferro</author>
        <description><![CDATA[Smart contracts are software that runs on a blockchain and expresses the rules of an agreement between parties. An incorrect smart contract might allow blockchain users to violate its rules and even jeopardize its expected security. Smart contracts cannot be easily replaced to patch a bug, since the nature of contracts requires them to be immutable. More problems occur when a smart contract is written in a general-purpose language, such as Java, whose executions in a blockchain could hang the network, break consensus, or violate data encapsulation. To limit these problems, there exist automatic static analyzers that find bugs before smart contracts are installed in the blockchain. This so-called off-chain verification is optional because programmers are not forced to use it. This paper, instead, presents a general framework for the verification of smart contracts that is part of the protocol of the nodes and applies when the code of a smart contract is installed. It is a mandatory entry filter that bans code that does not abide by the verification rules. Consequently, such rules become part of the consensus rules of the blockchain, and an improvement in the verification protocol therefore entails a consensus update of the network. This paper describes an implementation of a smart contract application layer with protocol-based verification for smart contracts written in the Takamaka subset of Java, which accepts only those smart contracts whose execution in the blockchain is not dangerous. This application layer runs on top of a consensus engine such as Tendermint and its derivatives Ignite and CometBFT (proof of stake), or Mokamint (proof of space). The paper provides examples of actual implementations of verification rules that check whether smart contracts satisfy constraints required by the Takamaka language, shows that protocol-based verification works, and reports how consensus updates are implemented. It presents actual experiments as well as limits to the approach, mainly related to the fact that protocol-based verification must be fast and its complexity must never explode, as it would otherwise compromise the performance of the blockchain network.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1554299</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1554299</link>
        <title><![CDATA[An empirical study on performance comparisons of different types of DevOps team formations]]></title>
        <pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Halil Ergun Korkmaz</author><author>Mehmet Nafiz Aydin</author>
        <description><![CDATA[Introduction: Despite all the efforts to successfully implement DevOps practices, principles, and cultural change, there is still a lack of understanding on how DevOps team structure formation and performance differences are related. The lack of a ground truth for DevOps team structure formation and performance has become a persistent and relevant problem for companies and researchers. Methods: In this study, we propose a framework for DevOps team Formation–Performance and conduct a survey to examine the relationships between team formations and performance with the five metrics we identified, two of which are novel. We conducted an empirical study using a survey to gather data. We employed targeted outreach on a social media platform along with snowball sampling and sent 380 messages to DevOps professionals worldwide. This approach resulted in 122 positive responses and 105 completed surveys, achieving a 69.7% response rate from those who agreed to participate. Results: The research shows that implementing the DevOps methodology enhances team efficiency across various team structures, with the sole exception of “Separate Development and Operation teams with limited collaboration”. Moreover, the study reveals that all teams experienced improvements in the Repair/Recovery performance metric following DevOps adoption. Notably, the “Separate Development and Operation teams with high collaboration” formation emerged as the top performer in the key metrics, including Deployment Frequency, Number of Incidents, and Number of Failures/Service Interruptions. The analysis further indicates that different DevOps organizational formations do not significantly impact Lead Time, Repair/Recovery, and Number of Failures/Service Interruptions in terms of goal achievement. However, a statistically significant disparity was observed between “Separate Development and Operation teams with high collaboration” and “A single team formation” regarding the Deployment Frequency goal achievement percentage. Discussion: The analysis confirms that DevOps adoption improves performance across most team formations, with the exception of “Separate Development and Operation teams with limited collaboration” (TeamType1), which shows significant improvement only in Mean Time to Recovery (MTTR). Standardized effect size calculations (Cohen’s d) reveal that TeamType2 (“Separate Development and Operation teams with high collaboration”) consistently achieves large effects in Deployment Frequency (DF), Number of Incidents (NoI), and Number of Failures/Service Interruptions (NoF/NoSI), while TeamType3 shows strong results for Lead Time (LT) and NoF/NoSI. MTTR improvements are large across all formations, with TeamType4 performing best in this metric. These findings suggest that collaboration intensity is a critical determinant of performance gains. While team formation type does not significantly influence LT, MTTR, or NoF/NoSI goal achievement, DF goal achievement is significantly higher for TeamType2 compared to TeamType4, highlighting the potential competitive advantage of high-collaboration structures.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1626899</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1626899</link>
        <title><![CDATA[Exploring the impact of fixed theta values in RoPE on character-level language model performance and efficiency]]></title>
        <pubDate>Fri, 22 Aug 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Zhigao Huang</author><author>Musheng Chen</author><author>Shiyan Zheng</author>
        <description><![CDATA[Rotary Positional Embedding (RoPE) is a widely used technique in Transformers, influenced by the hyperparameter theta (θ). However, the impact of varying fixed theta values, especially the trade-off between performance and efficiency on tasks like character-level modeling, remains under-explored. This paper presents a systematic evaluation of RoPE with fixed theta values (ranging from 500 to 50,000) on a character-level GPT model across three datasets: Tiny Shakespeare, Enwik8, and Text8, compared against the standard θ = 10,000 baseline. In our experiments, all non-default theta configurations incur significant computational overhead: inference speed is approximately halved across all datasets, suggesting implementation-specific bottlenecks rather than theta-dependent costs. This study quantifies a critical performance-efficiency trade-off when tuning fixed RoPE theta. Our findings emphasize the practical need to balance generalization gains with computational budgets during model development and deployment, contributing empirical insights into RoPE hyperparameter sensitivity and demonstrating that optimal theta selection is highly dataset-dependent. These insights suggest that future positional encoding designs could benefit from adaptive θ scheduling or dataset-specific θ optimization strategies to maximize both performance and computational efficiency.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1670939</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1670939</link>
        <title><![CDATA[Editorial: Machine learning for software engineering]]></title>
        <pubDate>Tue, 19 Aug 2025 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Kevin Lano</author><author>Shekoufeh Kolahdouz Rahimi</author><author>Sobhan Yassipour Tehrani</author><author>Hessa Alfraihi</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1636758</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1636758</link>
        <title><![CDATA[Real-time fire and smoke detection system for diverse indoor and outdoor industrial environmental conditions using a vision-based transfer learning approach]]></title>
        <pubDate>Fri, 15 Aug 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Uttam U. Deshpande</author><author>Goh Kah Ong Michael</author><author>Sufola Das Chagas Silva Araujo</author><author>Sowmyashree H. Srinivasaiah</author><author>Harshel Malawade</author><author>Yash Kulkarni</author><author>Yash Desai</author>
        <description><![CDATA[The risk of fires in both indoor and outdoor scenarios is constantly rising around the world. The primary goal of a fire detection system is to minimize financial losses and human casualties by rapidly identifying flames in diverse settings, such as buildings, industrial sites, forests, and rural areas. Traditional fire detection systems that use point sensors have limitations in identifying early ignition and fire spread. Numerous existing computer vision and artificial intelligence-based fire detection techniques have produced good detection rates, but at the expense of excessive false alarms. In this paper, we propose an advanced fire and smoke detection system built on the DetectNet_v2 architecture with ResNet-18 as its backbone. The framework uses NVIDIA’s Train-Adapt-Optimize (TAO) transfer learning methods to perform model optimization. We began by curating a custom data set comprising 3,000 real-world and synthetically augmented fire and smoke images to enhance the model’s generalization across diverse industrial scenarios. To enable deployment on edge devices, the baseline FP32 model is fine-tuned, pruned, and subsequently optimized using Quantization-Aware Training (QAT) to generate an INT8 precision inference model with its size reduced by 12.7%. The proposed system achieved a detection accuracy of 95.6% for fire and 92% for smoke, maintaining a mean inference time of 42 ms on RTX GPUs. The comparative analysis revealed that our proposed model outperformed the baseline YOLOv8, SSD MobileNet_v2, and Faster R-CNN models in terms of precision and F1-scores. Performance benchmarks on fire instances such as mAP@0.5 (94.9%), mAP@0.5:0.95 (87.4%), and a low false alarm rate of 3.5% highlight the DetectNet_v2 framework’s robustness and superior detection performance. Further validation experiments on NVIDIA Jetson Orin Nano and Xavier NX platforms confirmed effective real-time inference capabilities, making the system suitable for deployment in safety-critical scenarios and enabling human-in-the-loop verification for efficient alert handling.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1647904</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1647904</link>
        <title><![CDATA[Editorial: Computer technology and sustainable futures]]></title>
        <pubDate>Fri, 15 Aug 2025 00:00:00 GMT</pubDate>
        <category>Editorial</category>
        <author>Niusha Shafiabady</author>
        <description></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1516410</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1516410</link>
        <title><![CDATA[A comparison of large language models and model-driven reverse engineering for reverse engineering]]></title>
        <pubDate>Fri, 25 Jul 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Hanan Siala</author><author>Kevin Lano</author>
        <description><![CDATA[Large language models (LLMs) have been extensively researched for programming-related tasks, including program summarisation, over recent years. However, the task of abstracting formal specifications from code using LLMs has been less explored. Precise program analysis approaches based on model-driven reverse engineering (MDRE) have also been researched, and in this paper we compare the results of the LLM and MDRE approaches on tasks of abstracting Python and Java programs to the OCL formal language. We also define a combined approach which achieves improved results.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1564048</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1564048</link>
        <title><![CDATA[Models of high-level computation]]></title>
        <pubDate>Wed, 09 Jul 2025 00:00:00 GMT</pubDate>
        <category>Perspective</category>
        <author>Damian Arellanes</author>
        <description><![CDATA[Classical models of computation are useful for understanding computability in the small; however, they fall short when it comes to analyzing large-scale, complex computations. To address this gap, theoretical computer science has witnessed the emergence of several formalisms that attempt to raise the level of abstraction with the aim of describing not only a single computing device but interactions among a collection of them. In this paper, we unify such formalisms under a common framework, which we refer to as Models of High-Level Computation. Our aim is to offer an accessible overview of these models.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1550453</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1550453</link>
        <title><![CDATA[A hierarchical multi-class classification system for face and text datasets]]></title>
        <pubDate>Fri, 20 Jun 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <author>Ashish Saini</author><author>Nasib Singh Gill</author><author>Preeti Gulia</author><author>Khushwant Singh</author><author>Fernando Moreira</author>
        <description><![CDATA[In an era of rapidly growing multimedia data, the need for robust and efficient classification systems has become critical, specifically for identifying class names and poses or styles. This study provides an understanding of the organization of such data and explains edge-based feature selection using the k-means segmentation technique. Furthermore, the linear regression technique is used to optimize the features. The optimized features can be used directly with classifiers, but to reduce noise, outliers are identified and removed from the training data. The classifiers are then trained to recognize the face or text class label. After the prediction of class labels, a distance matrix-based technique is used to identify the style or pose name. Finally, experiments are conducted on the ORL dataset (40 classes with 10 poses in each class) and a character dataset (36 characters with 10 font styles for each character). The experimental results indicated that the proposed methodology accurately classifies hierarchically organized data and demonstrates superiority over KNN-based, Bayesian-based, and support vector machine (SVM) classification. The system provides classification outcomes with up to 100% accuracy for outlier-removed data, and up to 98% for basic features. Unlike traditional flat classification approaches, our system leverages hierarchical structures to enhance classification accuracy, scalability, and interpretability.]]></description>
      </item>
      </channel>
    </rss>