<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/">
      <channel>
        <title>Frontiers in Computer Science | Computer Graphics and Visualization section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/computer-science/sections/computer-graphics-and-visualization</link>
        <description>RSS Feed for Computer Graphics and Visualization section in the Frontiers in Computer Science journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Sat, 04 Apr 2026 11:27:43 GMT</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2026.1755361</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2026.1755361</link>
        <title><![CDATA[Accuracy of three-dimensional Gaussian Splatting for virtual crime scene reconstruction]]></title>
        <pubDate>Mon, 09 Feb 2026 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Soujin Cho</dc:creator><dc:creator>Teakbum Woo</dc:creator>
        <description><![CDATA[With the rapid advancement of artificial intelligence, three-dimensional (3D) Gaussian Splatting (3DGS), which reconstructs 3D data from standard photographs and videos, has garnered increasing attention in digital forensic applications. This study evaluated the quantitative accuracy of 3DGS-based virtual crime scene reconstruction to determine its suitability for forensic documentation. To this end, a mock crime scene was constructed, and both photographs and videos were captured using a DSLR camera to generate a virtual environment through 3DGS. Since the generated environment inherently possesses only relative scale, a ‘Reference Object-based Scale Calibration’ method was employed to establish absolute dimensions by adjusting the scale of the entire virtual space based on the physical measurements of a single reference object. The reconstructed object dimensions were then compared with actual measurements in two phases: a preliminary test involving seven objects and a main test involving 13 objects provided by the Seoul Metropolitan Police Agency. The results demonstrated millimeter-level accuracy, with mean measurement errors ranging from 0.25 to 0.65 mm in the preliminary test and from 1.73 to 3.58 mm in the main test. Notably, while larger objects such as desks and doors exhibited stable reconstruction accuracy, smaller or thinner items like bloodstains showed higher relative errors due to scale-induced artifacts; however, their absolute physical precision remained intact. Overall, these findings underscore the potential of 3DGS as a reliable and practical tool for the digital preservation and reconstruction of crime scenes.]]></description>
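        <!--
        A minimal sketch, not from the article, of the 'Reference Object-based Scale
        Calibration' idea summarized above, assuming the reconstruction is a point cloud;
        all names and the numbers in the usage note are illustrative.

        import numpy as np

        def calibrate_scale(points: np.ndarray, virtual_len: float, real_len_mm: float) -> np.ndarray:
            """Uniformly rescale a reconstructed point set so that a single
            reference object spans its known physical length (in millimeters)."""
            scale = real_len_mm / virtual_len  # mm per virtual unit
            return points * scale

        # Usage: if a 300 mm ruler measures 0.42 units in the virtual scene, every
        # coordinate is multiplied by 300 / 0.42 before taking measurements.
        -->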
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2025.1549693</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2025.1549693</link>
        <title><![CDATA[Advanced articulated motion prediction]]></title>
        <pubDate>Fri, 11 Apr 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Anthony Belessis</dc:creator><dc:creator>Iliana Loi</dc:creator><dc:creator>Konstantinos Moustakas</dc:creator>
        <description><![CDATA[Motion synthesis using machine learning has seen rapid advancements in recent years. Unlike traditional animation methods, using deep learning to generate human movement offers the unique advantage of producing slight variations between motions, similar to the natural variability observed in real examples. While several motion synthesis methods have achieved remarkable success in generating highly varied and probabilistic animations, controlling the synthesized animation in real time while retaining stochastic elements remains a serious challenge. The main purpose of this work is to develop a Conditional Generative Adversarial Network that generates real-time controlled motion balancing realism and stochastic variability. To achieve this, three novel Generative Adversarial models were developed, differing in their generator architectures: a Mixture-of-Experts method, a Latent-Modulated Noise Injection technique, and a Transformer-based architecture, respectively. We consider the latter to be the main contribution of this work, and we evaluate it against the other models on both stylized locomotion data and complex, aperiodic dance sequences, assessing its ability to generate diverse, realistic motions and to mix between styles while responding to motion control. Our findings highlight the trade-offs between motion quality, variety, and generalization in real-time synthesis by exploring the advantages and disadvantages of each architecture, contributing to the ongoing development of more flexible and varied animation techniques.]]></description>
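        <!--
        A minimal PyTorch sketch, not the paper's architecture, of the conditional-generator
        idea summarized above: a control signal steers the output pose while injected noise
        keeps successive motions varied. Dimensions (noise_dim, control_dim, pose_dim) are
        assumptions, not values from the paper.

        import torch
        import torch.nn as nn

        class ConditionalGenerator(nn.Module):
            """Toy conditional generator for controllable yet stochastic motion."""
            def __init__(self, noise_dim=64, control_dim=16, pose_dim=69):
                super().__init__()
                self.noise_dim = noise_dim
                self.net = nn.Sequential(
                    nn.Linear(noise_dim + control_dim, 256), nn.ReLU(),
                    nn.Linear(256, pose_dim),
                )

            def forward(self, control):
                # Fresh noise on every call yields slight variation between motions.
                z = torch.randn(control.shape[0], self.noise_dim, device=control.device)
                return self.net(torch.cat([z, control], dim=1))

        # Usage: gen = ConditionalGenerator(); pose = gen(torch.zeros(1, 16))
        -->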
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1455963</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1455963</link>
        <title><![CDATA[DMPNet: dual-path and multi-scale pansharpening network]]></title>
        <pubDate>Fri, 17 Jan 2025 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Gurpreet Kaur</dc:creator><dc:creator>Manisha Malhotra</dc:creator><dc:creator>Dilbag Singh</dc:creator><dc:creator>Sunita Singhal</dc:creator>
        <description><![CDATA[Introduction: Pansharpening is an important remote sensing task that aims to produce high-resolution multispectral (MS) images by combining low-resolution MS images with high-resolution panchromatic (PAN) images. Although deep learning-based pansharpening has shown impressive results, most of these models struggle to balance spatial and spectral information, resulting in artifacts and a loss of detail in pansharpened images. Furthermore, these models may fail to properly integrate spatial and spectral information, leading to poor performance in complex scenarios, and they face challenges such as gradient vanishing and overfitting. Methods: This paper proposes a dual-path and multi-scale pansharpening network (DMPNet). It consists of three modules: the feature extraction module (FEM), the multi-scale adaptive attention fusion module (MSAAF), and the image reconstruction module (IRM). The FEM is designed with two paths, namely the primary and secondary paths. The primary path captures global spatial and spectral information using dilated convolutions, while the secondary path focuses on fine-grained details using shallow convolutions and attention-guided feature extraction. The MSAAF module adaptively combines spatial and spectral data across different scales, employing a self-calibrated attention (SCA) mechanism for dynamic weighting of local and global contexts and a spectral alignment network (SAN) to ensure spectral consistency. Finally, to achieve optimal spatial and spectral reconstruction, the IRM decomposes the fused features into low- and high-frequency components using the discrete wavelet transform (DWT). Results: The proposed DMPNet outperforms competitive models in terms of ERGAS, SCC (WR), SCC (NR), PSNR, Q, QNR, and JQM by approximately 1.24%, 1.18%, 1.37%, 1.42%, 1.26%, 1.31%, and 1.23%, respectively. Discussion: Extensive experimental results and evaluations reveal that DMPNet is more efficient and robust than competing pansharpening models.]]></description>
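        <!--
        A minimal PyTorch sketch, not the published DMPNet, of the dual-path feature
        extraction described above: dilated convolutions widen the receptive field in the
        primary path while shallow convolutions keep fine detail in the secondary path.
        The channel counts (4 MS bands + 1 PAN) are assumptions.

        import torch
        import torch.nn as nn

        class DualPathFEM(nn.Module):
            def __init__(self, in_ch=5, feat=32):
                super().__init__()
                # Primary path: dilated convolutions capture global context.
                self.primary = nn.Sequential(
                    nn.Conv2d(in_ch, feat, 3, padding=2, dilation=2), nn.ReLU(),
                    nn.Conv2d(feat, feat, 3, padding=4, dilation=4), nn.ReLU(),
                )
                # Secondary path: a shallow convolution preserves fine-grained detail.
                self.secondary = nn.Sequential(
                    nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
                )

            def forward(self, x):  # x: upsampled MS bands stacked with the PAN band
                return torch.cat([self.primary(x), self.secondary(x)], dim=1)
        -->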
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1423129</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1423129</link>
        <title><![CDATA[A systematic survey of the Omniverse platform and its applications in data generation, simulation and metaverse]]></title>
        <pubDate>Tue, 19 Nov 2024 00:00:00 GMT</pubDate>
        <category>Review</category>
        <dc:creator>Naveed Ahmed</dc:creator><dc:creator>Imad Afyouni</dc:creator><dc:creator>Hamzah Dabool</dc:creator><dc:creator>Zaher Al Aghbari</dc:creator>
        <description><![CDATA[Nvidia’s Omniverse platform represents a paradigm shift in the realm of virtual environments and simulation technologies. This paper presents a comprehensive examination of the platform, offering a detailed systematic survey of its impact across various scientific fields and underscoring its role in fostering innovation and shaping the technological future. Our focus includes the Omniverse Replicator for generating synthetic data to address data insufficiency, and the use of Isaac Sim, with its Isaac Gym and software development kit (SDK), for robotic simulations, alongside Drive Sim for autonomous vehicle emulation. We further investigate the Extended Reality (XR) suite for augmented and virtual realities, as well as the Audio2Face application, which translates audio inputs into animated facial expressions. A critical analysis of Omniverse’s technical architecture, user-accessible applications, and extensions is provided. We contrast existing surveys on the Omniverse with those on the metaverse, delineating their focus, applications, features, and constraints. The paper identifies domains where the Omniverse excels and explores its real-world application capabilities by discussing how existing research papers utilize the platform. Finally, we discuss the challenges and hurdles facing the Omniverse’s broader adoption and implementation, addressing the lack of surveys focused solely on the Omniverse.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1415648</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1415648</link>
        <title><![CDATA[A lightweight visualization tool for protein unfolding by collision detection and elimination]]></title>
        <pubDate>Thu, 19 Sep 2024 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Hua Qian</dc:creator><dc:creator>Yu Chen</dc:creator><dc:creator>Yelu Jiang</dc:creator>
        <description><![CDATA[Experiments involving protein denaturation and refolding serve as the foundation for predicting the three-dimensional spatial structures of proteins from their amino acid sequences. Despite significant progress in protein structure engineering, exemplified by AlphaFold2 and OmegaFold, there remains a gap in understanding the folding pathways that lead polypeptide chains to their final structures. We developed a lightweight tool for protein unfolding visualization, called PUV, whose graphics are implemented mainly with OpenGL. PUV leverages principles from molecular biology and physics, achieving rapid visual dynamics simulation of polypeptide chain unfolding through mechanical force and atom-level collision detection and elimination. After a series of experimental validations, we believe this method can provide essential support for investigating protein folding mechanisms and pathways.]]></description>
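        <!--
        A minimal sketch, not the PUV implementation, of the atom-level collision test
        behind the tool described above: two atoms collide when their bounding spheres
        overlap. A production tool would use a spatial grid or hashing rather than this
        quadratic all-pairs loop.

        import numpy as np

        def colliding_atom_pairs(centers: np.ndarray, radii: np.ndarray):
            """Return index pairs of atoms whose bounding spheres overlap."""
            pairs = []
            for i in range(len(centers)):
                for j in range(i + 1, len(centers)):
                    if np.linalg.norm(centers[i] - centers[j]) < radii[i] + radii[j]:
                        pairs.append((i, j))  # overlap: eliminate by pushing atoms apart
            return pairs
        -->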
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2024.1414923</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2024.1414923</link>
        <title><![CDATA[Visualization of explainable artificial intelligence for GeoAI]]></title>
        <pubDate>Mon, 26 Aug 2024 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Cédric Roussel</dc:creator>
        <description><![CDATA[Shapley additive explanations are a widely used technique for explaining machine learning models. They can be applied to essentially any type of model and provide both global and local explanations. While different plots are available to visualize Shapley values, suitable visualizations for geospatial use cases are lacking, and the geospatial context is lost in traditional plots. This study presents a concept for visualizing Shapley values in geospatial use cases and demonstrates its feasibility through an exemplary use case: predicting bike activity in a rental bike system. The results show that plotting Shapley values on geographic maps can provide valuable insights that are not visible in traditional plots for Shapley additive explanations. Geovisualizations are recommended for explaining machine learning models in geospatial applications and for extracting knowledge about real-world applications. Suitable visualizations for the considered use case are a proportional symbol map and a mapping of computed Voronoi values to the street network.]]></description>
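        <!--
        A runnable sketch, not the study's code, of a proportional symbol map for Shapley
        values as described above; the synthetic stations, coordinates, and features are
        invented stand-ins for the rental-bike data.

        import numpy as np
        import matplotlib.pyplot as plt
        import shap  # pip install shap
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        lon, lat = rng.uniform(8.2, 8.3, 200), rng.uniform(49.9, 50.0, 200)
        X = np.column_stack([lon, lat, rng.normal(size=200)])
        y = 3 * X[:, 2] + 10 * lat + rng.normal(size=200)

        model = RandomForestRegressor(n_estimators=50).fit(X, y)
        shap_values = shap.TreeExplainer(model).shap_values(X)

        # Marker area encodes |SHAP| of one feature at each station and color its
        # value, so the explanation keeps its geospatial context on the map.
        contrib = shap_values[:, 2]
        plt.scatter(lon, lat, s=5 + 200 * np.abs(contrib) / np.abs(contrib).max(),
                    c=contrib, cmap="coolwarm")
        plt.colorbar(label="SHAP value of feature 2")
        plt.show()
        -->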
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2023.1085867</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2023.1085867</link>
        <title><![CDATA[Dense agent-based HPC simulation of cell physics and signaling with real-time user interactions]]></title>
        <pubDate>Fri, 12 May 2023 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Naman Merchant</dc:creator><dc:creator>Adam T. Sampson</dc:creator><dc:creator>Andrei Boiko</dc:creator><dc:creator>Ruth E. Falconer</dc:creator>
        <description><![CDATA[Introduction: Distributed simulations of complex systems to date have focused on scalability and correctness rather than interactive visualization. Interactive visual simulations have particular advantages for exploring the emergent behaviors of complex systems. Interpreting simulations of complex systems such as cancer cell tumors is challenging and can be greatly assisted by “built-in” real-time user interaction and subsequent visualization. Methods: We explore this approach using a multi-scale model that couples a cell physics model with a cell signaling model. This paper presents a novel communication protocol for real-time user interaction and visualization in a large-scale distributed simulation with minimal impact on performance. Specifically, we explore how optimistic synchronization can be used to enable real-time user interaction and visualization in a densely packed parallel agent-based simulation, whilst maintaining scalability and determinism. We also describe the software framework created and the distribution strategy for the models utilized. The key features of the High-Performance Computing (HPC) simulation that were evaluated are scalability, deterministic verification, speed of real-time user interactions, and deadlock avoidance. Results: We use two commodity HPC systems, ARCHER (118,080 CPU cores) and ARCHER2 (750,080 CPU cores), where we simulate up to 256 million agents (one million cells) using up to 21,953 computational cores, and record a response-time overhead of ≃350 ms for issued user events. Discussion: The approach is viable and can be used to underpin transformative technologies offering immersive simulations, such as Digital Twins. The framework explained in this paper is not limited to the models used and can be adapted to systems biology models that use similar standards (physics models using agent-based interactions, and signaling pathways using SBML) and to other interactive distributed simulations.]]></description>
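        <!--
        A single-process toy sketch of the rollback-and-replay idea behind optimistic
        synchronization as summarized above; the paper's distributed, deterministic protocol
        is far more involved, and every name here is illustrative.

        import copy

        class InteractiveSim:
            """Step ahead freely, snapshot periodically, and roll back to the
            latest snapshot at or before a late-arriving user event."""
            def __init__(self, state):
                self.state, self.t = state, 0
                self.snapshots = {0: copy.deepcopy(state)}

            def step(self):
                self.t += 1
                self.state["x"] += 1  # stand-in for the real cell dynamics
                if self.t % 10 == 0:  # periodic snapshot for cheap rollback
                    self.snapshots[self.t] = copy.deepcopy(self.state)

            def user_event(self, t_event, apply_fn):
                frontier = self.t
                if t_event < self.t:  # event belongs in the simulated past
                    t0 = max(s for s in self.snapshots if s <= t_event)
                    self.snapshots = {s: v for s, v in self.snapshots.items() if s <= t0}
                    self.state, self.t = copy.deepcopy(self.snapshots[t0]), t0
                    while self.t < t_event:
                        self.step()  # deterministic replay keeps runs reproducible
                apply_fn(self.state)  # event takes effect at t_event
                while self.t < frontier:
                    self.step()  # replay forward to the previous frontier

        # Usage: sim = InteractiveSim({"x": 0}); [sim.step() for _ in range(25)]
        # sim.user_event(12, lambda s: s.update(x=0))  # rolls back, applies, replays
        -->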
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/fcomp.2023.957920</guid>
        <link>https://www.frontiersin.org/articles/10.3389/fcomp.2023.957920</link>
        <title><![CDATA[Interactive landscape-scale cloud animation using DCGAN]]></title>
        <pubDate>Wed, 08 Mar 2023 00:00:00 GMT</pubDate>
        <category>Original Research</category>
        <dc:creator>Prashant Goswami</dc:creator><dc:creator>Abbas Cheddad</dc:creator><dc:creator>Fredrik Junede</dc:creator><dc:creator>Samuel Asp</dc:creator>
        <description><![CDATA[This article presents an interactive method for 3D cloud animation at the landscape scale by employing machine learning. To this end, we train a deep convolutional generative adversarial network (DCGAN) on the GPU using home-captured cloud videos to produce coherent animation frames. We limit the size of the input images provided to the DCGAN, thereby reducing the training time while still producing detailed 3D animation frames. This is made possible by our preprocessing of the source videos, wherein several corrections are applied to the extracted frames to provide an adequate training data set for the DCGAN. A significant advantage of the presented cloud animation is that it does not require any underlying physics simulation. We present detailed results of our approach and verify its effectiveness through human perceptual evaluation. Our results indicate that the proposed method is capable of producing convincingly realistic 3D cloud animation, as perceived by the participants, without introducing excessive computational overhead.]]></description>
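        <!--
        A minimal OpenCV sketch, not the paper's full preprocessing pipeline, of preparing
        small training frames from captured cloud videos as described above; the frame size
        and sampling step are arbitrary assumptions, and the paper's per-frame corrections
        are omitted.

        import cv2  # pip install opencv-python

        def extract_frames(video_path, size=(64, 64), step=5):
            """Sample every `step`-th frame and shrink it so DCGAN training stays cheap."""
            cap = cv2.VideoCapture(video_path)
            frames, i = [], 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if i % step == 0:
                    frames.append(cv2.resize(frame, size))
                i += 1
            cap.release()
            return frames
        -->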
      </item>
      </channel>
    </rss>