<?xml version="1.0" encoding="utf-8"?>
    <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
      <channel>
        <title>Frontiers in Signal Processing | Statistical Signal Processing section | New and Recent Articles</title>
        <link>https://www.frontiersin.org/journals/signal-processing/sections/statistical-signal-processing</link>
        <description>RSS Feed for Statistical Signal Processing section in the Frontiers in Signal Processing journal | New and Recent Articles</description>
        <language>en-us</language>
        <generator>Frontiers Feed Generator, version 1</generator>
        <pubDate>Sun, 05 Apr 2026 10:32:54 +0000</pubDate>
        <ttl>60</ttl>
        <item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2024.1323538</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2024.1323538</link>
        <title><![CDATA[Bayesian learning of nonlinear gene regulatory networks with switching architectures]]></title>
        <pubDate>Wed, 22 May 2024 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Nayely Vélez-Cruz, Antonia Papandreou-Suppappola</author>
        <description><![CDATA[Introduction: Gene regulatory networks (GRNs) are characterized by their dynamism, meaning that the regulatory interactions which constitute these networks evolve with time. Identifying when changes in the GRN architecture occur can inform our understanding of fundamental biological processes, such as disease manifestation, development, and evolution. However, it is usually not possible to know a priori when a change in the network architecture will occur. Furthermore, an architectural shift may alter the underlying noise characteristics, such as the process noise covariance. Methods: We develop a fully Bayesian hierarchical model to address the following: a) sudden changes in the network architecture; b) unknown process noise covariance which may change along with the network structure; and c) unknown measurement noise covariance. We exploit the use of conjugate priors to develop an analytically tractable inference scheme using Bayesian sequential Monte Carlo (SMC) with a local Gibbs sampler. Results: Our Bayesian learning algorithm effectively estimates time-varying gene expression levels and architectural model indicators under varying noise conditions. It accurately captures sudden changes in network architecture and accounts for time-evolving process and measurement noise characteristics. Our algorithm performs well even under high noise conditions. By incorporating conjugate priors, we achieve analytical tractability, enabling robust inference despite the inherent complexities of the system. Furthermore, our method outperforms the standard particle filter in all test scenarios. Discussion: The results underscore our method’s efficacy in capturing architectural changes in GRNs. Its ability to adapt to a range of time-evolving noise conditions emphasizes its practical relevance for real-world biological data, where noise presents a significant challenge. Overall, our method provides a powerful tool for studying the dynamics of GRNs and has the potential to advance our understanding of fundamental biological processes.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2023.1287516</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2023.1287516</link>
        <title><![CDATA[A survey on Bayesian nonparametric learning for time series analysis]]></title>
        <pubDate>Tue, 16 Jan 2024 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>Nayely Vélez-Cruz</author>
        <description><![CDATA[Time series analysis aims to understand underlying patterns and relationships in data to inform decision-making. As time series data are becoming more widely available across a variety of academic disciplines, time series analysis has become a rapidly growing field. In particular, Bayesian nonparametric (BNP) methods are gaining traction for their power and flexibility in modeling, predicting, and extracting meaningful information from time series data. The utility of BNP methods lies in their ability to encode prior information and represent complex patterns in the data without imposing strong assumptions about the underlying distribution or functional form. BNP methods for time series analysis can be applied to a breadth of problems, including anomaly detection, noise density estimation, and time series clustering. This work presents a comprehensive survey of the existing literature on BNP methods for time series analysis. Various temporal BNP models are discussed along with notable applications and possible approaches for inference. This work also highlights current research trends in the field and potential avenues for further development and exploration.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2022.877336</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2022.877336</link>
        <title><![CDATA[Rain Field Retrieval by Ground-Level Sensors of Various Types]]></title>
        <pubDate>Wed, 17 Aug 2022 00:00:00 +0000</pubDate>
        <category>Review</category>
        <author>H. Messer, A. Eshel, H. V. Habi, S. Sagiv, X. Zheng</author>
        <description><![CDATA[Rain gauges (RGs) have been utilized as sensors for local rain monitoring dating back to ancient Greece. The use of a network of RGs for 2D rain mapping is based on spatial interpolation that, while presenting good results in limited experimental areas, has limited scalability because of the unrealistic need to install and maintain a large number of sensors. Alternatively, commercial microwave links (CMLs), widely spread around the globe, have proven effective as near-ground opportunistic rain sensors. In this study, we examine 2D rain field mapping using CMLs and/or RGs from both a practical and a theoretical point of view, aiming to understand their inherent performance differences. We consider sensor networks of either CMLs or RGs, as well as a mixed network of CMLs and RGs. We show that with proper preprocessing, the rain field retrieval performance of the CML network is better than that of RGs. However, depending on the characteristics of the rain field, this performance gain can be negligible, especially when the rain field is smooth (relative to the topology of the sensor network). In other words, for a given network, the advantage of rain retrieval using a network of CMLs is more significant when the rain field is spotty.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2022.868638</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2022.868638</link>
        <title><![CDATA[Bayesian Nonparametric Learning and Knowledge Transfer for Object Tracking Under Unknown Time-Varying Conditions]]></title>
        <pubDate>Wed, 06 Jul 2022 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Omar Alotaibi, Antonia Papandreou-Suppappola</author>
        <description><![CDATA[We consider the problem of a primary source tracking a moving object under time-varying and unknown noise conditions. We propose two methods that integrate sequential Bayesian filtering with transfer learning to improve tracking performance. Within the transfer learning framework, multiple sources are assumed to perform the same tracking task as the primary source but under different noise conditions. The first method uses Gaussian mixtures to model the measurement distribution, assuming that the measurement noise intensity at the learning sources is fixed and known a priori and that the learning and primary sources are simultaneously tracking the same object. The second tracking method uses Dirichlet process mixtures to model noise parameters, assuming that the learning source measurement noise intensity is unknown. As we demonstrate, the use of Bayesian nonparametric learning does not require all sources to track the same object. The learned information can be stored and transferred to the primary source when needed. Using simulations for both high- and low-signal-to-noise ratio conditions, we demonstrate the improved primary tracking performance as the number of learning sources increases.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2022.842513</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2022.842513</link>
        <title><![CDATA[How Scalable Are Clade-Specific Marker K-Mer Based Hash Methods for Metagenomic Taxonomic Classification?]]></title>
        <pubDate>Tue, 05 Jul 2022 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Melissa Gray, Zhengqiao Zhao, Gail L. Rosen</author>
        <description><![CDATA[Efficiently and accurately identifying which microbes are present in a biological sample is important to medicine and biology. For example, in medicine, microbe identification allows doctors to better diagnose diseases. Two questions are essential to metagenomic analysis (the analysis of a random sampling of DNA in a patient/environment sample): how to accurately identify the microbes in samples, and how to efficiently update the taxonomic classifier as new microbe genomes are sequenced and added to the reference database. To investigate how classifiers change as they train on more knowledge, we made sub-databases composed of genomes that existed in past years that served as “snapshots in time” (1999–2020) of the NCBI reference genome database. We evaluated two classification methods, Kraken 2 and CLARK, with these snapshots using a real, experimental metagenomic sample from a human gut. This allowed us to measure how much of a real sample could be confidently classified using these methods as the database grows. Despite not knowing the ground truth, we could measure the concordance between methods, and between years of the database within each method, using the Bray-Curtis distance. In addition, we also recorded the training times of the classifiers for each snapshot. For all data, we observed with Kraken 2 that as more genomes were added, more microbes from the sample were classified. CLARK had a similar trend, but in the final year this trend reversed, with greater microbial variation and fewer unique k-mers. Also, both classifiers, while having different training procedures, are generally linear in time, but Kraken 2 has a significantly lower slope in scaling to more data.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2022.842570</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2022.842570</link>
        <title><![CDATA[VIPDA: A Visually Driven Point Cloud Denoising Algorithm Based on Anisotropic Point Cloud Filtering]]></title>
        <pubDate>Wed, 16 Mar 2022 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Tiziana Cattai, Alessandro Delfino, Gaetano Scarano, Stefania Colonnese</author>
        <description><![CDATA[Point clouds (PCs) provide fundamental tools for the digital representation of 3D surfaces, which are of growing interest in recent applications such as e-health or autonomous means of transport. However, the estimation of 3D coordinates on the surface, as well as the signal defined on the surface points (vertices), is affected by noise. The presence of perturbations can jeopardize the application of PCs in real scenarios. Here, we propose a novel visually driven point cloud denoising algorithm (VIPDA) inspired by visually driven filtering approaches. VIPDA leverages recent results on local harmonic angular filters, extending image processing tools to the PC domain. In more detail, the VIPDA method applies a harmonic angular analysis of the PC shape so as to associate each vertex of the PC with a suitable set of neighbors and to drive the denoising in accordance with the local PC variability. The performance of VIPDA is assessed by numerical simulations on synthetic and real data corrupted by Gaussian noise. We also compare our results with state-of-the-art methods, and we verify that VIPDA outperforms the others in terms of the signal-to-noise ratio (SNR). We demonstrate that our method has strong potential for denoising point clouds by leveraging a visually driven approach to the analysis of 3D surfaces.]]></description>
      </item><item>
        <guid isPermaLink="true">https://www.frontiersin.org/articles/10.3389/frsip.2021.727387</guid>
        <link>https://www.frontiersin.org/articles/10.3389/frsip.2021.727387</link>
        <title><![CDATA[Contrast Agent Quantification by Using Spatial Information in Dynamic Contrast Enhanced MRI]]></title>
        <pubDate>Tue, 19 Oct 2021 00:00:00 +0000</pubDate>
        <category>Original Research</category>
        <author>Jianfeng Wang, Anders Garpebring, Patrik Brynolfsson, Jun Yu</author>
        <description><![CDATA[The purpose of this work is to investigate spatial statistical modelling approaches to improve contrast agent quantification in dynamic contrast enhanced MRI by utilising the spatial dependence among image voxels. Bayesian hierarchical models (BHMs), such as the Besag and Leroux models, were studied using simulated MRI data. The models were built on smaller images, where spatial dependence can be incorporated, and then extended to larger images using the maximum a posteriori (MAP) method. Notable improvements in contrast agent concentration estimation were obtained for both smaller and larger images. For smaller images, the BHMs provided substantially improved estimates in terms of the root mean squared error (rMSE), compared to the estimates from the existing method, for a noise level equivalent to that of a 12-channel head coil at 3T. Moreover, the Leroux model outperformed Besag models with two different dependence structures. Specifically, the Besag models increased the estimation precision by 27% around the peak of the dynamic curve, while the Leroux model improved the estimation by 40% at the peak, compared with the existing estimation method. For larger images, the proposed MAP estimators showed clear improvements in rMSE for vessels, the tumor rim and white matter.]]></description>
      </item>
      </channel>
    </rss>