
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 15 July 2021
Sec. Statistics and Probability
Volume 7 - 2021 | https://doi.org/10.3389/fams.2021.689991

Oscillatory Biomedical Signals: Frontiers in Mathematical Models and Statistical Analysis

Hau-Tieng Wu1, Tze Leung Lai2*, Gabriel G. Haddad3 and Alysson Muotri4
  • 1Department of Mathematics, Duke University, Durham, NC, United States
  • 2Department of Statistics, Stanford University, Stanford, CA, United States
  • 3Department of Pediatrics and Rady Children’s Hospital, University of California at San Diego, San Diego, CA, United States
  • 4Department of Cellular & Molecular Medicine, Department of Pediatrics, University of California at San Diego, San Diego, CA, United States

Herein we describe new frontiers in mathematical modeling and statistical analysis of oscillatory biomedical signals, motivated by our recent studies of network formation in the human brain during the early stages of life and by studies forty years ago on cardiorespiratory patterns during sleep in infants and animal models. The frontiers involve new nonlinear-type time–frequency analysis of signals with multiple oscillatory components, and efficient particle filters for joint state and parameter estimation, together with uncertainty quantification in hidden Markov models and empirical Bayes inference.

1 Introduction

The 2017 Nobel Prize in Physiology or Medicine was awarded to Jeffrey Hall and Michael Rosbash of Brandeis University, and Michael Young of Rockefeller University, “for their discoveries of molecular mechanisms controlling the circadian rhythm.” In 1984, they succeeded in isolating the “period gene” (i.e., the gene that controls the circadian rhythm). Hall and Rosbash then went on to “discover PER, the protein encoded by period, accumulated during the night and degraded during the day.” In 1994, Young answered a “tantalizing puzzle” concerning how PER produced in the cytoplasm could reach the cell nucleus, where the genetic material is located. He discovered a second gene, timeless, encoding the TIM protein; when bound to PER, TIM can enter the cell nucleus to block the activity of the period gene. “Such a regulatory feedback mechanism explained how this oscillation of cellular protein levels emerged, but questions lingered,” such as what controlled the frequency of the oscillations. Young identified another gene, doubletime, encoding the DBT protein that delayed the accumulation of the PER protein. The three laureates also identified additional proteins required for the activation of the period gene, as well as the mechanisms by which light can synchronize the circadian clock.

One of us (Muotri) was PI of a project on “spontaneous network formation” displaying “periodic and regular oscillatory events that were dependent on glutamatergic and GABAergic signaling” during early brain maturation, for which structural and transcriptional changes “follow fixed developmental programs defined by genetics”; see [1], who also found that “the oscillatory activity transitioned to more spatiotemporally irregular patterns, with synchronous network activity resembling features similar to those observed in preterm human EEG.” This project is similar in spirit to the exemplary work of Hall, Rosbash, and Young, but the “experimental inaccessibility” of the human brain during the early stages of life pushes mathematical modeling and statistical analysis of the oscillatory signals and events to the new frontiers that we present in the next section. We describe in the next paragraph the underlying biomedical background of this project.

One of the major recent realizations, especially in the neurosciences, is that while we can obtain important information from animal studies, there are major differences between humans and animals. This is manifested in many ways, especially in the major clinical trials based on animal findings that did not pan out. Therefore, if we intend to study the pathogenesis of diseases and to treat, prevent, or cure them across the age spectrum, we need to refocus our scientific approaches and strategies in order to be more efficient and effective. Since embryonic stem cells are often problematic to obtain for ethical reasons, the discovery that somatic cells from humans can be reprogrammed into induced pluripotent stem cells (iPSCs, taking these somatic cells back into their “history”) and differentiated into different relatively mature cell types has opened a major avenue for the scientific community, resulting in the 2012 Nobel Prize in Physiology or Medicine being awarded to John Gurdon of Cambridge and Shinya Yamanaka of Kyoto. If these iPSCs are exposed to the right growth factors, they assemble into a model of the early human brain (brain organoids) through a remarkable process of self-organization of three-dimensional cellular elements that recapitulate the network, cellular, and membrane properties of neurons and glia. Many types of organoids, such as kidney, intestine, liver, and lung organoids, have recently been developed. These organoids have been particularly useful for studying normal early human biology as well as developmental disorders such as neurodevelopmental diseases.

2 Methods

The statistical methods used by Trujillo et al. [1, pp. 16–19] in their analysis of data on oscillatory signals and events consist of 1) multi-electrode array (MEA) recording and custom analysis, 2) network event analysis, in which a network event is detected when spiking reaches at least 80% of the maximum spiking value over the length of the recording and occurs at least 1 s away from any other network event, 3) oscillatory spectral power analysis, in which “oscillatory power” is defined as “peaks in the PSD (power spectral density estimated by Peter D. Welch’s method) above the aperiodic 1/f power law decay,” and 4) resampled Pearson’s correlation coefficients between neonatal age and each of 12 EEG features. Because of “the inability to interrogate the electrophysiology of intact human brains” and the emergence of induced pluripotent stem cells (iPSCs) and organoids as “a scaled-down and three-dimensional model of the human brain, mimicking various developmental features at cellular and molecular levels,” [1, pp. 4, 7–9, 18] used oscillatory dynamics of the LFP (local field potential) and other mesoscopic brain signals, which manifest “a phenomenon known as cross-frequency phase-amplitude coupling (PAC) wherein the high-frequency content of LFP is entrained to the phase of slow oscillations.” Noting that “the pattern of alternating periods of quiescence and network-synchronized events resembles electrophysiological signatures in preterm human EEG,” [1] analyzed “a publicly available dataset of 101 serial EEG recordings from 39 preterm infants ranging from 24 to 38 weeks post-menstrual age,” containing 23 precomputed features (including spectral power in canonical oscillatory bands and the duration and timing of “spontaneous activity transients,” or SATs) for each EEG record. To compare the features between cortical organoids and preterm infants, [1] trained a regularized regression model (ElasticNet), with cross-validation for hyperparameter selection, on the preterm infants’ EEG recordings and applied the model to the organoid dataset to “obtain the predicted developmental time.” The results were mixed, and [1] concluded that “given the potential roles of synchronized and oscillatory network dynamics in coordinating information flow between developed brain regions, these results highlight the potential for cortical organoids to advance our understanding of functional physiology” and to model “cellular interactions and neural circuit dysfunctions related to neurodevelopmental and neuropsychiatric pathologies” that “affect millions of people but otherwise lack an existing animal model.” These statistical methods are “custom” (or traditional) methods, as acknowledged by [1]. We describe innovative and powerful methods in the next two subsections, first for time–frequency analysis of oscillatory biomedical signals with time-varying features and then for a new hidden Markov model (HMM) that incorporates the key features of the cortical organoid model and provides uncertainty quantification for empirical Bayes inference based on the model and the observed data.
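To make step 3) concrete, the following is a minimal sketch, not the custom pipeline of [1], of how oscillatory power can be flagged as peaks in a Welch PSD rising above a fitted aperiodic 1/f power-law trend; the synthetic signal, sampling rate, frequency band, and 3 dB threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch, find_peaks

def oscillatory_peaks(x, fs, fmin=0.5, fmax=50.0, min_excess_db=3.0):
    """Flag PSD peaks rising above a fitted aperiodic 1/f-type trend."""
    f, pxx = welch(x, fs=fs, nperseg=int(4 * fs))
    band = (f >= fmin) & (f <= fmax)
    f, pxx = f[band], pxx[band]
    # Fit log10(PSD) = a + b*log10(f); for 1/f-type decay the slope b is negative.
    logf, logp = np.log10(f), np.log10(pxx)
    b, a = np.polyfit(logf, logp, 1)
    trend = 10.0 ** (a + b * logf)
    # Keep peaks whose power exceeds the aperiodic trend by at least min_excess_db.
    excess_db = 10.0 * np.log10(pxx / trend)
    idx, _ = find_peaks(excess_db, height=min_excess_db)
    return f[idx], excess_db[idx]

# Toy LFP-like trace (hypothetical): 1/f^2-type background plus a 4 Hz rhythm.
fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
background = np.cumsum(rng.standard_normal(len(t)))   # random walk ~ 1/f^2 spectrum
x = background / background.std() + 0.8 * np.sin(2 * np.pi * 4.0 * t)
peak_freqs, peak_db = oscillatory_peaks(x, fs)
print(peak_freqs)   # should include a peak near 4 Hz
```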

2.1 Time–Frequency Analysis of Signals With Multiple Oscillatory Components

The first author (Wu) has been working on time–frequency analysis (TFA) and its applications to high-frequency biomedical signals over the last ten years. Examples include the electrocardiogram, electroencephalogram, local field potential, photoplethysmogram (PPG), actinogram, peripheral venous pressure (PVP), arterial blood pressure, phonocardiogram, and airflow respiratory signal, to name several. Usually, these signals are composed of multiple components, each of which reflects the dynamics of a physiological system. The analysis is challenged by physiological variability that appears in the form of time-varying frequency and amplitude, or even a time-varying oscillatory pattern referred to as the “wave-shape function.” Furthermore, depending on the signal, a “waxing and waning” effect is sometimes inevitable for its components; see [2, Figure 1] for an illustration. Take the widely applied PPG signal as an example, for which [3] gives an introduction and describes applications “beyond the calculation of arterial oxygen saturation and heart rate.” In addition to the well-known cardiac component reflecting hemodynamic information, the PPG may contain the respiratory dynamics as another component. The frequency of the cardiac component (respiratory component, respectively) is influenced by heart rate variability (breathing rate variability, respectively). Cicone and Wu [4] provide an algorithm to “extract both heart and respiratory rates” from the PPG signal and thereby to analyze their interactions. Such information can be used in conjunction with other biomedical signals reflecting hemodynamics. In particular, the PVP is ubiquitous in the hospital environment and a rich source of hemodynamic information [5]. However, it typically has a low signal-to-noise ratio (SNR), and its oscillatory pattern is sensitive to the physiological status, making it much less used than the PPG. Wu et al. [6] have developed new signal processing tools to facilitate its use.
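As a simplified illustration of the decomposition just described (a linear-TFA baseline, not the nonlinear-type algorithm of [4]), the following sketch builds a synthetic PPG-like signal with a cardiac and a respiratory component and reads off their time-varying rates from band-limited ridges of the short-time Fourier transform; all amplitudes, frequencies, and band limits are assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import stft

# Synthetic PPG-like signal (hypothetical): a cardiac component near 1.2 Hz with
# mild heart-rate variability and a respiratory component near 0.25 Hz.
fs = 50.0
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(0)
cardiac = np.cos(2 * np.pi * (1.2 * t + 0.05 * np.sin(2 * np.pi * 0.1 * t)))
respiratory = 0.4 * np.cos(2 * np.pi * 0.25 * t)
x = cardiac + respiratory + 0.1 * rng.standard_normal(len(t))

# STFT with a window long enough (20 s) to resolve the respiratory band.
f, frames, Z = stft(x, fs=fs, nperseg=int(20 * fs), noverlap=int(19 * fs))
P = np.abs(Z) ** 2

def band_ridge(P, f, lo, hi):
    """Frequency of maximal power within [lo, hi] Hz at each time frame."""
    band = (f >= lo) & (f <= hi)
    return f[band][np.argmax(P[band, :], axis=0)]

heart_rate = 60.0 * band_ridge(P, f, 0.7, 3.0)   # beats per minute over time
resp_rate = 60.0 * band_ridge(P, f, 0.1, 0.5)    # breaths per minute over time
print(heart_rate.mean(), resp_rate.mean())        # roughly 72 bpm and 15 breaths/min
```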

Combining time–frequency analysis (TFA) with statistical analysis, the lack of which in previous work “presents an opportunity for much future research,” is illustrated in Figure 2 of Wu [2] (applied to PPG, fetal ECG, and fetal heart rate variability), which describes several recent advances in TFA for high-frequency biomedical signals. There are several challenges common to different biomedical signal processing problems. The first is how to estimate the dynamics of the signal (e.g., how to quantify the time-varying frequency, amplitude, or wave-shape). The second is to assess signal quality and identify artifacts, distinguishing between physiological and nonphysiological ones. The third is to identify oscillatory components, and the fourth is to decompose the signal into its constituent components. To address these challenges, several TFA tools have been proposed. In addition to traditional linear-type tools such as the short-time Fourier transform (STFT) and continuous wavelet transform (CWT) and bilinear time–frequency analysis tools [7], several nonlinear-type tools have been developed and applied, including the reassignment method, empirical mode decomposition (EMD), Blaschke decomposition (BKD), adaptive locally iterative filtering (ALIF), sparse time–frequency representation (STFR), the synchrosqueezing transform (SST), the scattering transform (ST), concentration of frequency and time (ConceFT), de-shape, and dynamic diffusion maps [8–14]. The statistical properties of these methods remain relatively unexplored, and we are currently investigating them; new methods for emerging scientific problems may be developed along the way.
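Among the nonlinear-type tools just listed, the STFT-based synchrosqueezing transform can be sketched compactly: reassign each STFT coefficient to an instantaneous-frequency estimate obtained from the ratio of two STFTs computed with a Gaussian window and its derivative. The following is a minimal sketch under simplifying assumptions (a fixed Gaussian window, crude bin rounding, no boundary handling); it is not one of the tuned implementations cited above.

```python
import numpy as np

def gaussian_windows(n_win, sigma, fs):
    """Gaussian window g and its derivative g' with respect to time (seconds)."""
    t = (np.arange(n_win) - n_win // 2) / fs
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    dg = -(t / sigma ** 2) * g
    return g, dg

def stft_frames(x, win, hop, nfft):
    """STFT with frames centered on the analysis times; shape (n_freq, n_time)."""
    half = len(win) // 2
    xp = np.pad(x, (half, half))
    centers = np.arange(0, len(x), hop)
    return np.array([np.fft.rfft(xp[c:c + len(win)] * win, n=nfft)
                     for c in centers]).T

def synchrosqueeze(x, fs, sigma, hop=1, nfft=1024, eps=1e-8):
    """Minimal STFT-based synchrosqueezing: move each STFT coefficient to the
    bin of its estimated instantaneous frequency."""
    n_win = 2 * int(4 * sigma * fs) + 1
    g, dg = gaussian_windows(n_win, sigma, fs)
    Vg = stft_frames(x, g, hop, nfft)
    Vdg = stft_frames(x, dg, hop, nfft)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    df = freqs[1] - freqs[0]
    mask = np.abs(Vg) > eps
    # Instantaneous-frequency estimate: omega = eta - Im(V_dg / V_g) / (2 pi).
    omega = freqs[:, None] - np.imag(Vdg / np.where(mask, Vg, 1.0)) / (2 * np.pi)
    S = np.zeros_like(Vg)
    for m in range(Vg.shape[1]):
        for k in range(Vg.shape[0]):
            if mask[k, m]:
                j = int(round(omega[k, m] / df))
                if 0 <= j < Vg.shape[0]:
                    S[j, m] += Vg[k, m]
    return S, Vg, freqs

# Toy check: a chirp sweeping from 1 Hz to 2 Hz gives a much sharper ridge in |S|
# than in |Vg|, and the ridge follows the instantaneous frequency 1 + 0.05 t.
fs = 100.0
t = np.arange(0, 20, 1 / fs)
x = np.cos(2 * np.pi * (1.0 * t + 0.025 * t ** 2))
S, Vg, freqs = synchrosqueeze(x, fs, sigma=0.5, hop=5)
ridge = freqs[np.argmax(np.abs(S), axis=0)]
print(ridge[40:-40].min(), ridge[40:-40].max())   # approximately within [1, 2] Hz
```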

2.2 Efficient Particle Filters for Joint State and Parameter Estimation in HMM

During the past three years, the second author (Lai) has been developing a new Markov chain Monte Carlo (MCMC) procedure called “MCMC with sequential substitutions” (MCMC-SS) for joint state and parameter estimation in hidden Markov models. The basic idea is to approximate an intractable distribution of interest (the target distribution) by the empirical distribution of N representative atoms chosen sequentially by an MCMC procedure, so that after a large number of iterations the empirical distribution approximates the target distribution, as explained below.

Lai’s work in this area began with the landmark paper of Gordon, Salmond, and Smith [15] on the development of sequential Monte Carlo (SMC), also called particle filters, for the estimation of latent states in a hidden Markov model (HMM). Liu’s monograph [16] contains a collection of techniques that have been developed since then, with examples of applications in computational biology and engineering, and Chan and Lai [17] provide a general theory of particle filters. Let $\mathbf{X} = \{X_t, t \ge 1\}$ be a Markov chain and let $Y_1, Y_2, \ldots$ be conditionally independent given $\mathbf{X}$, such that $X_t \sim p_t(\cdot \mid X_{t-1})$ and $Y_t \sim g_t(\cdot \mid X_t)$, in which $p_t$ and $g_t$ are density functions with respect to measures $\nu_X$ and $\nu_Y$. The density function $p_T$ of $X_{0:T} = (X_0, \ldots, X_T)$ conditional on $Y_{1:T} = (Y_1, \ldots, Y_T)$ is given by

$$p_T(x_{0:T} \mid Y_{1:T}) \propto \prod_{t=1}^{T} \big[ p_t(x_t \mid x_{t-1})\, g_t(Y_t \mid x_t) \big].$$

This conditional distribution is often difficult to sample from, and the normalizing constant is also difficult to compute for high-dimensional or complicated state spaces; particle filters use sequential Monte Carlo, involving importance sampling and resampling, to circumvent this difficulty. The particle filter computes $E[\psi(X_{0:T}) \mid Y_1, \ldots, Y_T]$ by the recursive Monte Carlo scheme summarized in Algorithm 1. Let $X_{0:t-1}^m$ denote the sample path of the $m$th particle (trajectory), $1 \le m \le M$. The scheme uses importance sampling from a proposal density $q_t$ and updates not only the particles $X_{0:t-1}^m$ but also the associated weights $w_{t-1}^m$ and ancestors $A_{t-1}^m$ of $X_{0:t}^m$. It is initialized with $A_0^m = m$ and $w_0^m = 1$. The SMC estimate of $\psi_T := E[\psi(X_{0:T}) \mid Y_{1:T}]$ is given by

$$\tilde{\psi}_T = \left( \sum_{m=1}^{M} w_T^m\, \psi(X_{0:T}^m) \right) \Big/ \left( \sum_{m=1}^{M} w_T^m \right).$$

By using martingale theory, Chan and Lai [17] provide a comprehensive theory of the SMC estimate $\tilde{\psi}_T$, which includes asymptotic normality and consistent standard error estimation as follows:

Theorem 1. Under certain integrability conditions,

$$\sqrt{M}\,\big(\tilde{\psi}_T - \psi_T\big) \Rightarrow N(0, \sigma^2).$$

Moreover, letting $\bar{w}_t = M^{-1} \sum_{i=1}^{M} w_t^i$, $\sigma^2$ can be consistently estimated by

$$\tilde{\sigma}^2 = \frac{1}{M} \sum_{m=1}^{M} \left( \sum_{i: A_{T-1}^i = m} \frac{w_T^i}{\bar{w}_T} \big[ \psi(X_{0:T}^i) - \tilde{\psi}_T \big] \right)^{2}.$$
Algorithm 1. SMC with M particles.
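Algorithm 1 is shown only as a figure in the original article; the following sketch is a generic bootstrap particle filter in its spirit, with the proposal $q_t$ taken to be the state transition $p_t$ and multinomial resampling at every step, applied to a hypothetical linear-Gaussian HMM so that the weights $w_t^m$, the resampling ancestors, and the estimate $\tilde{\psi}_T$ are concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear-Gaussian HMM (an assumption for illustration):
#   X_t = 0.9 X_{t-1} + N(0, 1),   Y_t = X_t + N(0, 0.5^2),   X_0 = 0.
def simulate(T):
    x, y = np.zeros(T + 1), np.zeros(T + 1)
    for t in range(1, T + 1):
        x[t] = 0.9 * x[t - 1] + rng.standard_normal()
        y[t] = x[t] + 0.5 * rng.standard_normal()
    return x, y

def bootstrap_filter(y, M=2000):
    """Bootstrap particle filter: resample, propagate from p_t, reweight by g_t."""
    T = len(y) - 1
    trajectories = np.zeros((M, T + 1))        # particle trajectories X_{0:T}^m
    w = np.ones(M)                             # importance weights w_t^m
    for t in range(1, T + 1):
        # Multinomial resampling; the sampled indices play the role of ancestors.
        ancestors = rng.choice(M, size=M, p=w / w.sum())
        trajectories = trajectories[ancestors]
        # Propagate with the bootstrap proposal q_t = p_t(. | x_{t-1}).
        trajectories[:, t] = 0.9 * trajectories[:, t - 1] + rng.standard_normal(M)
        # Reweight by the observation density g_t(Y_t | x_t); the previous weights
        # were equalized by resampling, so the new weights replace them.
        w = np.exp(-0.5 * ((y[t] - trajectories[:, t]) / 0.5) ** 2)
    return trajectories, w

T = 50
x, y = simulate(T)
trajectories, w = bootstrap_filter(y)
# SMC estimate of psi_T = E[X_T | Y_{1:T}], i.e. psi(X_{0:T}) = X_T.
psi_tilde = np.sum(w * trajectories[:, T]) / np.sum(w)
print(psi_tilde, x[T])   # the filtered mean should track the latent state
```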

Chan and Lai [17, Lemmas 1 and 4] use the following representation of $\tilde{\psi}_T - \psi_T$ to derive Theorem 1. Let $w_t(x_{0:t}) = p_t(x_t \mid x_{t-1})\, g_t(Y_t \mid x_t) / q_t(x_t \mid x_{0:t-1})$, in which the $Y_t$ can be treated as constants since the target of the particle filter is the conditional distribution of $X_{0:t}$ given the observations $Y_1, \ldots, Y_T$. Let $H_t^m = (\bar{w}_1 \cdots \bar{w}_t) / \prod_{j=1}^{t} w_j^m$ and $\eta_t = E_q\big[\prod_{i=1}^{t} w_i(X_{0:i})\big]$, where $E_q$ denotes expectation under which $X_t \mid X_{0:t-1}$ has the conditional density function $q_t(\cdot \mid X_{0:t-1})$ for $1 \le t \le T$. Letting $\Psi_0 = \psi_T$ and $\Psi_t(X_{0:t}) = E_q\big\{ \psi(X_{0:T}) \prod_{i=1}^{T} w_i(X_{0:i}) \,\big|\, X_{0:t} \big\}$ for $1 \le t \le T$, define

$$\epsilon_{2t-1}^m = \sum_{i: A_{t-1}^i = m} \big\{ \Psi_t(X_{0:t}^i) - \Psi_{t-1}(X_{0:t-1}^i) \big\}\, H_{t-1}^i, \qquad \epsilon_{2t}^m = \sum_{i: A_{t-1}^i = m} \big( \#_t^i - M W_t^i \big)\, \big\{ \Psi_t(X_{0:t}^i)\, H_t^{B_t^i} - \Psi_0 \big\},$$

in which $W_t^i = w_t^i / \sum_{j=1}^{M} w_t^j$ and $\#_t^i$ is the number of copies of $X_{0:t}^i$ generated by bootstrap resampling from $\{X_{0:t}^1, \ldots, X_{0:t}^M\}$ in Algorithm 1 (where $B_t^i$ is also defined). Then $(\#_t^1, \ldots, \#_t^M) \sim \mathrm{Multinomial}(M; W_t^1, \ldots, W_t^M)$ and

$$\tilde{\psi}_T - \psi_T = \big\{ (\bar{w}_1 \cdots \bar{w}_T)^{-1} \eta_T \big\}\, M^{-1} \sum_{m=1}^{M} \big( \epsilon_1^m + \cdots + \epsilon_{2T-1}^m \big);$$

see Eqs. (3.3) and (3.36) of [17], which show that $\{\epsilon_t^m, 1 \le t \le 2T-1\}$ is a martingale difference sequence and that $(\bar{w}_1 \cdots \bar{w}_T)^{-1} \eta_T = 1 + o_p(1)$ under the integrability assumptions $\eta_T < \infty$ and $E_q\big[\prod_{i=1}^{T} w_i^2(X_{0:i})\big] < \infty$.

Algorithm 2. PMCMC at the kth iteration, initialized with $\theta_0 \sim f(\cdot)$.

The assumption of a single fully specified HMM in the particle filter is often too restrictive in applications, since the model parameters are usually unknown and also need to be estimated sequentially from the observed data. A standard method to estimate unknown parameters is to assume a prior distribution for the unknown parameter vector and to use Markov chain Monte Carlo (MCMC) to estimate the posterior distribution. The authors of reference [21] carried out this method for time-homogeneous Markov chains $X_t \sim p_\theta(\cdot \mid X_{t-1})$ for $t \ge 1$ and $X_0 \sim p_\theta(\cdot)$, with latent states $X_t$ and observations $Y_t \sim g_\theta(\cdot \mid X_t)$, in which $\theta$ is an unknown parameter with a prior density function $\pi(\cdot)$ with respect to some measure $\nu_\theta$ on the parameter space $\Theta$. The posterior density of $(\theta, X_{0:T})$ given $Y_{1:T}$ is proportional to

$$p_T(\vartheta, x_{0:T}) = \pi(\vartheta)\, p_\vartheta(x_0) \prod_{t=1}^{T} \big\{ p_\vartheta(x_t \mid x_{t-1})\, g_\vartheta(Y_t \mid x_t) \big\}.$$

PMCMC uses SMC involving M particles (each of which consists of a sampled parameter and state trajectory) at every iteration $k$ to construct an approximation $\tilde{p}_T$ to $p_T$ in a Metropolis–Hastings (MH) MCMC scheme that uses a proposal density $f(\cdot \mid \theta_{k-1})$ with respect to the measure $\nu_\theta$ to sample $\theta_k$ at the $k$th iteration, as summarized in Algorithm 2. Chopin et al. [22, Section 1.2] point out the difficulties in the asymptotic analysis of PMCMC as $k$ becomes infinite. In particular, although the authors of reference [21] have shown that under some strong assumptions PMCMC converges to a measure in total variation norm as $k \to \infty$ for a fixed value of $M$, the limiting measure is not the target posterior distribution of $(\theta, X_{0:T})$. On the other hand, allowing $M$ to approach $\infty$ with $k$ would lead to an analytically intractable scheme involving state spaces whose dimensions change with $k$. The authors of reference [22] propose the SMC2 scheme to target heuristically the posterior distribution of $(\theta, X_{0:t})$ given $Y_{1:t}$ $(1 \le t \le T)$ as follows. It involves $N$ $\theta$-particles, which we will call “atoms,” and attaches to each atom $\theta$ a particle filter that propagates and resamples $M$ particles (state trajectories $X_{0:t}^m$) generated by SMC (as in Algorithm 1 with the given $\theta$). It carries out the MH iterations to determine whether a candidate atom is accepted (as in Step (c) of Algorithm 2). For the $N$ atoms $\theta_t^1, \ldots, \theta_t^N$ and their corresponding importance weights at time $t$ generated in this way, if the degeneracy criterion of [19] is satisfied, bootstrap resampling of the weighted parameter–particle set is carried out to replace it by an unweighted set; however, no convergence theory as $k \to \infty$ is provided.

Although MCMC methods with MH iterations are widely used computational tools in Bayesian inference on $\theta \in \Theta$ that has a prior density function with respect to some measure $\nu_\theta$, they do not have convergence-rate guarantees in terms of the number of iterations that would automate termination of the iterations. On the other hand, if the target density $p$, which is the posterior density of $\theta$ given $Y_{1:t}$, were known and easy to sample from, then the standard Monte Carlo approximation of $\mu := E_p(\psi(\theta))$ could be carried out by generating i.i.d. $\theta_1, \ldots, \theta_N$ from $p(\cdot)\, dm$ and using the sample average $\tilde{\mu} = N^{-1} \sum_{n=1}^{N} \psi(\theta_n)$ to estimate $\mu$. Under the assumption $E_p(\psi^2(\theta)) < \infty$, the estimated standard error is $\tilde{\sigma}_N / \sqrt{N}$, and $\tilde{\mu} \pm N^{-1/2} \tilde{\sigma}_N \zeta_{1-\alpha/2}$ is an approximate $(1-\alpha)$-level confidence interval for $\mu$, where $\tilde{\sigma}_N^2 = (N-1)^{-1} \sum_{n=1}^{N} (\psi(\theta_n) - \tilde{\mu})^2$ and $\zeta_q$ is the $q$th quantile of the standard normal distribution. This follows from the classical central limit theorem and is very useful for determining $N$ to ensure that $\tilde{\mu}$ is within some prescribed tolerance limit $\epsilon$ of $\mu$, namely $N^{-1/2} \tilde{\sigma}_N \zeta_{1-\alpha/2} \le \epsilon$. It has inspired Lai to develop, with his current Ph.D. students Huanzhong Xu and Michael Hongyu Zhu and former Ph.D. student Hock Peng Chan, the following novel MCMC algorithm, which is asymptotically equivalent to the oracle procedure that assumes a known target density $p$ and which they call MCMC with sequential state substitutions (MCMC-SS).

As in MH, let $f$ be a given function that is proportional to the target density. Let $\{q(\cdot \mid \gamma): \gamma \in \Gamma\}$ be a family of positive proposal densities with respect to some measure $m$, where $\Gamma$ is a convex subset of $\mathbb{R}^d$. MCMC-SS initializes by choosing $\gamma_0 \in \Gamma^o$ and generating $\nu B$ i.i.d. atoms $\theta_{1,0}^1, \ldots, \theta_{1,0}^\nu; \ldots; \theta_{B,0}^1, \ldots, \theta_{B,0}^\nu$ from the proposal distribution $q(\cdot \mid \gamma_0)\, dm$, thereby forming the $B$ disjoint sets $\Theta_{b,0} = \{\theta_{b,0}^1, \ldots, \theta_{b,0}^\nu\}$. At stage $k$, it uses the sequential substitution procedure SS$(\Theta_{b,k}, w_k^b)$ in Algorithm 3 to update the atom set in the $b$th block and to assign the weight $w_{i,k}^b$ to the $i$th atom in $\Theta_{b,k}$, $b = 1, \ldots, B$. MCMC-SS estimates $\mu = E_p(\psi(\theta))$ by

$$\hat{\psi} = \frac{1}{B(K-\kappa)} \sum_{b=1}^{B} \sum_{k=\kappa+1}^{K} \hat{\psi}_{b,k}, \qquad \text{with } \hat{\psi}_{b,k} = \frac{\sum_{i=1}^{\nu} w_{i,k}^b\, \psi(\theta_{i,k}^b)}{\sum_{i=1}^{\nu} w_{i,k}^b},$$

in which $\kappa$ represents an initial burn-in period that is asymptotically negligible, as $\kappa = o(K)$. In many applications, the parameter $\gamma$ of the family of proposal densities is a function $\gamma: \mathcal{P} \to \Gamma$, where $\mathcal{P}$ is the space of probability measures on $\Theta$. Assuming this framework, we now describe the choice of $\gamma_{b,k-1}$ in Algorithm 3. For $k \le \kappa$, let $\gamma_{b,k-1} = \nu^{-1} \sum_{\theta \in \Theta_{b,k-1}} \gamma(\theta)$, which is the mean over the empirical measure of the atoms in the $b$th block at the end of stage $k-1$. On the other hand, for $k > \kappa$, we pool across blocks by letting $\gamma_{k-1} = B^{-1} \sum_{b=1}^{B} \gamma_{b,k-1}$, which we use as the modified $\gamma_{b,k-1}$ for all blocks. Therefore, after the burn-in period, we can carry out the updates SS$(\Theta_{b,k})$ in the order $b = 1, \ldots, B$, so that if the candidate atom in SS$(\Theta_{b,k})$ is not used for block $b$, it can serve as the candidate atom for block $b+1$ $(\le B)$, which then does not need to generate another random variable from $q(\cdot \mid \gamma_{k-1})$, an obvious advantage for high-dimensional and complicated states. Lai, Xu, Zhu, and Chan have developed a comprehensive asymptotic theory of MCMC-SS, showing its asymptotic optimality with respect to computational and statistical criteria, and have also derived consistent estimators of the standard errors of the Monte Carlo state/parameter estimates; see reference [18], whose main results are summarized below together with Algorithm 3. Moreover, Algorithm 3 can be vectorized and parallelized, and [18] illustrates its applications to latent variable analysis with uncertainty quantification in image reconstruction and brain network development.

Algorithm 3. Updating procedure SS$(\Theta_{b,k}, w_k^b)$ for MCMC-SS.

Theorem 2. Suppose $E_p \psi^2(\theta) < \infty$ and there exist $\beta > \alpha > 0$ and $V: \Theta^\nu \to [1, \infty)$ such that for $\gamma: \mathcal{P} \to \Gamma$,

$$\int_{\Theta^\nu} V(\boldsymbol{\theta})\, q(\theta_1 \mid \gamma_0) \cdots q(\theta_\nu \mid \gamma_0)\, dm^\nu(\boldsymbol{\theta}) < \infty \ \text{ with } \boldsymbol{\theta} = (\theta_1, \ldots, \theta_\nu), \quad \text{and} \quad \alpha V(\boldsymbol{\theta}) \le \lambda\big(\tilde{\theta} \mid \gamma(\boldsymbol{\theta})\big) \le \beta V(\boldsymbol{\theta}) \ \text{ for all } \boldsymbol{\theta} \in \Theta^\nu \text{ and } \tilde{\theta} \in \Theta,$$

where $\lambda(\tilde{\theta} \mid \gamma) = f(\tilde{\theta} \mid \gamma) / p(\tilde{\theta})$.

(i) Let $G_{b,k}$ be the joint distribution of $(\theta_{1,k}^b, \ldots, \theta_{\nu,k}^b)$ and let $Q^\nu$ be the probability measure on $\Theta^\nu$ whose density is that of $\nu$ independent components, each having density $q(\cdot \mid \gamma_f)$ with respect to $m$, where $\gamma_f = \arg\min_{\gamma \in \Gamma} I(q_\gamma \| f)$ and $I(q \| f) = E_f\{\log(q(\theta)/f(\theta))\}$ is the Kullback–Leibler divergence (or relative entropy) of $q$ from the target density $f$ in Algorithm 3. Then there exist positive constants $a$ and $c$ such that $\|G_{b,k} - Q^\nu\|_V \le c e^{-ak}$ for $1 \le k \le K$, where $\|\cdot\|_V$ denotes the weighted total variation norm associated with the weight function $V$. Hence, after $k \gg \log B$ iterations, $\sum_{b \le B} \|G_{b,k} - Q^\nu\|_V \to 0$.

(ii) Let $N = B(K - \kappa)$ be the total number of atoms used to define the MCMC-SS estimate $\hat{\psi}$ of $\mu = E_p(\psi(\theta))$. Then, as $K \to \infty$ and $B \to \infty$ such that $B = O(K)$,

$$\sqrt{N\nu}\, \big(\hat{\psi} - \mu\big) \Rightarrow N(0, \sigma^2),$$

where $\sigma^2 = \mathrm{Var}_p(\psi(\theta))$ and can be consistently estimated by

$$\hat{\sigma}^2 = \frac{1}{B(K-\kappa)} \sum_{b=1}^{B} \sum_{k=\kappa+1}^{K} \frac{1}{\nu - 1} \sum_{\theta \in \Theta_{b,k}} \big( \psi(\theta) - \hat{\psi}_{b,k} \big)^2.$$
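Theorem 2 calibrates MCMC-SS against the oracle Monte Carlo baseline described earlier in this subsection: a known target density $p$, i.i.d. sampling, and a CLT-based rule for choosing $N$ to meet a prescribed tolerance $\epsilon$. The sketch below makes that baseline concrete; the target, the functional $\psi$, and the tolerance are illustrative assumptions, not part of MCMC-SS itself.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical oracle setting: the target p is known and easy to sample from
# (standard normal here), and we estimate mu = E_p[psi(theta)] with psi(x) = x^2.
def psi(theta):
    return theta ** 2

def oracle_mc(n, alpha=0.05):
    theta = rng.standard_normal(n)             # i.i.d. draws from p
    vals = psi(theta)
    mu_tilde = vals.mean()                     # sample average
    sd_tilde = vals.std(ddof=1)                # sigma_tilde_N
    z = norm.ppf(1 - alpha / 2)                # zeta_{1 - alpha/2}
    half_width = z * sd_tilde / np.sqrt(n)     # CLT-based CI half-width
    return mu_tilde, half_width, sd_tilde, z

# Pilot run, then choose N so that the half-width is below a tolerance eps.
_, _, sd_pilot, z = oracle_mc(1000)
eps = 0.01
N = int(np.ceil((z * sd_pilot / eps) ** 2))    # N^{-1/2} * sigma * zeta <= eps
mu_tilde, half_width, _, _ = oracle_mc(N)
print(N, mu_tilde, "+/-", half_width)          # true mu = 1 for psi(x) = x^2 under N(0, 1)
```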

As shown in reference [18], with probability approaching 1 for large $k$, the candidate atom $\tilde{\theta}$ in Algorithm 3 substitutes some existing atom in $\Theta_{b,k-1}$. Hence, similar to the case of a known target density $p$ from which $\tilde{\theta}$ is sampled, the newly sampled atom features in the weighted average $\hat{\psi}_{b,k}$. The reason we need the weighted average, with “importance sampling weights” $w_{i,k}^b$, is that for large $k$ the conditional distribution of $\Theta_{b,k}$ given $\Theta_{b,k-1}$ behaves like the $\nu$-fold product measure $Q^\nu$ on $\Theta^\nu$. This shows that importance sampling (likelihood ratio) weights $w_{i,k}^b$ are needed to convert $Q$ to $P$ and suggests the asymptotic optimality of $\hat{\psi}$, which is the overall average of the $B(K-\kappa)$ estimates $\hat{\psi}_{b,k}$, similar to $\tilde{\mu}$ described above for the case of known $p$. Each random variable generated in the MCMC-SS scheme asymptotically contributes weight $(N\nu)^{-1}$ to (a) the estimate $\hat{\psi}$ of $\mu$ and (b) the asymptotic variance of $\hat{\psi}$. Theorem 2 shows that there is in fact considerable flexibility in the choice of the factors $K$ (the number of iterations) and $B$ (the number of blocks) in $N = B(K-\kappa)$, which determines the scaling factor in the central limit theorem, although the theorem highlights the case $B = O(K)$ to emphasize that $K$ should not be chosen too small relative to $B$.

Reference [18] gives an application to uncertainty quantification in the following image reconstruction problem. Reference [20] proposes to use MCMC methods “whenever the target measure has density with respect to a Gaussian process or Gaussian random field reference measure.” A wide range of applications involving such a framework considers Bayesian inference on a latent random field $\{u(x): x \in D\} \subset \mathbb{R}^d$ generated by some stochastic partial differential equation (SPDE), in which $D$ is a connected subset of $\mathbb{R}^{d'}$, based on data generated by some nonlinear function of the random field. It is shown that, after discretization and truncation to fit into this framework, the Radon–Nikodym derivative relating the target measure $P$ to the reference measure $Q$ has the following form:

$$(dQ/dP)(u) \propto \exp(l(u)),$$

for some real-valued function $l$, which [20] call the “potential” in their substantive applications. The advantage of using a zero-mean Gaussian random field reference measure $Q$ is that it is specified by the covariance operator $\mathcal{C}$, whose eigenvalues $\lambda_i$ and orthonormal eigenfunctions $\phi_i$ yield the Karhunen–Loève expansion $u(x) = \sum_{i=1}^{\infty} \xi_i \phi_i(x)$, with independent $\xi_i \sim N(0, \lambda_i^2)$ and $\sum_{i=1}^{\infty} \lambda_i^2 < \infty$. Reference [20] uses a random truncation $\tau$ with a sieve prior to convert the infinite-dimensional expansion to a finite sum $u(x) = \sum_{i=1}^{\tau} \xi_i \phi_i(x)$. In addition, a discrete approximation of the random field $u(x)$ is used, with $x$ taken over a mesh of width $\delta$ in each coordinate. MCMC-SS uses a parametric family of Gaussian proposal measures $Q(\gamma)$ instead of the single one used by [20]. Putting $1/L(\theta) = \exp(l(u(x)))$, we can also incorporate the random truncation $\tau$ and possibly other random effects $\rho$ into the state $\theta = (\tau, \zeta_1, \ldots, \zeta_\tau, \rho)$, where $\zeta_j = G(u(x_j))$, $j = 1, \ldots, \tau$, and $G$ is an operator associated with the SPDE and the discretization scheme for which $x_j$ belongs to a discrete subset of $D$. With this definition of $\theta$, MCMC-SS uses the updating procedure described in Algorithm 3.

Section 4.2 of [20] argues that simply applying MCMC to a discretized random field leads to a reference measure that is singular with respect to the target measure. However, the MCMC procedure used in [20] is the random walk Metropolis algorithm, which involves the acceptance probability $a(u,v) = \min\{1, (d\eta^*/d\eta)(u,v)\}$, where $\eta$ is the measure defined by the transition kernel $q(u,v)$ of the MCMC algorithm (i.e., $v \mid u \sim q(u, \cdot)$) and $\eta^*$ is the measure obtained by reversing the roles of $u$ and $v$ in the definition of $\eta$. Theorem 6.3 of [20] shows that after discretization $\eta^*$ is singular with respect to $\eta$, and therefore “all proposal moves are rejected with probability 1” for the random walk Metropolis algorithm, which proposes $v^{(k)} = u^{(k)} + \beta \xi^{(k)}$, with $\xi^{(k)} \sim N(0, C)$, chooses $u^{(k+1)} = v^{(k)}$ with probability $a(u^{(k)}, v^{(k)})$, and sets $u^{(k+1)} = u^{(k)}$ if $v^{(k)}$ is rejected. To get around this difficulty, [20] introduces a preconditioned Crank–Nicolson (pCN) adjustment, which proposes $v^{(k)} = \sqrt{1-\beta^2}\, u^{(k)} + \beta \xi^{(k)}$. Here, $\beta^2 = 8\delta/(2+\delta)^2$ and $C$ is the covariance matrix (after truncation and discretization) corresponding to the covariance operator $\mathcal{C}$ of the Gaussian proposal measure. Because MCMC-SS does not involve $\eta$ and $\eta^*$, it does not require the pCN adjustment; see reference [18] for details and further discussion.

Making use of bounds on a weighted total variation norm of the difference between the target distribution and the empirical measure defined by the sample paths of the MCMC procedure, reference [18] has developed an asymptotic theory of the MCMC-SS estimates of functionals of the target distribution as both $K$ and $N$ approach $\infty$. This asymptotic theory includes asymptotic normality of the MCMC-SS estimates, provides consistent estimators of their standard errors, and establishes their asymptotic optimality by deriving certain oracle properties. Implementation via sequential Monte Carlo schemes called “particle filters” and parallelization is also given. In his Ph.D. thesis, Zhu, who is a coauthor of [18], describes a numerically stable implementation of MCMC-SS that can be vectorized and parallelized, using Julia v0.6.2 [23] and the ArrayFire GPU library [24]. He also develops scalable implementations for high-dimensional states/parameters using differentiation through mixture distributions for stochastic gradient descent; see [25].
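For concreteness, here is a generic sketch of the pCN proposal of [20] in the truncated, discretized setting described above, with a hypothetical diagonal Karhunen–Loève covariance and a toy potential $l$; it illustrates the adjustment that MCMC-SS avoids and is not the MCMC-SS updating procedure of Algorithm 3.

```python
import numpy as np

rng = np.random.default_rng(3)

# Truncated Karhunen-Loeve setting (hypothetical): u is represented by its first
# n coefficients, with Gaussian reference measure Q = N(0, C), C = diag(lam2).
n = 50
lam2 = 1.0 / np.arange(1, n + 1) ** 2          # summable eigenvalue sequence
C_sqrt = np.sqrt(lam2)

# Toy potential l(u): the target P satisfies dP/dQ proportional to exp(-l(u)).
data = 0.5 * np.ones(5)                        # hypothetical observations of u_1..u_5
def potential(u):
    return 0.5 * np.sum((data - u[:5]) ** 2) / 0.1 ** 2

def pcn(l, n_iter=5000, beta=0.2):
    """Preconditioned Crank-Nicolson MCMC with Gaussian reference measure N(0, C)."""
    u = C_sqrt * rng.standard_normal(n)        # start from a draw of Q
    samples = np.empty((n_iter, n))
    for k in range(n_iter):
        xi = C_sqrt * rng.standard_normal(n)               # xi ~ N(0, C)
        v = np.sqrt(1.0 - beta ** 2) * u + beta * xi       # pCN proposal
        # pCN acceptance probability: min{1, exp(l(u) - l(v))}.
        if np.log(rng.random()) < l(u) - l(v):
            u = v
        samples[k] = u
    return samples

samples = pcn(potential)
print(samples[1000:, 0].mean())   # posterior mean of the first coefficient (near 0.5)
```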
In the context of the cortical organoids described in the first paragraph of Section 2, the target distribution is the posterior distribution of a precomputed feature of the organoid, as a scaled-down model of the preterm human brain, conditional on the observations, which are the 101 serial EEG recordings from 39 preterm infants. The uncertainty quantification [18] of the posterior distribution of a precomputed feature of cortical organoids provides a principled and systematic approach to comparing the feature between cortical organoids and the observations from the preterm infants, in contrast to the lack of uncertainty quantification in the approach and results of [1, pp. 8–9 and Fig. 4A–D on p. 31] mentioned in the first paragraph of Section 2. Moreover, the methods of time–frequency analysis in the preceding subsection can be used to compute the predictive distribution of the feature of the cortical organoids given the observations, which is the same as the target distribution. The predictive distribution typically also involves an unspecified hyperparameter vector $\theta$, as in the manifold learning of [14]. This corresponds to a Bayesian approach with prior densities belonging to a family of proposal densities $q(\theta \mid \gamma)$, in which $\gamma \in \Gamma$ indexes the family and $\Gamma$ is a convex subset of $\mathbb{R}^d$. Reference [18] has shown that MCMC-SS eventually samples from the $q(\cdot \mid \gamma_p)$ that has the smallest Kullback–Leibler divergence from $p(\cdot)$, and therefore from the target density itself if it belongs to $\{q(\cdot \mid \gamma): \gamma \in \Gamma\}$.

3 Discussion and Concluding Remarks

Haddad and Lai actually initiated similar research forty years ago when they worked on cardiorespiratory patterns during sleep in a SIDS (Sudden Infant Death Syndrome) project at Columbia University’s Pediatrics Department; see reference [26], which describes the study population consisting of 12 infants “with one or more episodes of aborted SIDS” (four of whom had siblings who died of SIDS) and 19 normal infants, all born full-term except for one aborted-SIDS infant born at 37 weeks of gestation. After describing the study design and methods of statistical analysis, the authors of reference [26] presented results on tidal volume (Vt), total respiratory cycle time (Ttot), and the increase in Vt/Ttot resulting from a 2% increase in CO2 concentration in the sleeping chamber, comparing aborted-SIDS to normal infants in both REM (rapid eye movement) and quiet sleep. Because stress such as loaded breathing, as in references [27, 28], cannot be induced in infants, animal models involving sheep, puppies, and dogs were used; see also [29]. In particular, the authors of reference [27] “studied diaphragmatic muscle function during inspiratory flow resistive loaded breathing” in 6 unanesthetized sheep over periods of 6–8 months. Data were collected at baseline and after application of the loads, which were sustained for up to 90 min. Loads were divided into mild (<50 cm H2O·l⁻¹·s), moderate (50–150 cm H2O·l⁻¹·s), and severe (>150 cm H2O·l⁻¹·s). They found that “1) the diaphragm is capable of generating large pressure for prolonged periods with no evidence of fatigue, 2) with very high inspiratory resistive loads mechanical failure of the diaphragm can occur, 3) diaphragmatic fatigue is associated with acute hypercapnia and therefore failure of the entire respiratory pump, and 4) a decrease in integrated EMG (iEMG) and a concomitant shift in the EMG power spectral density toward lower frequencies precede the mechanical failure of the diaphragm.”

Thus, similar to the power spectral density of the EEG signal in the first paragraph of Section 2, [27] used a shift of the power spectrum of the EMG toward lower frequencies to identify the onset of diaphragmatic muscle fatigue in adult sheep. The frontier methods of time–frequency analysis in Section 2.1 are therefore also relevant to the problems of diaphragmatic muscle fatigue and rhythmic variations in cardiorespiratory signals studied by Haddad and Lai forty years ago. Pointing out that in the 1980s “investigators from various disciplines focused their efforts on finding out whether SIDS is related to hypoxia or anoxia (acute or chronic) before death and whether this relation is responsible for events leading to death,” Haddad [30] reviewed “studies in the recent past” from various fields: epidemiology, the physiology of infant death and SIDS, pathology of the airway, and animal studies. Although “most of the evidence accumulated so far, including that obtained in the past two years, is circumstantial,” he concluded that “SIDS was little understood for many years until, over the past few years, its basic underlying genetic defect was better characterized (from recent animals and human studies), and light could finally be seen at the end of the tunnel,” again linking genetics and feedback mechanisms to see this light, as in the exemplary work of Hall, Rosbash, and Young on the circadian rhythm.
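The spectral signature used in [27], a shift of the EMG power spectrum toward lower frequencies, can be tracked with a simple moving-window summary; the sketch below (on a synthetic EMG-like trace, not the original sheep data) computes the Welch PSD in consecutive windows and reports the median frequency, whose downward drift flags the shift.

```python
import numpy as np
from scipy.signal import welch

def median_frequency(x, fs):
    """Frequency splitting the Welch PSD of x into two halves of equal power."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), int(fs)))
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, 0.5 * cum[-1])]

def track_median_frequency(emg, fs, win_sec=5.0):
    """Median frequency in consecutive non-overlapping windows of the EMG."""
    n = int(win_sec * fs)
    return np.array([median_frequency(emg[i:i + n], fs)
                     for i in range(0, len(emg) - n + 1, n)])

# Hypothetical EMG-like trace whose spectral content slides downward over time,
# mimicking the shift toward lower frequencies that precedes mechanical failure.
fs = 1000.0
t = np.arange(0, 60, 1 / fs)
center = 120.0 - 0.8 * t                       # dominant frequency drifts 120 -> 72 Hz
rng = np.random.default_rng(4)
emg = np.sin(2 * np.pi * np.cumsum(center) / fs) + 0.5 * rng.standard_normal(len(t))
mf = track_median_frequency(emg, fs)
print(mf[0], mf[-1])                           # the median frequency decreases over time
```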
Combining various clues and insights from different areas and studies via an empirical Bayes model is precisely the capability of the frontier approach described in Section 2.2; see Sections 3.6.3, 5.4, 6.2.3, and 7.4 of reference [31] on postmarketing monitoring of medical product safety.

A related direction of our ongoing research is to combine several biomedical signals, which form a multivariate time series, thereby providing a more holistic view of a human subject. For example, in an intensive care unit, PPG can be combined with EEG, EMG, respiratory, and other signals to evaluate a patient’s health status. Wu and his collaborators have applied a combination of the scattering transform and multiple EEG channels to study sleep dynamics and an “interpretable machine learning algorithm” to assess the consistency of sleep-stage scoring rules across multiple sleep centers [32, 33]. How to utilize the available information from multiple centers is a sensor fusion problem. We are currently combining recent advances in sensor fusion with those in TFA to develop integrated statistical analysis of the multivariate time series of multiple biomedical signals.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material, and further inquiries can be directed to the corresponding author.

Author Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work, and approved it for publication.

Funding

TL’s research was supported by the National Science Foundation under DMS-1811818. GH’s research was supported by the National Institutes of Health under 1R01HL146530 and 1R21NS111270. AM’s research was supported by the National Institutes of Health under DP2-OD006495.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We thank TL’s research assistant Chenru Liu at Stanford University for her valuable contributions to the preparation, timely submission, and revision of the paper. We also thank the reviewers whose innovative comments and suggestions have resulted in substantial improvement of the presentation.

References

1. Trujillo, CA, Gao, R, Negraes, PD, Gu, J, Buchanan, J, Preissl, S, et al. Complex Oscillatory Waves Emerging from Cortical Organoids Model Early Human Brain Network Development. Cell Stem Cell (2019). 25(4):558–69. doi:10.1016/j.stem.2019.08.002

2. Wu, H-T. Current State of Nonlinear-Type Time-Frequency Analysis and Applications to High-Frequency Biomedical Signals. Curr Opin Syst Biol (2020). 23:8–21. doi:10.1016/j.coisb.2020.07.013

3. Shelley, KH. Photoplethysmography: Beyond the Calculation of Arterial Oxygen Saturation and Heart Rate. Anesth Analg (2007). 105:531–6. doi:10.1213/01.ane.0000269512.82836.c9

4. Cicone, A, and Wu, H-T. How Nonlinear-Type Time-Frequency Analysis Can Help in Sensing Instantaneous Heart Rate and Instantaneous Respiratory Rate from Photoplethysmography in a Reliable Way. Front Physiol (2017). 8:701.

5. Wardhan, R, and Shelley, K. Peripheral Venous Pressure Waveform. Curr Opin Anesthesiol (2009). 22:814–21.

6. Wu, H-T, Alian, A, and Shelley, KH. A New Approach to Complicated and Noisy Physiological Waveforms Analysis: Peripheral Venous Pressure Waveform as an Example. J Clin Monit Comput (2020). 1–17.

7. Flandrin, P. Time-Frequency/Time-Scale Analysis. Wavelet Analysis and Its Applications, Vol. 10. San Diego: Academic Press (1999).

8. Huang, NE, Shen, Z, Long, SR, Wu, MC, Zheng, Q, et al. The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-stationary Time Series Analysis. Proc R Soc Lond A (1998). 454:903–95. doi:10.1098/rspa.1998.0193

9. Nahon, M. Phase Evaluation and Segmentation. PhD Thesis. New Haven, CT: Yale University (2000).

10. Daubechies, I, Lu, J, and Wu, H-T. Synchrosqueezed Wavelet Transforms: An Empirical Mode Decomposition-like Tool. Appl Comput Harmonic Anal (2011). 30:243–61. doi:10.1016/j.acha.2010.08.002

11. Daubechies, I, Wang, Y, and Wu, H-T. ConceFT: Concentration of Frequency and Time via a Multitapered Synchrosqueezed Transform. Phil Trans R Soc A (2016). 374:20150193. doi:10.1098/rsta.2015.0193

12. Mallat, S. Group Invariant Scattering. Comm Pure Appl Math (2012). 65:1331–98. doi:10.1002/cpa.21413

13. Lin, C-Y, Su, L, and Wu, H-T. Wave-Shape Function Analysis. J Fourier Anal Appl (2018). 24:451–505. doi:10.1007/s00041-017-9523-0

14. Wang, S-C, Wu, H-T, Huang, P-H, Chang, C-H, Ting, C-K, and Lin, Y-T. Novel Imaging Revealing Inner Dynamics for Cardiovascular Waveform Analysis via Unsupervised Manifold Learning. Anesth Analg (2020). 130:1244–54. doi:10.1213/ane.0000000000004738

15. Gordon, NJ, Salmond, DJ, and Smith, AFM. Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation. IEE Proc F Radar Signal Process (1993). 140:107–13. doi:10.1049/ip-f-2.1993.0015

16. Liu, JS. Monte Carlo Strategies in Scientific Computing. 2nd ed. New York: Springer (2008).

17. Chan, HP, and Lai, TL. A General Theory of Particle Filters in Hidden Markov Models and Some Applications. Ann Stat (2013). 41(6):2877–904. doi:10.1214/13-aos1172

18. Lai, TL. Recursive Particle Filters for Joint State and Parameter Estimation in Hidden Markov Models with Multifaceted Applications. In: Proceedings of the 8th International Congress of Chinese Mathematicians (ICCM). Boston: International Press (2021). In press.

19. Chopin, N. A Sequential Particle Filter Method for Static Models. Biometrika (2002). 89(3):539–52. doi:10.1093/biomet/89.3.539

20. Cotter, SL, Roberts, GO, Stuart, AM, and White, D. MCMC Methods for Functions: Modifying Old Algorithms to Make Them Faster. Stat Sci (2013). 28:424–46. doi:10.1214/13-sts421

21. Andrieu, C, Doucet, A, and Holenstein, R. Particle Markov Chain Monte Carlo Methods. J R Stat Soc Ser B (2010). 72(3):269–342. doi:10.1111/j.1467-9868.2009.00736.x

22. Chopin, N, Jacob, PE, and Papaspiliopoulos, O. SMC2: An Efficient Algorithm for Sequential Analysis of State Space Models. J R Stat Soc Ser B (2013). 75(3):397–426. doi:10.1111/j.1467-9868.2012.01046.x

23. Bezanson, J, Edelman, A, Karpinski, S, and Shah, VB. Julia: A Fresh Approach to Numerical Computing. SIAM Rev (2017). 59(1):65–98. doi:10.1137/141000671

24. Yalamanchili, P, Arshad, U, Mohammed, Z, Garigipati, P, Entschev, P, Kloppenborg, B, et al. ArrayFire - A High Performance Software Library for Parallel Computing with an Easy-to-Use API (2015).

25. Zhu, M. Improving and Accelerating Particle-Based Probabilistic Inference. Ph.D. Dissertation in Computer Science. Stanford University (2021). doi:10.1145/3450439.3451855

26. Haddad, GG, Leistner, HL, Lai, TL, and Mellins, RB. Ventilation and Ventilatory Pattern during Sleep in Aborted Sudden Infant Death Syndrome. Pediatr Res (1981). 15(5):879–83. doi:10.1203/00006450-198105000-00011

27. Bazzy, AR, and Haddad, GG. Diaphragmatic Fatigue in Unanesthetized Adult Sheep. J Appl Physiol (1984). 57(1):182–90. doi:10.1152/jappl.1984.57.1.182

28. Haddad, GG, Jeng, HJ, Bazzy, AR, and Lai, TL. Within-Breath Electromyographic Changes during Loaded Breathing in Adult Sheep. J Appl Physiol (1986). 61(4):1316–21. doi:10.1152/jappl.1986.61.4.1316

29. Haddad, GG, Jeng, HJ, Lee, SH, and Lai, TL. Rhythmic Variations in R-R Interval during Sleep and Wakefulness in Puppies and Dogs. Am J Physiol Heart Circ Physiol (1984). 247(1):H67–73. doi:10.1152/ajpheart.1984.247.1.h67

30. Haddad, GG. The Multifaceted Sudden Infant Death Syndrome. Curr Opin Pediatr (1992). 4(3):426–30. doi:10.1097/00008480-199206000-00006

31. Chen, J, Heyse, JF, and Lai, TL. Medical Product Safety Evaluation: Biological Models and Statistical Methods. Boca Raton, FL: Chapman & Hall/CRC (2018). doi:10.1201/9781351021982

32. Liu, G-R, Lo, Y-L, Malik, J, Sheu, Y-C, and Wu, H-T. Diffuse to Fuse EEG Spectra - Intrinsic Geometry of Sleep Dynamics for Classification. Biomed Signal Process Control (2019). 55:101576.

33. Liu, G-R, Lin, T-Y, Wu, H-T, Sheu, Y-C, Liu, C-L, Liu, W-T, et al. Large-Scale Assessment of Consistency in Sleep Stage Scoring Rules Among Multiple Sleep Centers Using an Interpretable Machine Learning Algorithm. J Clin Sleep Med (2021). 17(2):159–66. doi:10.5664/jcsm.8820

Keywords: hidden Markov model, time–frequency analysis, oscillatory components, biorhythms, empirical bayes, uncertainty quantification

Citation: Wu H-T, Lai TL, Haddad GG and Muotri A (2021) Oscillatory Biomedical Signals: Frontiers in Mathematical Models and Statistical Analysis. Front. Appl. Math. Stat. 7:689991. doi: 10.3389/fams.2021.689991

Received: 01 April 2021; Accepted: 11 May 2021;
Published: 15 July 2021.

Edited by:

Charles K. Chui, Hong Kong Baptist University, Hong Kong

Reviewed by:

Xiaoming Huo, Georgia Institute of Technology, United States
Haiyan Cai, University of Missouri–St. Louis, United States

Copyright © 2021 Wu, Lai, Haddad and Muotri. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Tze Leung Lai, lait@stanford.edu
