<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Netw. Physiol.</journal-id>
<journal-title>Frontiers in Network Physiology</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Netw. Physiol.</abbrev-journal-title>
<issn pub-type="epub">2674-0109</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">845327</article-id>
<article-id pub-id-type="doi">10.3389/fnetp.2022.845327</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Network Physiology</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Partial Directed Coherence and the Vector Autoregressive Modelling Myth and a Caveat</article-title>
<alt-title alt-title-type="left-running-head">Baccal&#xe1; and Sameshima</alt-title>
<alt-title alt-title-type="right-running-head">PDC: Ditching the VAR Myth</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>Luiz A.</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/82597/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sameshima</surname>
<given-names>Koichi</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/5651/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Laborat&#xf3;rio de Comunica&#xe7;&#xf5;es e Sinais</institution>, <institution>Departamento de Telecomunica&#xe7;&#xf5;es e Controle</institution>, <institution>Escola Polit&#xe9;cnica</institution>, <institution>Universidade de S&#xe3;o Paulo</institution>, <addr-line>S&#xe3;o Paulo</addr-line>, <country>Brazil</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Departamento de Radiologia e Oncologia</institution>, <institution>Faculdade de Medicina</institution>, <institution>Universidade de S&#xe3;o Paulo</institution>, <addr-line>S&#xe3;o Paulo</addr-line>, <country>Brazil</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/34129/overview">Luca Faes</ext-link>, University of Palermo, Italy</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/65656/overview">Daniele Marinazzo</ext-link>, Ghent University, Belgium</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/589746/overview">Yuri Antonacci</ext-link>, University of Palermo, Italy</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Luiz A. Baccal&#xe1;, <email>baccala@lcs.poli.usp.br</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Information Theory, a section of the journal Frontiers in Network Physiology</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>28</day>
<month>04</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>2</volume>
<elocation-id>845327</elocation-id>
<history>
<date date-type="received">
<day>29</day>
<month>12</month>
<year>2021</year>
</date>
<date date-type="accepted">
<day>14</day>
<month>02</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2022 Baccal&#xe1; and Sameshima.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Baccal&#xe1; and Sameshima</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Here we dispel the lingering myth that Partial Directed Coherence is a Vector Autoregressive (VAR) Modelling dependent concept. In fact, our examples show that it is <italic>spectral factorization</italic> that lies at its heart, for which VAR modelling is a mere, albeit very efficient and convenient, device. This applies to Granger Causality estimation procedures in general and also includes instantaneous Granger effects. Care, however, must be exercised with multivariate data generated through nonminimum phase mechanisms, as their connectivity may be <italic>incorrectly</italic> captured.</p>
</abstract>
<kwd-group>
<kwd>partial directed coherence</kwd>
<kwd>total partial directed coherence</kwd>
<kwd>spectral factorization</kwd>
<kwd>Granger causality</kwd>
<kwd>time series connectivity modelling</kwd>
<kwd>nonminimum phase systems</kwd>
</kwd-group>
<contract-sponsor id="cn001">Conselho Nacional de Desenvolvimento Cient&#xed;fico e Tecnol&#xf3;gico<named-content content-type="fundref-id">10.13039/501100003593</named-content>
</contract-sponsor>
<contract-sponsor id="cn002">Funda&#xe7;&#xe3;o de Amparo &#xe0; Pesquisa do Estado de S&#xe3;o Paulo<named-content content-type="fundref-id">10.13039/501100001807</named-content>
</contract-sponsor>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>The aim of Granger time series connectivity modelling is to examine how observations from different simultaneously observed time series may be related, in the hope of exposing possible mechanisms behind their generation. This goal is intrinsically limited by a number of factors, chief among them potential structural artifacts that result from unobserved series (confounders). This, together with the fact that Granger analysis rests exclusively on observations rather than active intervention (<xref ref-type="bibr" rid="B2">Baccal&#xe1; and Sameshima, 2014a</xref>), means that one must characterize interactions as &#x201c;Granger-causal&#x201d; rather than causal in the strictest sense.</p>
<p>In spite of this, Granger Causality remains of interest in situations where intervention is either impossible, such as for phenomena on a geophysical scale like the Solar spot/Melanoma data (<xref ref-type="bibr" rid="B11">Baccal&#xe1; and Sameshima, 2014b</xref>), or undesirable, as in physiological data analysis, where noninvasive methods, at least in the human case, are always to be preferred; in such settings it provides clues as to the dynamics behind the observed variables.</p>
<p>In recent years a vast array of methods has been developed. They originated in economics research following Granger&#x2019;s seminal paper (<xref ref-type="bibr" rid="B22">Granger, 1969</xref>), which used vector autoregression as a device to model data relationships in the time domain. His &#x201c;causality&#x201d; notion rests on how well knowledge of one time series&#x2019;s past can enhance one&#x2019;s ability to predict another time series, which, once vindicated, implies their connectivity. Though initially a strictly bivariate concept, the idea has been extended to the analysis of more than two simultaneously observed time series in an attempt to disentangle the effect of other series that might be acting as interaction confounders to pairwise observations (<xref ref-type="bibr" rid="B3">Baccal&#xe1; and Sameshima, 2001a</xref>). Historically, most developments that followed rest on the work of <xref ref-type="bibr" rid="B21">Geweke (1984)</xref>, who used Vector Autoregressive (VAR) modelling of more than two time series as a preliminary step to subtract the effect of the other observed series from the time series pair of interest. After that subtraction, the method consists of comparing the power of the prediction errors obtained when the past of a series is taken into account with that obtained when it is not.</p>
<p>Much progress on the estimation and inference of Granger time domain representations has been made since then and can be found in (<xref ref-type="bibr" rid="B28">L&#xfc;tkepohl, 2005</xref>).</p>
<p>As a general rule, much of what followed is patterned on the representation of temporal data in terms of &#x201c;output-only&#x201d; systems, i.e., systems where the observed time series, <italic>x</italic>
<sub>1</sub>(<italic>n</italic>), <italic>&#x2026;</italic>, <italic>x</italic>
<sub>
<italic>N</italic>
</sub>(<italic>n</italic>), are represented as conveniently filtered versions of white noise&#x2014;the so called innovations.</p>
<p>Because VAR models can be naturally interpreted in terms of linear filtering, some aspects of a spectral interpretation of the Granger connectivity scenario were already present in the work of <xref ref-type="bibr" rid="B21">Geweke (1984)</xref>. Further specifics have been developed since (<xref ref-type="bibr" rid="B28">L&#xfc;tkepohl, 2005</xref>; <xref ref-type="bibr" rid="B13">Barrett and Seth, 2010</xref>).</p>
<p>The spectral treatment of these problems, especially in connection to EEG data, which are naturally characterized in terms of oscillatory behaviour, was boosted by the introduction of the <italic>Directed Transfer Function</italic> (DTF) (<xref ref-type="bibr" rid="B25">Kami&#x144;ski and Blinowska, 1991</xref>) and later of <italic>partial directed coherence</italic> (PDC) (<xref ref-type="bibr" rid="B8">Baccal&#xe1; and Sameshima, 2001b</xref>). Both quantities employed VAR modelling for their definition. Both have also since evolved into more accurate, and thus more appropriate, measures; see (<xref ref-type="bibr" rid="B9">Baccal&#xe1; and Sameshima, 2021a</xref>) for their development. A <italic>leitmotif</italic> of those improvements was the growing realization of the importance, and consequent incorporation, of the estimated covariance of the innovations noise driving the observed outputs <italic>x</italic>
<sub>
<italic>i</italic>
</sub>(<italic>n</italic>) (<xref ref-type="bibr" rid="B4">Baccal&#xe1; et al., 2007</xref>; <xref ref-type="bibr" rid="B37">Takahashi et al., 2010</xref>; <xref ref-type="bibr" rid="B9">Baccal&#xe1; and Sameshima, 2021a</xref>; <xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>).</p>
<p>In fact, explicit consideration of innovations covariance effects is important in connection to the so-called &#x201c;instantaneous&#x201d; Granger causality (iGC) and is helpful in unveiling aspects of cardio-hemodynamic behaviour (<xref ref-type="bibr" rid="B19">Faes, 2014</xref>). Much as in the case of GC itself, iGC was originally seen only as a time domain aspect. There were early efforts to portray it in the frequency domain (<xref ref-type="bibr" rid="B20">Faes and Nollo, 2010</xref>; <xref ref-type="bibr" rid="B19">Faes, 2014</xref>); more general efforts have appeared only recently, with <xref ref-type="bibr" rid="B17">Cohen et al. (2019)</xref> and <xref ref-type="bibr" rid="B30">Nuzzi et al. (2021)</xref> along Geweke&#x2019;s line of description, and along PDC/DTF lines (<xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>).</p>
<p>All of the latter developments have relied heavily on VAR modelling. This paper, by contrast, aims to dispel the notion that PDC (<xref ref-type="bibr" rid="B8">Baccal&#xe1; and Sameshima, 2001b</xref>) (and DTF, its dual) or any of its related quantifiers require vector autoregressive (VAR) modelling as a mandatory prerequisite. This notion, coupled with limited familiarity with VAR modelling, may have hindered their spread as methods of choice for Granger time series connectivity modelling among non-specialists in time series. We show here that reliance on VAR modelling is not a must but rather a matter of convenience, even though PDC and DTF were originally introduced with the help of VAR models.</p>
<p>As we were alerted during the review process of this paper, an early precursor to the present developments is contained in (<xref ref-type="bibr" rid="B24">Jachan et al., 2009</xref>), which undeservedly does not seem to have attracted much of a following: it has just 22 citations in the Web of Science at the time of this writing, only a small fraction of which reflect actual practical employment of the method, mostly by its proponents. The present exposition not only confirms those results but provides evidence that they hold for more general PDC/DTF versions as well.</p>
<p>To dispel the VAR reliance misconception we employ a set of examples comprising a variety of methods, parametric and nonparametric, that, as we show next, yield essentially the same results. The methodological equivalence between them holds even for total PDC (tPDC) and total DTF (tDTF) as defined in (<xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>), which represent recently introduced extensions that incorporate the effects of instantaneous Granger causality into connectivity descriptions.</p>
<p>For brevity, we show results only for total PDC, since it incorporates into PDC a consistent frequency domain description of instantaneous Granger interactions that automatically extends to total DTF, given their duality (<xref ref-type="bibr" rid="B9">Baccal&#xe1; and Sameshima, 2021a</xref>; <xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>).</p>
<p>The rest of this paper is organized as follows: <xref ref-type="sec" rid="s2">Section 2</xref> reviews the theoretical basis and is followed in <xref ref-type="sec" rid="s3">Section 3</xref> by a brief description of the methods employed in the comparative computations, which are illustrated in <xref ref-type="sec" rid="s4">Section 4</xref> and discussed in <xref ref-type="sec" rid="s5">Section 5</xref>, leading to the conclusion in <xref ref-type="sec" rid="s6">Section 6</xref> that tPDC/PDC (tDTF/DTF) representations are essentially canonical factors of the joint power spectral density of the data, which portrays the relationship between multivariate data.</p>
<p>A concept that turns out to be key in the present setup is that of <italic>spectral factorization</italic> and the notion of a <italic>minimum phase</italic> spectral factor covered in more detail in <xref ref-type="sec" rid="s2-1">Section 2.1</xref>.</p>
<p>The concept of a <italic>minimum</italic> versus a <italic>nonminimum</italic> phase system is important for our discussion. It is briefly examined in the development that follows, where we show that connectivity may be falsely inferred when nonminimum phase mechanisms are behind the data generation process.</p>
</sec>
<sec id="s2">
<title>2 Mathematical Considerations</title>
<sec id="s2-1">
<title>2.1 General Linear Models With Rational Spectra</title>
<p>A general class of linear stationary multivariate processes <inline-formula id="inf1">
<mml:math id="m1">
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x2026;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula> is represented (<xref ref-type="bibr" rid="B28">L&#xfc;tkepohl, 2005</xref>) by:<disp-formula id="e1">
<mml:math id="m2">
<mml:mi mathvariant="bold">x</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="bold-italic">x</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mi mathvariant="bold">w</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:math>
<label>(1)</label>
</disp-formula>where <inline-formula id="inf2">
<mml:math id="m3">
<mml:mi mathvariant="bold">w</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo stretchy="false">[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
<mml:mo>&#x2026;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo stretchy="false">]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mi>T</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
</inline-formula> is a stationary (zero mean without loss of generality) multivariate innovations process with covariance matrix <bold>&#x3a3;</bold>
<sub>
<bold>w</bold>
</sub>. The process defined by (1) is termed a Vector Autoregressive Moving Average process, denoted VARMA (<italic>p</italic>, <italic>q</italic>), whose structure is defined by the <inline-formula id="inf3">
<mml:math id="m4">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula> matrices (<xref ref-type="bibr" rid="B28">L&#xfc;tkepohl, 2005</xref>). VAR processes and vector moving average (VMA) processes are special cases, respectively when <inline-formula id="inf4">
<mml:math id="m5">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2200;</mml:mo>
<mml:mi>s</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>0</mml:mn>
</mml:math>
</inline-formula>, or <inline-formula id="inf5">
<mml:math id="m6">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold-italic">r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2200;</mml:mo>
<mml:mi>r</mml:mi>
</mml:math>
</inline-formula>. The equivalences between VAR(<italic>p</italic>) and VMA(<italic>&#x221e;</italic>), and between VMA(<italic>q</italic>) and VAR(<italic>&#x221e;</italic>), are well known, where <italic>p</italic> and <italic>q</italic> refer respectively to the AR and MA orders that make up the model.</p>
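As a concrete illustration of the generating mechanism in (1), the sketch below simulates a bivariate VARMA(1,1) process in Python; all coefficient values are illustrative choices of ours, not taken from any model in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative VARMA(1,1) coefficients (our own choices, not from the paper).
A1 = np.array([[0.5, 0.3],
               [0.0, 0.4]])   # autoregressive matrix A_1
B0 = np.eye(2)                # moving average matrix B_0
B1 = np.array([[0.2, 0.0],
               [0.1, 0.2]])   # moving average matrix B_1

n_samples, burn = 2000, 200
w = rng.standard_normal((n_samples + burn + 1, 2))  # innovations w(n), Sigma_w = I
x = np.zeros((n_samples + burn + 1, 2))

# Eq. (1) with p = q = 1: x(n) = A_1 x(n-1) + B_0 w(n) + B_1 w(n-1)
for n in range(1, n_samples + burn + 1):
    x[n] = A1 @ x[n - 1] + B0 @ w[n] + B1 @ w[n - 1]

x = x[burn + 1:]  # discard the startup transient
```

Discarding the initial samples lets the simulated process settle into its stationary regime before any spectral analysis.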
<p>We implicitly assume that (1) is stable, i.e., the associated <bold>x</bold>(<italic>n</italic>) is wide sense stationary. For simplicity we consider only the case of finite <italic>p</italic> and <italic>q</italic>. Stability is guaranteed if the magnitudes of the roots of<disp-formula id="e2">
<mml:math id="m7">
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>t</mml:mi>
<mml:mspace width="0.28em"/>
<mml:mi mathvariant="fraktur">A</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:math>
<label>(2)</label>
</disp-formula>are less than 1 for<disp-formula id="e3">
<mml:math id="m8">
<mml:mi mathvariant="fraktur">A</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold">I</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(3)</label>
</disp-formula>where det stands for the determinant.</p>
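The stability condition on the roots of (2) can be checked numerically: for a VAR(<italic>p</italic>) these roots coincide with the eigenvalues of the companion matrix. A minimal sketch follows; the helper name <monospace>var_is_stable</monospace> and the coefficient values are ours, for illustration only.

```python
import numpy as np

def var_is_stable(A_list):
    """Stability test for a VAR(p): the roots of det A(z) = 0 in Eq. (2)
    are the eigenvalues of the companion matrix, all of which must lie
    strictly inside the unit circle."""
    p = len(A_list)
    N = A_list[0].shape[0]
    companion = np.zeros((N * p, N * p))
    companion[:N, :] = np.hstack(A_list)           # top block row [A_1 ... A_p]
    if p > 1:
        companion[N:, :-N] = np.eye(N * (p - 1))   # identity shift below
    return np.max(np.abs(np.linalg.eigvals(companion))) < 1.0

# Illustrative coefficient matrices (ours, not the paper's):
print(var_is_stable([np.array([[0.5, 0.3], [0.0, 0.4]])]))  # eigenvalues 0.5, 0.4 -> True
print(var_is_stable([np.array([[1.1, 0.0], [0.0, 0.5]])]))  # eigenvalue 1.1 -> False
```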
<p>
<statement content-type="definition" id="definition_1">
<label>Definition 1</label>
<p>&#x7c; The system represented by (1) is minimum phase if the magnitudes of the roots of<disp-formula id="e4">
<mml:math id="m9">
<mml:mi>d</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>t</mml:mi>
<mml:mspace width="0.28em"/>
<mml:mi mathvariant="fraktur">B</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:math>
<label>(4)</label>
</disp-formula>are less than or equal to 1 for<disp-formula id="e5">
<mml:math id="m10">
<mml:mi mathvariant="fraktur">B</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mi>z</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(5)</label>
</disp-formula>
</p>
<p>
<xref ref-type="statement" rid="definition_1">Definition 1</xref> guarantees that stable <bold>w</bold>(<italic>n</italic>) innovations sequences for <italic>n</italic> &#x2265; 0 may be found that lead to the observations, i.e., the system defined by (1) has a stable inverse.</p>
</statement>
</p>
<p>
<statement content-type="remark" id="remark_1">
<label>Remark 1</label>
<p>&#x7c; Strictly speaking when the roots in (5) are equal to 1, the impulse response of the inverse is merely bounded.</p>
</statement>
</p>
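Definition 1 can be verified numerically in simple cases. For a VMA(1) with B<sub>0</sub> = I, det B(z) = det(I + B<sub>1</sub> z<sup>&#x2212;1</sup>) = z<sup>&#x2212;N</sup> det(zI + B<sub>1</sub>), so the roots of (4) are the eigenvalues of &#x2212;B<sub>1</sub>. A minimal sketch under these assumptions, with illustrative matrices of our own choosing:

```python
import numpy as np

def vma1_is_minimum_phase(B1):
    """Minimum phase test for a VMA(1) with B_0 = I: here
    det B(z) = det(I + B_1 z^{-1}) = z^{-N} det(zI + B_1), so the
    roots of Eq. (4) are the eigenvalues of -B_1; Definition 1
    requires all of them to have magnitude <= 1."""
    return np.max(np.abs(np.linalg.eigvals(-B1))) <= 1.0

# Illustrative matrices (ours, not the paper's):
print(vma1_is_minimum_phase(np.array([[0.2, 0.0], [0.1, 0.2]])))  # roots -0.2, -0.2 -> True
print(vma1_is_minimum_phase(np.array([[2.0, 0.0], [0.0, 0.3]])))  # root -2.0 -> False
```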
<p>
<statement content-type="remark" id="remark_2">
<label>Remark 2</label>
<p>&#x7c; When used as a data generating mechanism for <bold>x</bold>(<italic>n</italic>), (1) does not need to be minimum phase. However, data modelling through (1) always leads to an estimated minimum phase counterpart system. This follows from the fact that only second order statistics are used for estimating the coefficients of (1). When the data are Gaussian, this is the only available alternative, as higher order statistics are redundant and offer no additional information that might expose any evidence of possible phase nonminimality.</p>
<p>It is easy to show that the power spectral density matrix of <bold>x</bold>(<italic>n</italic>) (1) is given by:<disp-formula id="e6">
<mml:math id="m11">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="fraktur">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mspace width="-0.17em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.17em"/>
<mml:mi mathvariant="fraktur">B</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.17em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">w</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="0.17em"/>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="fraktur">B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mspace width="-0.17em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.17em"/>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="fraktur">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>H</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mspace width="-0.17em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:math>
<label>(6)</label>
</disp-formula>where<disp-formula id="e7">
<mml:math id="m12">
<mml:mi mathvariant="fraktur">A</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold">I</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>p</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">A</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold">j</mml:mi>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(7)</label>
</disp-formula>
<disp-formula id="e8">
<mml:math id="m13">
<mml:mi mathvariant="fraktur">B</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:munderover accentunder="false" accent="false">
<mml:mrow>
<mml:mo>&#x2211;</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>0</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mi>q</mml:mi>
</mml:mrow>
</mml:munderover>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mi>e</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold">j</mml:mi>
<mml:mn>2</mml:mn>
<mml:mi>&#x3c0;</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
</mml:math>
<label>(8)</label>
</disp-formula>for 0 &#x2264; &#x7c;<italic>&#x3bd;</italic>&#x7c; &#x3c; 0.5, where <italic>&#x3bd;</italic> is the normalized frequency and <inline-formula id="inf6">
<mml:math id="m14">
<mml:mi mathvariant="bold">j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msqrt>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msqrt>
</mml:math>
</inline-formula>. Naturally (7) and (8) are associated with making <italic>z</italic> &#x3d; <italic>e</italic>
<sup>
<bold>j</bold>2<italic>&#x3c0;&#x3bd;</italic>
</sup> in (3) and (5) respectively.</p>
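Equation (6) can be evaluated directly on a frequency grid. The sketch below (our own helper, with illustrative coefficients of our choosing) assembles A(<italic>&#x3bd;</italic>) and B(<italic>&#x3bd;</italic>) from (7) and (8) and returns S<sub><bold>x</bold></sub>(<italic>&#x3bd;</italic>); as a power spectral density matrix, the result is Hermitian at every <italic>&#x3bd;</italic>.

```python
import numpy as np

def spectral_matrix(A_list, B_list, Sigma_w, nu):
    """Evaluate Eq. (6): S_x(nu) = A^{-1}(nu) B(nu) Sigma_w B^H(nu) A^{-H}(nu),
    with A(nu) and B(nu) built from Eqs. (7) and (8)."""
    N = Sigma_w.shape[0]
    A = np.eye(N, dtype=complex)
    for r, Ar in enumerate(A_list, start=1):
        A -= Ar * np.exp(-2j * np.pi * r * nu)        # Eq. (7)
    B = np.zeros((N, N), dtype=complex)
    for s, Bs in enumerate(B_list):
        B += Bs * np.exp(-2j * np.pi * s * nu)        # Eq. (8)
    H = np.linalg.solve(A, B)                         # H(nu) = A^{-1}(nu) B(nu)
    return H @ Sigma_w @ H.conj().T                   # the form of Eq. (9)

# Illustrative VARMA(1,1) coefficients (ours, not the paper's):
A1 = np.array([[0.5, 0.3], [0.0, 0.4]])
B_list = [np.eye(2), np.array([[0.2, 0.0], [0.1, 0.2]])]
S = spectral_matrix([A1], B_list, np.eye(2), nu=0.1)
print(np.allclose(S, S.conj().T))  # Hermitian -> True
```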
<p>It is easy to realize that (6) is of the form<disp-formula id="e9">
<mml:math id="m15">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="fraktur">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.28em"/>
<mml:mspace width="-0.17em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">w</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="0.28em"/>
<mml:mspace width="-0.17em"/>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="fraktur">H</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mspace width="-0.17em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(9)</label>
</disp-formula>containing the frequency dependent factor, <inline-formula id="inf7">
<mml:math id="m16">
<mml:mi mathvariant="fraktur">H</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>, and a frequency independent factor, <bold>&#x3a3;</bold>
<sub>
<bold>w</bold>
</sub>.</p>
</statement>
</p>
<p>
<statement content-type="remark" id="remark_3">
<label>Remark 3</label>
<p>&#x7c; Equations (6) and (9) hold regardless of whether (1) is minimum phase or not.</p>
<p>From (9) it is easy to write the coherency matrix <inline-formula id="inf8">
<mml:math id="m17">
<mml:mi mathvariant="fraktur">C</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> with entries:<disp-formula id="e10">
<mml:math id="m18">
<mml:msub>
<mml:mrow>
<mml:mi>C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>i</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.28em"/>
<mml:msub>
<mml:mrow>
<mml:mi>S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mi>j</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mfrac>
</mml:math>
<label>(10)</label>
</disp-formula>by writing<disp-formula id="e11">
<mml:math id="m19">
<mml:mtable class="eqnarray">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mi mathvariant="fraktur">C</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi mathvariant="fraktur">D</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mspace width="0.17em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.17em"/>
<mml:mi mathvariant="fraktur">D</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi mathvariant="fraktur">D</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mspace width="0.17em"/>
<mml:mi mathvariant="fraktur">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.17em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">w</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mspace width="0.17em"/>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="fraktur">H</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mspace width="-0.17em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.17em"/>
<mml:mi mathvariant="fraktur">D</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mi mathvariant="bold">&#x393;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.17em"/>
<mml:mi mathvariant="fraktur">R</mml:mi>
<mml:mspace width="0.17em"/>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">&#x393;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mspace width="-0.17em"/>
<mml:mspace width="0.22em"/>
<mml:mspace width="-0.17em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(11)</label>
</disp-formula>where <inline-formula id="inf9">
<mml:math id="m20">
<mml:mi mathvariant="fraktur">D</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mo>&#x22c5;</mml:mo>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> is the diag matrix operator, i.e., one that produces a matrix that is zero everywhere except along the diagonal, where it retains the diagonal elements of the operand, so that<disp-formula id="e12">
<mml:math id="m21">
<mml:mi mathvariant="bold">&#x393;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="fraktur">D</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mi mathvariant="fraktur">H</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.17em"/>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(12)</label>
</disp-formula>and<disp-formula id="e13">
<mml:math id="m22">
<mml:mi mathvariant="fraktur">R</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">w</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(13)</label>
</disp-formula>is a correlation matrix with ones along the main diagonal for <inline-formula id="inf10">
<mml:math id="m23">
<mml:mi mathvariant="bold">D</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="fraktur">D</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">w</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
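As a minimal numerical sketch of the normalization in Eq. 10 (the 2 &#xd7; 2 spectral matrix below is a hypothetical single-frequency value, not one of the paper&#x2019;s examples):

```python
import numpy as np

def coherency(S):
    """Coherency matrix of Eqs. 10-11: C = D(S)^(-1/2) S D(S)^(-1/2),
    where the D operator keeps only the diagonal of its operand."""
    d = np.sqrt(np.real(np.diag(S)))  # auto-spectra are real and positive
    return S / np.outer(d, d)

# Hypothetical 2x2 spectral matrix at a single frequency
S = np.array([[2.0, 1.0 + 0.5j],
              [1.0 - 0.5j, 4.0]])
C = coherency(S)
```

By construction the result has a unit diagonal, and each off-diagonal entry is the ordinary coherency S_ij / sqrt(S_ii S_jj).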
<p>Writing (11) as a product of the frequency-dependent part <bold>&#x393;</bold>(<italic>&#x3bd;</italic>) mediated by the correlation matrix <inline-formula id="inf11">
<mml:math id="m24">
<mml:mi mathvariant="fraktur">R</mml:mi>
</mml:math>
</inline-formula> allows one to apply the definition of total DTF matrix (<xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>) as:<disp-formula id="e14">
<mml:math id="m25">
<mml:mi mathvariant="bold">&#x2040;</mml:mi>
<mml:mi mathvariant="bold">&#x0393;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="bold">&#x0393;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi mathvariant="normal">&#x2299;</mml:mi>
<mml:msup>
<mml:mi mathvariant="bold">&#x0393;</mml:mi>
<mml:mi mathvariant="normal">&#x2217;</mml:mi>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x002B;</mml:mo>
<mml:mi mathvariant="bold">&#x0393;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi mathvariant="italic">&#x03C1;</mml:mi>
<mml:mi mathvariant="normal">&#x2299;</mml:mi>
<mml:msup>
<mml:mi mathvariant="bold">&#x0393;</mml:mi>
<mml:mi mathvariant="normal">&#x2217;</mml:mi>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
<label>(14)</label>
</disp-formula>where <inline-formula id="inf12">
<mml:math id="m26">
<mml:mi mathvariant="bold-italic">&#x3c1;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="fraktur">R</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>, and <bold>I</bold>
<sub>
<italic>N</italic>
</sub> is an <italic>N</italic> &#xd7; <italic>N</italic> identity matrix with &#x2299; standing for the Hadamard element-wise matrix product.</p>
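Eq. 14&#x2019;s Hadamard construction can be sketched numerically. The &#x393; and <inline-formula><mml:math><mml:mi mathvariant="fraktur">R</mml:mi></mml:math></inline-formula> values below are hypothetical single-frequency stand-ins, and the second term is read as the ordinary matrix product &#x393;<bold><italic>&#x3c1;</italic></bold> Hadamard-multiplied by &#x393;*:

```python
import numpy as np

def total_dtf(Gamma, R):
    """Total DTF matrix of Eq. 14: Gamma o Gamma* + (Gamma rho) o Gamma*,
    where o is the Hadamard product and rho = R - I holds the off-diagonal
    innovation correlations."""
    rho = R - np.eye(Gamma.shape[0])
    return Gamma * Gamma.conj() + (Gamma @ rho) * Gamma.conj()

# Hypothetical single-frequency values (not taken from the paper's examples)
Gamma = np.array([[0.8, 0.2j], [0.1, 0.9]])
R = np.array([[1.0, 0.3], [0.3, 1.0]])
tG = total_dtf(Gamma, R)
```

When instantaneous Granger causality is absent (<bold><italic>&#x3c1;</italic></bold> = 0, i.e. diagonal &#x3a3;<sub><bold>w</bold></sub>), the second term vanishes and the expression reduces to |&#x393;|&#xb2;, the squared directed coherence, as stated in the text.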
<p>The entries <italic>i</italic>, <italic>j</italic> from <inline-formula id="inf912">
<mml:math id="m926">
<mml:mi mathvariant="normal">&#x2040;</mml:mi>
<mml:mi mathvariant="bold">&#x0393;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> reduce to the squared absolute value of the directed coherence from <italic>j</italic> to <italic>i</italic>, a scale-invariant form of DTF (<xref ref-type="bibr" rid="B6">Baccal&#xe1; et al., 1998</xref>), when instantaneous Granger causality is absent. <xref ref-type="disp-formula" rid="e14">Eq. 14</xref> describes what we have termed <italic>Total Granger Influentiability</italic> (<xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>).</p>
<p>An entirely parallel development allows defining total partial directed coherence (<xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>), taking advantage of the fact that the partial coherence matrix can be shown to equal:<disp-formula id="e15">
<mml:math id="m27">
<mml:mtable class="eqnarray">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:mi mathvariant="fraktur">K</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="fraktur">C</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right"/>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">&#x3a0;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>H</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mspace width="-0.17em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.17em"/>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="fraktur">R</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mspace width="0.17em"/>
<mml:mi mathvariant="bold">&#x3a0;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:math>
<label>(15)</label>
</disp-formula>for<disp-formula id="e16">
<mml:math id="m28">
<mml:mi mathvariant="bold">&#x3a0;</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="bold">D</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mrow>
<mml:mi mathvariant="fraktur">H</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>H</mml:mi>
</mml:mrow>
</mml:msup>
<mml:mspace width="-0.17em"/>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mspace width="0.17em"/>
<mml:mi mathvariant="fraktur">D</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">S</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">x</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
</mml:mfenced>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mspace width="0.17em"/>
<mml:msup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="bold">D</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(16)</label>
</disp-formula>and<disp-formula id="e17">
<mml:math id="m29">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="fraktur">R</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="bold">D</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mspace width="0.17em"/>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
<mml:mspace width="0.17em"/>
<mml:msup>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="bold">D</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
</mml:math>
<label>(17)</label>
</disp-formula>which is a partial correlation matrix between the <bold>
<italic>w</italic>
</bold>
<sub>
<italic>i</italic>
</sub>(<italic>n</italic>) innovations where <inline-formula id="inf13">
<mml:math id="m30">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="bold">D</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mi mathvariant="fraktur">D</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mrow>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
<p>The form in (15) is what allowed us to define total PDC as:<disp-formula id="e18">
<mml:math id="m31">
<mml:mi mathvariant="bold">&#x2040;</mml:mi>
<mml:mi mathvariant="bold">&#x03A0;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi mathvariant="bold">&#x03A0;</mml:mi>
<mml:mi mathvariant="normal">&#x2217;</mml:mi>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi mathvariant="normal">&#x2299;</mml:mi>
<mml:mi mathvariant="bold">&#x03A0;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x002B;</mml:mo>
<mml:msup>
<mml:mi mathvariant="bold">&#x03A0;</mml:mi>
<mml:mi mathvariant="normal">&#x2217;</mml:mi>
</mml:msup>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi mathvariant="normal">&#x2299;</mml:mi>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="italic">&#x03C1;</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
<mml:mi mathvariant="bold">&#x03A0;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>&#x3bd;</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:math>
<label>(18)</label>
</disp-formula>where <inline-formula id="inf14">
<mml:math id="m32">
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="bold-italic">&#x3c1;</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mover accent="true">
<mml:mrow>
<mml:mi mathvariant="fraktur">R</mml:mi>
</mml:mrow>
<mml:mo>&#x303;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">I</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
</mml:math>
</inline-formula>. The <italic>i</italic>, <italic>j</italic> entries describe what we termed the <italic>Total Granger Connectivity</italic> from <italic>j</italic> to <italic>i</italic> (<xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>), which reduce to generalized PDC (<xref ref-type="bibr" rid="B4">Baccal&#xe1; et al., 2007</xref>) when instantaneous Granger causality is absent.</p>
<p>Whenever one can properly write the spectral density matrix as in (9), one may employ the latter quantities to describe multivariate time series within the tPDC-tDTF framework. A case in point which we describe briefly in <xref ref-type="sec" rid="s3-3">Section 3.3</xref> is provided by Wilson&#x2019;s spectral factorization algorithm (<xref ref-type="bibr" rid="B38">Wilson, 1972</xref>), which has been used before in connection with alternative Granger causality characterizations (<xref ref-type="bibr" rid="B18">Dhamala et al., 2008</xref>) and is also behind <xref ref-type="bibr" rid="B24">Jachan et al. (2009)</xref>&#x2019;s results.</p>
</statement>
</p>
</sec>
</sec>
<sec id="s3">
<title>3 Estimation Methods</title>
<p>
<xref ref-type="disp-formula" rid="e1">Eq. 1</xref> was used as a general data-generating mechanism for imposing relationships between the time series we examine in <xref ref-type="sec" rid="s4">Section 4</xref>. The generated data were analysed via the three main approaches briefly described next.</p>
<sec id="s3-1">
<title>3.1 Vector Autoregressive Modelling</title>
<p>Vector autoregressive modelling is a traditional subject (<xref ref-type="bibr" rid="B28">L&#xfc;tkepohl, 2005</xref>). The version used here was implemented in the AsympPDC package (<xref ref-type="bibr" rid="B33">Sameshima and Baccal&#xe1;, 2014</xref>) and employs Nuttall-Strand&#x2019;s method to obtain the autoregression coefficients (<xref ref-type="bibr" rid="B29">Marple, 1987</xref>). One important step in this sort of procedure involves finding the best model order <italic>p</italic>. Here Hannan-Quinn&#x2019;s method was chosen; it is a variant of the better-known Akaike method (<xref ref-type="bibr" rid="B28">L&#xfc;tkepohl, 2005</xref>).</p>
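The order-selection step can be sketched as follows. The ordinary-least-squares VAR fit is a simplified stand-in for Nuttall-Strand&#x2019;s method, and the helper names are hypothetical:

```python
import numpy as np

def fit_var(x, p):
    """OLS VAR(p) fit of data x with shape (ns, N); returns the residual
    covariance matrix (a stand-in for Nuttall-Strand's method)."""
    ns, N = x.shape
    Y = x[p:]
    Z = np.hstack([x[p - k:ns - k] for k in range(1, p + 1)])  # lagged regressors
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    E = Y - Z @ B
    return E.T @ E / (ns - p)

def hannan_quinn(x, p_max):
    """Pick the VAR order minimizing the Hannan-Quinn criterion
    HQ(p) = log det(Sigma_p) + 2 log(log(ns)) p N^2 / ns."""
    ns, N = x.shape
    crit = [np.log(np.linalg.det(fit_var(x, p)))
            + 2.0 * np.log(np.log(ns)) * p * N ** 2 / ns
            for p in range(1, p_max + 1)]
    return 1 + int(np.argmin(crit))

# Demo on data from a hypothetical bivariate VAR(1)
rng = np.random.default_rng(0)
ns = 2048
x = np.zeros((ns, 2))
w = rng.standard_normal((ns, 2))
for n in range(1, ns):
    x[n] = 0.5 * x[n - 1] + w[n]
p_hat = hannan_quinn(x, p_max=5)
```

Hannan-Quinn differs from Akaike&#x2019;s AIC only in the penalty weight, 2 log(log <italic>n<sub>s</sub></italic>) instead of 2, which makes it consistent as <italic>n<sub>s</sub></italic> grows.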
</sec>
<sec id="s3-2">
<title>3.2 Vector Moving Average and Vector Autoregressive Moving Average Modelling</title>
<p>A traditional means of fitting VMA(<italic>q</italic>) and VARMA (<italic>p</italic>, <italic>q</italic>) models is to determine a preliminary VAR model of very large order (<italic>p</italic> &#x3d; 50 was adopted here) and use its residuals <italic>&#x3f5;</italic>
<sub>
<italic>i</italic>
</sub>(<italic>n</italic>) to fit the observed data <italic>x</italic>
<sub>
<italic>j</italic>
</sub>(<italic>n</italic>) through a mock multi-input/multi-output system via least squares. A univariate version of this approach can be found in (<xref ref-type="bibr" rid="B36">Stoica and Moses, 2005</xref>).</p>
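The two-stage scheme just described can be sketched for the VMA case as follows; this is an illustrative NumPy rendition under the stated assumptions, not the paper&#x2019;s exact Matlab code, and the demo model is hypothetical:

```python
import numpy as np

def vma_via_long_var(x, p_long, q):
    """Two-stage VMA(q) fit: (i) a long VAR(p_long) recovers innovation
    estimates, (ii) x(n) is regressed on current and lagged residuals
    via least squares (cf. the univariate scheme in Stoica & Moses, 2005)."""
    ns, N = x.shape
    # stage 1: long VAR residuals as innovation estimates
    Y = x[p_long:]
    Z = np.hstack([x[p_long - k:ns - k] for k in range(1, p_long + 1)])
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    eps = Y - Z @ B
    # stage 2: least-squares regression on eps(n), ..., eps(n - q)
    m = eps.shape[0]
    Yq = Y[q:]
    Zq = np.hstack([eps[q - k:m - k] for k in range(0, q + 1)])
    C, *_ = np.linalg.lstsq(Zq, Yq, rcond=None)
    return C  # stacked MA coefficients: first N rows are lag 0, and so on

# Demo: hypothetical bivariate VMA(1), x(n) = w(n) + Theta w(n-1)
rng = np.random.default_rng(0)
w = rng.standard_normal((4097, 2))
Theta = np.array([[0.0, 0.5], [0.0, 0.0]])
x = w[1:] + w[:-1] @ Theta.T
C_hat = vma_via_long_var(x, p_long=50, q=1)
```

With enough data the lag-0 block approaches the identity and the lag-1 block approaches &#x398;&#x1d40;, recovering the moving average structure.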
<p>In practical applications, <italic>p</italic> and <italic>q</italic> can be determined by minimizing model order choice functions as in Akaike&#x2019;s method. Whereas minimizing Akaike-type penalization is trivial in the VMA case, a two-dimensional search over tentative <italic>p</italic> and <italic>q</italic> is required in the VARMA case. To simplify matters, we have employed the theoretical model orders to obtain the estimates here.</p>
</sec>
<sec id="s3-3">
<title>3.3 Wilson&#x2019;s Algorithm</title>
<p>Wilson&#x2019;s method is an iterative procedure that decomposes (9) into estimates for <inline-formula id="inf15">
<mml:math id="m33">
<mml:mi mathvariant="fraktur">H</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> and <bold>&#x3a3;</bold>
<sub>
<bold>w</bold>
</sub> (<xref ref-type="bibr" rid="B38">Wilson, 1972</xref>). It starts by guessing a <inline-formula id="inf16">
<mml:math id="m34">
<mml:mi mathvariant="fraktur">H</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> under the restriction that it represent filters whose impulse responses are identically zero for negative time (the so-called filter causality condition; such filters are sometimes referred to as nonanticipative, since their output cannot anticipate the input). The solution essentially amounts to Newton&#x2019;s root-finding iterations repeated until a prescribed maximum error is achieved. In the present case, a maximum error of 10<sup>&#x2212;6</sup> was adopted.</p>
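A compact sketch of the iteration follows. The plus-operator step enforces the zero-for-negative-time condition by zeroing negative lags and halving lag zero; full implementations add a lag-zero triangularization for uniqueness that is omitted here, so this is illustrative only:

```python
import numpy as np

def wilson_factorize(S, n_iter=200, tol=1e-6):
    """Sketch of Wilson's spectral factorization: given S(nu) sampled on a
    uniform frequency grid over [0, 1) (shape nf x N x N), return a causal
    factor psi satisfying psi psi^H = S at every frequency."""
    nf, N, _ = S.shape
    psi = np.tile(np.eye(N, dtype=complex), (nf, 1, 1))  # initial guess
    for _ in range(n_iter):
        ipsi = np.linalg.inv(psi)
        g = ipsi @ S @ ipsi.conj().transpose(0, 2, 1) + np.eye(N)
        glag = np.fft.ifft(g, axis=0)   # to the lag (time) domain
        glag[0] *= 0.5                  # halve lag zero
        glag[nf // 2 + 1:] = 0          # kill negative lags (causality)
        psi_new = psi @ np.fft.fft(glag, axis=0)
        if np.max(np.abs(psi_new - psi)) < tol:
            return psi_new
        psi = psi_new
    return psi

# Demo on a frequency-flat (white) spectrum: the factor must square to S
S_demo = np.tile(np.array([[2.0, 1.0], [1.0, 3.0]], dtype=complex), (8, 1, 1))
psi_demo = wilson_factorize(S_demo)
```

For a frequency-flat spectrum the loop reduces to a Newton iteration for the matrix square root, which makes the stopping criterion easy to verify.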
<p>Wilson&#x2019;s method has been used before in connection with other Granger causality descriptions both related (<xref ref-type="bibr" rid="B24">Jachan et al., 2009</xref>) and directly unrelated (<xref ref-type="bibr" rid="B18">Dhamala et al., 2008</xref>) to PDC/DTF descriptions. It has the advantage that it can be applied to nonparametric spectral estimates, whether they are obtained by periodogram smoothing (<xref ref-type="bibr" rid="B31">Percival and Walden, 1993</xref>) or other means like wavelets (<xref ref-type="bibr" rid="B27">Lima et al., 2020</xref>).</p>
<p>The spectral estimates used here (henceforth referred to as <bold>WN</bold>, nonparametric Wilson estimates) employed Welch&#x2019;s method as implemented in Matlab&#x2019;s cpsd.m function with von Hann&#x2019;s data window and 50% segment overlap (<xref ref-type="bibr" rid="B31">Percival and Walden, 1993</xref>).</p>
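A NumPy-only stand-in for that cpsd.m computation with the quoted settings (Hann window, 50% overlap, 256-point segments) might look like:

```python
import numpy as np

def welch_cross_spectra(x, nseg=256):
    """Welch cross-spectral matrix estimate with a Hann window and 50%
    segment overlap. Returns a two-sided estimate of shape (nseg, N, N),
    normalized so white noise of unit variance gives a flat spectrum of 1."""
    ns, N = x.shape
    win = np.hanning(nseg)
    U = np.sum(win ** 2)                          # window power normalization
    starts = range(0, ns - nseg + 1, nseg // 2)   # 50% overlap
    S = np.zeros((nseg, N, N), dtype=complex)
    for s in starts:
        X = np.fft.fft(win[:, None] * x[s:s + nseg], axis=0)
        S += X[:, :, None] * X[:, None, :].conj() # periodogram outer products
    return S / (len(starts) * U)

# Demo: unit-variance white noise should give a flat spectrum near 1
rng = np.random.default_rng(2)
xw = rng.standard_normal((16384, 2))
Sw = welch_cross_spectra(xw)
```

The resulting matrix is Hermitian at each frequency, as a spectral density estimate must be, and can be fed directly to a Wilson-type factorization.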
<p>The reader may obtain a working Python implementation in (<xref ref-type="bibr" rid="B27">Lima et al., 2020</xref>); a similar Matlab version was used here.</p>
</sec>
<sec id="s3-4">
<title>3.4 Brief Comments</title>
<p>The time series modelling methods of <xref ref-type="sec" rid="s3-1">Sections 3.1</xref>, <xref ref-type="sec" rid="s3-2">3.2</xref> are essentially least-squares approaches. Wilson&#x2019;s algorithm, on the other hand, is a numerical square-rooting procedure that also achieves the spectral factorization of the power spectral density matrix <bold>S</bold>(<italic>&#x3bd;</italic>). In all cases, one obtains the so-called minimum phase spectral factor represented by <inline-formula id="inf17">
<mml:math id="m35">
<mml:mi mathvariant="fraktur">H</mml:mi>
<mml:mrow>
<mml:mo stretchy="false">(</mml:mo>
<mml:mrow>
<mml:mi>&#x3bd;</mml:mi>
</mml:mrow>
<mml:mo stretchy="false">)</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> in (9).</p>
<p>All Matlab routines used in this paper have been included as <xref ref-type="sec" rid="s12">Supplementary Material</xref>. For convenience, Dhamala&#x2019;s most recent implementation (<xref ref-type="bibr" rid="B23">Henderson et al., 2021</xref>) was also included and essentially leads to the same results we report next.</p>
</sec>
</sec>
<sec id="s4">
<title>4 Numerical Illustrations</title>
<p>In the following illustrations, the data comprise <italic>n</italic>
<sub>
<italic>s</italic>
</sub> &#x3d; 16,384 observed points to minimize misinterpretation due to short time series effects. In all cases, the theoretical models can be used to compute the theoretical total PDC as in (<xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>). In each case, the mean-squared frequency-domain approximation error of each estimation method was computed and is presented in <xref ref-type="table" rid="T1">Table 1</xref> after averaging over <italic>R</italic> &#x3d; 100 realizations. Here Wilson estimates employed 256-point-long data tapers.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Mean squared error of the fits to the theoretical tPDC according to estimation method for each example. Missing values indicate that the corresponding estimation approach was not used.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">Example</th>
<th align="center">
<italic>n</italic>
<sub>
<italic>s</italic>
</sub>
</th>
<th align="center">VMA</th>
<th align="center">VAR</th>
<th align="center">VARMA</th>
<th align="center">WN</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="3" align="left">1</td>
<td align="center">16,384</td>
<td align="center">1.27 &#xd7; 10<sup>&#x2212;5</sup>
</td>
<td align="center">6.84 &#xd7; 10<sup>&#x2212;6</sup>
</td>
<td align="left"/>
<td align="center">0.15 &#xd7; 10<sup>&#x2013;2</sup>
</td>
</tr>
<tr>
<td align="center">4,096</td>
<td align="center">5.76 &#xd7; 10<sup>&#x2212;5</sup>
</td>
<td align="center">3.06 &#xd7; 10<sup>&#x2212;5</sup>
</td>
<td align="left"/>
<td align="center">0.61 &#xd7; 10<sup>&#x2212;2</sup>
</td>
</tr>
<tr>
<td align="center">1,024</td>
<td align="center">2.45 &#xd7; 10<sup>&#x2212;4</sup>
</td>
<td align="center">1.57 &#xd7; 10<sup>&#x2212;5</sup>
</td>
<td align="left"/>
<td align="center">3.26 &#xd7; 10<sup>&#x2212;2</sup>
</td>
</tr>
<tr>
<td rowspan="3" align="left">2</td>
<td align="center">16,384</td>
<td align="center">0.21 &#xd7; 10<sup>&#x2212;2</sup>
</td>
<td align="center">1.72 &#xd7; 10<sup>&#x2212;4</sup>
</td>
<td align="center">2.96 &#xd7; 10<sup>&#x2212;8</sup>
</td>
<td align="center">0.67 &#xd7; 10<sup>&#x2212;2</sup>
</td>
</tr>
<tr>
<td align="center">4,096</td>
<td align="center">0.84 &#xd7; 10<sup>&#x2212;2</sup>
</td>
<td align="center">7.09 &#xd7; 10<sup>&#x2212;4</sup>
</td>
<td align="center">5.02 &#xd7; 10<sup>&#x2212;7</sup>
</td>
<td align="center">2.77 &#xd7; 10<sup>&#x2212;2</sup>
</td>
</tr>
<tr>
<td align="center">1,024</td>
<td align="center">3.10 &#xd7; 10<sup>&#x2212;2</sup>
</td>
<td align="center">2.60 &#xd7; 10<sup>&#x2212;3</sup>
</td>
<td align="center">6.65 &#xd7; 10<sup>&#x2212;6</sup>
</td>
<td align="center">12.36 &#xd7; 10<sup>&#x2212;2</sup>
</td>
</tr>
<tr>
<td rowspan="3" align="left">3</td>
<td align="center">16,384</td>
<td align="center">6.11 &#xd7; 10<sup>&#x2212;4</sup>
</td>
<td align="center">3.20 &#xd7; 10<sup>&#x2212;5</sup>
</td>
<td align="left"/>
<td align="center">0.13 &#xd7; 10<sup>&#x2212;2</sup>
</td>
</tr>
<tr>
<td align="center">4,096</td>
<td align="center">0.20 &#xd7; 10<sup>&#x2212;2</sup>
</td>
<td align="center">1.37 &#xd7; 10<sup>&#x2212;4</sup>
</td>
<td align="left"/>
<td align="center">0.57 &#xd7; 10<sup>&#x2212;2</sup>
</td>
</tr>
<tr>
<td align="center">1,024</td>
<td align="center">0.90 &#xd7; 10<sup>&#x2212;2</sup>
</td>
<td align="center">5.30 &#xd7; 10<sup>&#x2212;4</sup>
</td>
<td align="left"/>
<td align="center">3.50 &#xd7; 10<sup>&#x2212;2</sup>
</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Next we present three examples whose accompanying graphs show the real and imaginary parts of tPDC plotted against the background of the theoretically expected results. These examples share the property of being generated by minimum phase models of the form (1).</p>
<p>Finally, a fourth example, generated by a nonminimum phase model of the form (1), is examined. Its numerical results are contrasted with the theoretical tPDC computed with the help of the actual generating model parameters.</p>
<p>
<statement content-type="example" id="example_1">
<label>Example 1</label>
<p>&#x7c; Vector Moving Average Model (VMA)</p>
<p>We start with what is conceivably the simplest possible vector moving average example, featuring unidirectional influence and the clear presence of iGC, described by<disp-formula id="equ1">
<mml:math id="m36">
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>with innovations noise covariance<disp-formula id="e19">
<mml:math id="m37">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="bold">&#x3a3;</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mi mathvariant="bold">w</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>5</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:math>
<label>(19)</label>
</disp-formula>where the influence of <italic>x</italic>
<sub>2</sub>(<italic>n</italic>) onto <italic>x</italic>
<sub>1</sub>(<italic>n</italic>) is clear from the latter&#x2019;s lagged dependence on <italic>w</italic>
<sub>2</sub>(<italic>n</italic>), which is the sole input that determines <italic>x</italic>
<sub>2</sub>(<italic>n</italic>). The presence of iGC is clear from the nondiagonal nature of (19).</p>
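This model is simple enough to simulate directly; the sketch below (arbitrary seed) reproduces the two generating equations and the innovations covariance of Eq. 19:

```python
import numpy as np

# Simulating Example 1: x1(n) = w1(n) + w2(n-1), x2(n) = w2(n) + w2(n-1),
# with the correlated innovations covariance of Eq. 19 (seed arbitrary).
rng = np.random.default_rng(1)
ns = 16384
Sigma_w = np.array([[1.0, 1.0], [1.0, 5.0]])
w = rng.multivariate_normal(np.zeros(2), Sigma_w, size=ns + 1)
x = np.empty((ns, 2))
x[:, 0] = w[1:, 0] + w[:-1, 1]  # lagged w2 drives x1: GC from 2 to 1
x[:, 1] = w[1:, 1] + w[:-1, 1]
```

As a sanity check, the theoretical variances are var(<italic>x</italic><sub>1</sub>) = 1 + 5 = 6 and var(<italic>x</italic><sub>2</sub>) = 5 + 5 = 10, while the instantaneous cross-covariance 1 + 5 = 6 reflects the iGC introduced by Eq. 19.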
<p>From <xref ref-type="fig" rid="F1">Figure 1</xref>, it is clear that for large <italic>n</italic><sub><italic>s</italic></sub>, all estimates of total PDC agree with the theoretically expected one within the constraints imposed by the nature of each estimator. A case in point is Wilson&#x2019;s factorized version computed from the nonparametric power spectral estimates, which is rippled as expected (red lines), following what happens with the original spectral estimates.</p>
</statement>
</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Superimposed graphs of <bold>total partial directed coherence, tPDC</bold>, estimates for the VMA model simulated for <italic>n</italic>
<sub>
<italic>s</italic>
</sub> &#x3d; 16,384 data points (<xref ref-type="other" rid="example_1">Example 1</xref>) and three estimation methods (VAR, VMA, WN), where the <bold>real</bold> <bold>(</bold>
<sans-serif>
<bold>A</bold>
</sans-serif>
<bold>)</bold>, and the <bold>imaginary</bold> <bold>(</bold>
<sans-serif>
<bold>B</bold>
</sans-serif>
<bold>)</bold> components are plotted separately. The theoretical tPDCs are depicted as blue lines. WN estimates ripple around the theoretical values (topmost red lines), yet closely track them. The VAR and VMA estimation results&#x2014;plotted as the two bottommost <bold>black lines</bold>&#x2014;are visually indistinguishable from the theoretical values (blue lines).</p>
</caption>
<graphic xlink:href="fnetp-02-845327-g001.tif"/>
</fig>
<p>
<statement content-type="example" id="example_2">
<label>Example 2</label>
<p>&#x7c; Vector Autoregressive Moving Average Model (VARMA)</p>
<p>The next example is a bit more elaborate. It has a VARMA (2, 2) data generating procedure described by<disp-formula id="equ2">
<mml:math id="m38">
<mml:mfenced open="{" close="">
<mml:mrow>
<mml:mtable class="aligned">
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mspace width="0.28em"/>
<mml:mi>r</mml:mi>
<mml:mspace width="0.28em"/>
<mml:mi mathvariant="italic">cos</mml:mi>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mi>r</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:mi>b</mml:mi>
<mml:mspace width="0.28em"/>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>a</mml:mi>
<mml:mspace width="0.28em"/>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="right">
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
<mml:mtd columnalign="left">
<mml:mo>&#x3d;</mml:mo>
<mml:mi>c</mml:mi>
<mml:mspace width="0.28em"/>
<mml:msub>
<mml:mrow>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:mfenced>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mrow>
<mml:mi>w</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mfenced open="(" close=")">
<mml:mrow>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:mfenced>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
</mml:math>
</disp-formula>where <italic>r</italic> &#x3d; 0.95, <italic>&#x3b8;</italic> &#x3d; <italic>&#x3c0;</italic>/3, <italic>b</italic> &#x3d; 0.5, <italic>a</italic> &#x3d; &#x2212;0.5, <italic>c</italic> &#x3d; 0.7 and <bold>&#x3a3;</bold>
<sub>
<bold>w</bold>
</sub> equal to the identity matrix.</p>
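For concreteness, the VARMA (2, 2) generating equations above can be simulated directly. The following is a minimal Python sketch, not the authors' code; channel indices are zero-based, and the series length and random seed are arbitrary illustrative choices.

```python
import numpy as np

# Minimal simulation sketch of the VARMA(2,2) system above, with
# r = 0.95, theta = pi/3, b = 0.5, a = -0.5, c = 0.7 and Sigma_w = I.
rng = np.random.default_rng(0)
r, theta, b, a, c = 0.95, np.pi / 3, 0.5, -0.5, 0.7
ns = 2000                              # illustrative series length
w = rng.standard_normal((ns, 3))       # innovations w_1, w_2, w_3 (identity covariance)
x = np.zeros((ns, 3))                  # channels x_1, x_2, x_3
for n in range(2, ns):
    x[n, 0] = (2 * r * np.cos(theta) * x[n - 1, 0] - r**2 * x[n - 2, 0]
               + w[n, 0] + w[n, 2] + w[n - 1, 2])
    x[n, 1] = b * x[n - 1, 0] + a * x[n - 1, 1] + w[n, 1]
    x[n, 2] = c * x[n - 1, 2] + w[n, 1] + w[n - 2, 1] + w[n, 2]
```

The AR poles of the first channel sit at <italic>re</italic><sup>&#xb1;<italic>j&#x3b8;</italic></sup> with magnitude 0.95, so the simulated series is stationary; competing VAR, VMA, and VARMA fits to such data underlie the comparison reported here.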
<p>As in the previous example, total PDC estimates match one another regardless of method, see <xref ref-type="fig" rid="F2">Figure 2</xref>.</p>
<p>Though hardly surprising, it is important to realize that the VARMA modelling scheme (<xref ref-type="sec" rid="s3-2">Section 3.2</xref>) yields a substantially better fit. This is confirmed by the results in <xref ref-type="table" rid="T1">Table 1</xref>.</p>
</statement>
</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>
<bold>tPDC</bold> estimates by all four methods&#x2014;VAR, VMA, VARMA, and WN&#x2014;for the VARMA model in <xref ref-type="other" rid="example_2">Example 2</xref> simulated for <italic>n</italic>
<sub>
<italic>s</italic>
</sub> &#x3d; 16,384 data points are depicted, with <bold>real</bold> <bold>(</bold>
<sans-serif>
<bold>A</bold>
</sans-serif>
<bold>)</bold>, and <bold>imaginary</bold> <bold>(</bold>
<sans-serif>
<bold>B</bold>
</sans-serif>
<bold>)</bold> components plotted separately. As before, the theoretical tPDCs are also shown (blue lines). Again note that WN estimates (topmost red lines) ripple around the theoretical values. In this case, VMA estimates (purple lines) also ripple around the theoretical values (blue lines), illustrating estimator accuracy limitations. This is also apparent in <xref ref-type="table" rid="T1">Table 1</xref>. VAR and VARMA results&#x2014;plotted as the two bottommost black lines just underneath the theoretical values&#x2014;represent much closer approximations.</p>
</caption>
<graphic xlink:href="fnetp-02-845327-g002.tif"/>
</fig>
<p>
<statement content-type="example" id="example_3">
<label>Example 3</label>
<p>&#x7c; Vector Autoregressive Model (VAR)</p>
<p>The third toy example is the one used in (<xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>), borrowed from (<xref ref-type="bibr" rid="B19">Faes, 2014</xref>); it involves three channels whose connectivity is assessed <italic>via</italic> a VAR model, taking iGC effects into account through tPDC. One obtains essentially the same results irrespective of the computational approach, see <xref ref-type="fig" rid="F3">Figures 3A, B</xref>.</p>
</statement>
</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>
<bold>tPDC</bold> estimates for <xref ref-type="other" rid="example_3">Example 3</xref> are shown for VAR, VMA, and WN methods (<italic>n</italic>
<sub>
<italic>s</italic>
</sub> &#x3d; 16,384) with <bold>real</bold> <bold>(</bold>
<sans-serif>
<bold>A</bold>
</sans-serif>
<bold>)</bold>, and <bold>imaginary</bold> <bold>(</bold>
<sans-serif>
<bold>B</bold>
</sans-serif>
<bold>)</bold> components plotted separately. As before, theoretical tPDCs are also shown (blue lines). Once again, WN estimates (topmost red lines) ripple around the theoretical values. Here VMA estimates (purple lines) ripple as well, signalling their expectedly poorer accuracy when fitting VAR data. This is confirmed by the results presented in <xref ref-type="table" rid="T1">Table 1</xref>. VAR results are plotted as the two bottommost black lines underneath the theoretical values.</p>
</caption>
<graphic xlink:href="fnetp-02-845327-g003.tif"/>
</fig>
<p>
<statement content-type="example" id="example_4">
<label>Example 4</label>
<p>&#x7c; Nonminimum Phase Data</p>
<p>Consider a moving average data generation scheme using (1) with<disp-formula id="e20">
<mml:math id="m39">
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>0</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="0.28em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>2</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>1</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>0</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
<mml:mspace width="0.28em"/>
<mml:msub>
<mml:mrow>
<mml:mi mathvariant="fraktur">B</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfenced open="[" close="]">
<mml:mrow>
<mml:mtable class="matrix">
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>4</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>2</mml:mn>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd columnalign="center">
<mml:mn>0</mml:mn>
</mml:mtd>
<mml:mtd columnalign="center">
<mml:mn>2</mml:mn>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mfenced>
<mml:mo>,</mml:mo>
</mml:math>
<label>(20)</label>
</disp-formula>whose allied (4) roots <inline-formula id="inf18">
<mml:math id="m40">
<mml:mrow>
<mml:mo stretchy="false">{</mml:mo>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#xb1;</mml:mo>
<mml:mi mathvariant="bold">j</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:mn>3</mml:mn>
</mml:mrow>
</mml:msqrt>
<mml:mo>,</mml:mo>
<mml:mo>&#xb1;</mml:mo>
<mml:mi mathvariant="bold">j</mml:mi>
<mml:msqrt>
<mml:mrow>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
<mml:mo stretchy="false">}</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> all have magnitudes larger than 1, making this a nonminimum phase data generating mechanism, in contrast to all previous examples. It is clear from (20) that <italic>x</italic>
<sub>2</sub>(<italic>n</italic>) Granger-causes <italic>x</italic>
<sub>1</sub>(<italic>n</italic>) but not the other way around. This is reflected in the theoretical tPDC blue lines of <xref ref-type="fig" rid="F4">Figure 4</xref>. Here, (19) was adopted as the innovations covariance matrix.</p>
<p>Use of <xref ref-type="sec" rid="s3">Section 3</xref> algorithms leads to the results of <xref ref-type="fig" rid="F4">Figure 4</xref>, where the estimation methods agree among themselves but differ markedly from the tPDC computed using (20).</p>
<p>The reader may easily verify using the <xref ref-type="sec" rid="s12">Supplementary Material</xref> that the estimated solution using VMA modelling leads to (4) roots whose magnitudes are all smaller than 1.</p>
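The nonminimum phase claim is easy to check numerically. A short sketch, under the assumption (consistent with the roots quoted above) that the relevant polynomial is det(&#x1d505;<sub>0</sub><italic>z</italic><sup>2</sup> + &#x1d505;<sub>1</sub><italic>z</italic> + &#x1d505;<sub>2</sub>), which for the coefficients in (20) is upper triangular and factors as (<italic>z</italic><sup>2</sup> + 2<italic>z</italic> + 4)(<italic>z</italic><sup>2</sup> + 2):

```python
import numpy as np

# det(B0*z^2 + B1*z + B2) for the matrices in (20) factors as
# (z^2 + 2z + 4)(z^2 + 2), since the matrix polynomial is upper triangular.
roots = np.concatenate([np.roots([1, 2, 4]), np.roots([1, 0, 2])])
mags = np.abs(roots)       # |-1 ± j*sqrt(3)| = 2 and |±j*sqrt(2)| = sqrt(2)
assert np.all(mags > 1)    # all roots outside the unit circle: nonminimum phase
```

A minimum phase factorization of the same spectrum necessarily reflects these roots inside the unit circle, which is exactly what the estimated VMA solution exhibits.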
<p>Most importantly, however, this example shows that GC causal relationships imposed through nonminimum phase systems can be wrongly inferred. The consequences of this are further elaborated in the discussion.</p>
</statement>
</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>
<bold>tPDC</bold> estimates for the VARMA model with nonminimum phase data in <xref ref-type="other" rid="example_4">Example 4</xref> using the VAR, VMA, and WN methods (<italic>n</italic>
<sub>
<italic>s</italic>
</sub> &#x3d; 16,384) portraying its <bold>(</bold>
<sans-serif>
<bold>A</bold>
</sans-serif>
<bold>)</bold> <bold>real</bold> and <bold>(</bold>
<sans-serif>
<bold>B</bold>
</sans-serif>
<bold>)</bold> <bold>imaginary</bold> components. As before, theoretical tPDCs are shown as blue lines. Here, WN estimates (topmost red lines) also ripple; they agree with the VMA (<bold>black lines</bold>) and VAR estimates (gray lines), which are all very close to one another but differ significantly from the theoretical values (blue lines). Note that the parameters in (20) imply no connection from <italic>x</italic>
<sub>1</sub>(<italic>n</italic>) onto <italic>x</italic>
<sub>2</sub>(<italic>n</italic>), yet all three estimation methods wrongly indicate a non-zero <bold>tPDC</bold> real component, reflecting strong estimated GC.</p>
</caption>
<graphic xlink:href="fnetp-02-845327-g004.tif"/>
</fig>
</sec>
<sec id="s5">
<title>5 Discussion</title>
<p>It is perhaps surprising that PDC/DTF have for so long, and unnecessarily, remained inextricably associated with VAR modelling, even in view of early evidence to the contrary (<xref ref-type="bibr" rid="B24">Jachan et al., 2009</xref>). A partial explanation may lie in the early, virtually exclusive reliance on VAR modelling that also dominated initial approaches to Granger Causality characterization (<xref ref-type="bibr" rid="B22">Granger, 1969</xref>; <xref ref-type="bibr" rid="B21">Geweke, 1984</xref>). This scenario slowly changed as VMA and VARMA approaches to time domain modelling were shown to be viable and possibly desirable, depending on the nature of the data under study (<xref ref-type="bibr" rid="B15">Boudjellaba et al., 1992</xref>; <xref ref-type="bibr" rid="B14">Boudjellaba et al., 1994</xref>). The latter methods are attractive because they fit the underlying data more parsimoniously, as in <xref ref-type="other" rid="example_2">Example 2</xref>, <italic>via</italic> fewer parameters. This reflects Parzen&#x2019;s Parsimony Principle, which formalizes the statistical advantage of describing data via the least possible number of parameters (<xref ref-type="bibr" rid="B39">Yaffee and McGee, 2000</xref>) and, in the present case, leads to lower average estimation error (see <xref ref-type="table" rid="T1">Table 1</xref>). More details on alternative time domain characterizations can be found in <xref ref-type="bibr" rid="B28">L&#xfc;tkepohl (2005)</xref>.</p>
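The parsimony argument can be made concrete with a back-of-the-envelope parameter count. The sketch below is purely illustrative (the function names, channel count, and model orders are ours, not taken from the paper): a <italic>K</italic>-variate VARMA(<italic>p</italic>, <italic>q</italic>) model has <italic>K</italic><sup>2</sup>(<italic>p</italic> + <italic>q</italic>) matrix coefficients, so a low-order VARMA description can undercut a long VAR description in an Akaike-type criterion even at comparable residual error.

```python
# Illustrative count of matrix coefficients in a K-variate VARMA(p, q)
# and an AIC-style criterion trading fit against parameter count.
def n_params(K, p, q=0):
    return K * K * (p + q)

def aic(log_det_sigma, k, n):
    # fit term plus the 2k/n parsimony penalty
    return log_det_sigma + 2.0 * k / n

# A VARMA(2,2) vs. a long VAR(10) for K = 3 channels:
k_varma, k_var = n_params(3, 2, 2), n_params(3, 10)   # 36 vs. 90 coefficients
# At equal residual log-determinant, the shorter description wins:
assert aic(0.0, k_varma, 16384) < aic(0.0, k_var, 16384)
```

This is the sense in which fewer parameters for a given approximation error translate into lower average estimation error.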
<p>Because of its prediction improvement ethos, Granger Causality, when originally defined, rested on VAR modelling&#x2019;s predictive ability (<xref ref-type="bibr" rid="B22">Granger, 1969</xref>). Moreover, at that time VAR was the only practical alternative from a computational perspective. It is thus unsurprising that other predictive methods, like VMA and VARMA modelling, can also fit the purpose.</p>
<p>Given PDC/tPDC&#x2019;s frequency domain ties with Granger causality (<xref ref-type="bibr" rid="B9">Baccal&#xe1; and Sameshima, 2021a</xref>), including full instantaneous effects (<xref ref-type="bibr" rid="B7">Baccal&#xe1; and Sameshima, 2021b</xref>), it is no wonder that they too can be computed <italic>via</italic> other methods like VMA or VARMA modelling.</p>
<p>Thus we have shown that PDC/DTF (total or otherwise) are <bold>not</bold> irrevocably tied to VAR data modelling, though today VAR remains the best studied and most widely applied option. It has the advantage of rigorous asymptotic results in the squared PDC/DTF case (<xref ref-type="bibr" rid="B5">Baccal&#xe1; et al., 2013</xref>; <xref ref-type="bibr" rid="B10">Baccal&#xe1; et al., 2016</xref>). Work is in progress to provide the asymptotics for the allied total PDC/DTF quantities.</p>
<p>Further research is needed to pinpoint which of the latter methods is best for what purpose. It is comforting to know that many methods provide equivalent descriptions if used properly.</p>
<p>For example, even though it is possible to combine the response of different trials in event-related experiments while employing VAR models (<xref ref-type="bibr" rid="B32">Rodrigues and Baccal&#xe1;, 2015</xref>), this feat may also, and perhaps more easily in some cases, be achieved through the application of Wilson&#x2019;s method to estimate nonparametric spectra and cross-spectra averaged over trials. Other methods have been proposed to deal with spectral matrix factorization that still need proper practical appraisal (<xref ref-type="bibr" rid="B1">Amblard, 2015</xref>).</p>
<p>Though Wilson-type spectral factorization methods seem less effective in practice, this does not mean they should be discarded. Here we used only Welch&#x2019;s spectral estimator. More research employing other spectral estimation procedures, such as multitapering (<xref ref-type="bibr" rid="B31">Percival and Walden, 1993</xref>), is needed, as these could improve accuracy by more appropriately fitting certain spectral shapes.</p>
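As a sketch of the nonparametric route, the first step is a Welch-type estimate of the spectral density matrix S(<italic>f</italic>), which a Wilson-type algorithm would then factorize into minimum phase factors. The channel count, segment length, and white-noise input below are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(1)
x = rng.standard_normal((3, 4096))        # 3 illustrative channels
K, nperseg = x.shape[0], 256
f, _ = csd(x[0], x[0], nperseg=nperseg)   # frequency grid
S = np.empty((len(f), K, K), dtype=complex)
for i in range(K):
    for j in range(K):
        # Welch cross-spectral estimate between channels i and j
        _, S[:, i, j] = csd(x[i], x[j], nperseg=nperseg)
# The estimate is Hermitian at each frequency, as a spectral matrix must be;
# a Wilson-type algorithm would factorize S(f) into minimum phase factors.
assert np.allclose(S, np.conj(np.transpose(S, (0, 2, 1))))
```

The segment length trades bias against the very variance (ripple) visible in the WN curves of the figures; a multitaper estimator would plug into the same loop in place of <monospace>csd</monospace>.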
<p>Here we have employed large data sets, but one should expect substantial performance differences for shorter time series. In that case too, as hinted by the <xref ref-type="table" rid="T1">Table 1</xref> results, VAR methods remain quite efficient, except when a better approximation can be obtained through models that portray the data more closely, as in the VARMA <xref ref-type="other" rid="example_2">Example 2</xref>.</p>
<p>Other approaches have been proposed to obtain Granger-type estimates; state space modelling is one such example (<xref ref-type="bibr" rid="B12">Barnett and Seth, 2015</xref>), and research to evaluate them is ongoing. In fact, as <xref ref-type="bibr" rid="B34">Sayed and Kailath (2001)</xref>&#x2019;s theoretical appraisal of univariate spectral density factorization methods suggests, even state-space models can be seen as spectral factorization providers.</p>
<p>All the above methods, by providing minimum phase spectral factors of the spectral density matrix, ideally portray <bold>identical</bold> Granger relationship representations, within the accuracy and characteristic limitations of the employed spectral estimation/factorization techniques.</p>
<p>At this point, before we examine the nonminimum phase data generation issue, and even setting aside the theoretical realization that GC connectivity reduces to a spectral factorization problem, the practice-oriented reader might be wondering: why so much ado about a VAR &#x2018;myth&#x2019; if, in the end, VAR remains a reasonable practical compromise? To answer this, bear in mind that spectral fitting is a method of approximating whatever the real spectra are. According to Parzen&#x2019;s principle, the best conceivable statistical inference reliability rests on having the smallest number of descriptive parameters for a given approximation error (which one can gauge by the residual covariance matrix). Hence, though at present VMA and VARMA methods are not as mature as VAR methods as far as inference is concerned, they hold the promise of higher inference accuracy in appropriate cases as they are further developed.</p>
<p>Another question that may bother the practice oriented is: why use nonparametric methods if their performance is not as good and they call for much longer data sets to furnish the same level of accuracy? In fact, this may well be behind their infrequent use in the past. First, remember the issue of ease of use, as in the analysis of the event-related cases we mentioned. Remember too that many investigators remain uneasy about parametric methods because these require model order decisions, added to the often glossed over problem of model diagnostic checking (<xref ref-type="bibr" rid="B26">Li, 2003</xref>). Despite their wiggly nature, Wilson nonparametric methods dispense with these decisions and can be helpful in providing hints as to the approximation quality of contending parametric models. They have issues of their own that also merit further examination; these lie in nonparametric spectral estimation shortcomings (<xref ref-type="bibr" rid="B31">Percival and Walden, 1993</xref>) that many applied users often overlook.</p>
<p>In short, having more options in one&#x2019;s analysis toolkit is beneficial, and none of them should be discarded out of hand.</p>
<p>Moving on, there is the important caveat we have shown: due to their intrinsic minimum phase limitation, the methods explored here are unable to properly capture GC-type relations when the underlying data generation mechanism is nonminimum phase, as in <xref ref-type="other" rid="example_4">Example 4</xref>. This happens because these methods, whether through classical time series modelling or direct spectral factorization, employ only second order statistics.</p>
<p>Though we do not show this explicitly here, Geweke-based approaches suffer from the same limitations. This is easy to see once one notes that they lead to conclusions similar to those reached <italic>via</italic> PDC/DTF-type approaches.</p>
<p>This scenario evokes two intertwined questions: 1) whether dynamical (viz. physical, physiological or economic) observations of phenomena actually conform to nonminimum phase generation mechanisms that might obscure their connectivity inference, and 2) whether past conclusions about real data obtained using GC methods actually hold in view of this observation.</p>
<p>As an example, consider a situation where nonminimum phase signals are a practical reality. This happens in wireless communication, owing to signal propagation through dispersive multipath media, which leads to serious bit-error rate impairment. In such man made systems, the problem is circumvented by transmitting pre-arranged pseudo-random (training) data sequences that the receiver uses to estimate channel nonminimality. Use of these sequences maps the receiver &#x201c;output-only&#x201d; problem into an equivalent &#x201c;input-output&#x201d; problem that can reveal nonminimum phase effects through second order statistics alone. This solution is sometimes unsatisfactory as it imposes a penalty on the transmission rate of useful data. During the 1990s a considerable body of literature appeared addressing this problem by dispensing with training sequences and using the received (output) data only (<xref ref-type="bibr" rid="B35">Haykin, 1994</xref>). This is possible when the data are nongaussian, i.e., when there is information beyond the ordinary second order statistics of the spectrum, something that can be arranged by design in telecommunication systems. Signal diversity in both time and space, via telecom signal characteristics or through the employment of redundant receiver antennas, is also an option. This general field has become known as &#x201c;<italic>blind</italic>&#x201d; identification/equalization (see <xref ref-type="bibr" rid="B16">Chi et al., 2006</xref>, for an overview). Whereas real data properties cannot be &#x2018;designed&#x2019; as in man made systems, they are often nongaussian, and this could in principle be exploited to overcome the nonminimum phase generation limitation on GC inference we described here.</p>
<p>The answer to 2) must thus await further analysis; it is a matter for exciting future research that may entail revising many conclusions regarding formerly analysed real data.</p>
</sec>
<sec id="s6">
<title>6 Conclusion</title>
<p>The first take-home lesson is that PDC/DTF-type estimators of Granger connectivity/influentiability (<xref ref-type="bibr" rid="B2">Baccal&#xe1; and Sameshima, 2014a</xref>), even in their latest and most general total form (tPDC/tDTF) incorporating instantaneous Granger effects, do not require vector autoregressive modelling as a mandatory step, but can be obtained through any other means of factorizing the spectral density matrix into minimum phase factors. The second lesson is that, though not mandatory, VAR modelling remains the method of choice, especially for short data sets, since it yields consistent spectral factors and is both practical and efficient. The third, no less important, lesson is that care must be exercised in drawing conclusions about real data, as possibly unknown nonminimum phase data generating mechanisms may be at play that can confound results as to the true underlying connectivity whenever methods of the present spectral factorization class are used.</p>
</sec>
</body>
<back>
<sec id="s7">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s8">
<title>Author Contributions</title>
<p>Both authors have equally shared in the conception, writing, and editing of the paper.</p>
</sec>
<sec id="s9">
<title>Funding</title>
<p>LB was funded by CNPq, Grant number 308073/2017-7. LB and KS were partially supported by FAPESP Grant 2017/12943-8.</p>
</sec>
<sec sec-type="COI-statement" id="s10">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s11">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ack>
<p>KS is affiliated with LIM 43&#x2013;HCFMUSP. Both are attached to the Center for Interdisciplinary Research on Applied Neurosciences (NAPNA), Universidade de S&#xe3;o Paulo, S&#xe3;o Paulo, Brazil.</p>
</ack>
<sec id="s12">
<title>Supplementary Material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/fnetp.2022.845327/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/fnetp.2022.845327/full&#x23;supplementary-material</ext-link>
</p>
<supplementary-material xlink:href="DataSheet1.PDF" id="SM1" mimetype="application/PDF" xmlns:xlink="http://www.w3.org/1999/xlink"/>
<supplementary-material xlink:href="DataSheet2.zip" id="SM2" mimetype="application/zip" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Amblard</surname>
<given-names>P.-O.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>A Nonparametric Efficient Evaluation of Partial Directed Coherence</article-title>. <source>Biol. Cybern.</source> <volume>109</volume>, <fpage>203</fpage>&#x2013;<lpage>214</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-014-0636-0</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2014a</year>). in <source>Causality and Influentiability: The Need for Distinct Neural Connectivity Concepts</source>. Editors <person-group person-group-type="editor">
<name>
<surname>&#x15a;l&#x229;zak</surname>
<given-names>D</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>AH</given-names>
</name>
<name>
<surname>Peters</surname>
<given-names>JF</given-names>
</name>
<name>
<surname>Schwabe</surname>
<given-names>L</given-names>
</name>
</person-group> (<publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>), <fpage>424</fpage>&#x2013;<lpage>435</lpage>. <comment>Brain Informatics and Health</comment>. <pub-id pub-id-type="doi">10.1007/978-3-319-09891-3_39</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2001a</year>). <article-title>Overcoming the Limitations of Correlation Analysis for many Simultaneously Processed Neural Structures</article-title>. <source>Prog. Brain Res.</source> <volume>130</volume>, <fpage>33</fpage>&#x2013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1016/s0079-6123(01)30004-3</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Takahashi</surname>
<given-names>D. Y.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2007</year>). &#x201c;<article-title>Generalized Partial Directed Coherence</article-title>,&#x201d; in <conf-name>2007 15th International Conference on Digital Signal Processing</conf-name>, <conf-loc>Cardiff, UK</conf-loc>, <conf-date>1-4 July 2007</conf-date>, <fpage>163</fpage>&#x2013;<lpage>166</lpage>. <pub-id pub-id-type="doi">10.1109/ICDSP.2007.4288544</pub-id> </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>De Brito</surname>
<given-names>C. S. N.</given-names>
</name>
<name>
<surname>Takahashi</surname>
<given-names>D. Y.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Unified Asymptotic Theory for All Partial Directed Coherence Forms</article-title>. <source>Phil. Trans. R. Soc. A.</source> <volume>371</volume>, <fpage>20120158</fpage>. <pub-id pub-id-type="doi">10.1098/rsta.2012.0158</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Ballester</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Do Valle</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Timo-Iaria</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>1998</year>). <article-title>Studying the Interaction between Brain Structures via Directed Coherence and Granger Causality</article-title>. <source>Appl. Sig. Process.</source> <volume>5</volume>, <fpage>40</fpage>&#x2013;<lpage>48</lpage>. </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2021b</year>). <article-title>Frequency Domain Repercussions of Instantaneous Granger Causality</article-title>. <source>Entropy</source> <volume>23</volume>, <fpage>1037</fpage>. <pub-id pub-id-type="doi">10.3390/e23081037</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2001b</year>). <article-title>Partial Directed Coherence: a New Concept in Neural Structure Determination</article-title>. <source>Biol. Cybern.</source> <volume>84</volume>, <fpage>463</fpage>&#x2013;<lpage>474</lpage>. <pub-id pub-id-type="doi">10.1007/PL00007990</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2021a</year>). <article-title>Partial Directed Coherence: Twenty Years on Some History and an Appraisal</article-title>. <source>Biol. Cybern.</source> <volume>115</volume>, <fpage>195</fpage>&#x2013;<lpage>204</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-021-00880-y</pub-id> </citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Takahashi</surname>
<given-names>D. Y.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Directed Transfer Function: Unified Asymptotic Theory and Some of its Implications</article-title>. <source>IEEE Trans. Biomed. Eng.</source> <volume>63</volume>, <fpage>2450</fpage>&#x2013;<lpage>2460</lpage>. <pub-id pub-id-type="doi">10.1109/TBME.2016.2550199</pub-id> </citation>
</ref>
<ref id="B11">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2014b</year>). &#x201c;<article-title>Partial Directed Coherence</article-title>,&#x201d; in <source>Methods in Brain Connectivity Inference through Multivariate Time Series Analysis</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
</person-group> (<publisher-loc>Boca Raton</publisher-loc>: <publisher-name>CRC</publisher-name>), <fpage>57</fpage>&#x2013;<lpage>73</lpage>. <comment>Frontiers in Neuroengineering Series</comment>. <pub-id pub-id-type="doi">10.1201/b16550-11</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barnett</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Seth</surname>
<given-names>A. K.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Granger Causality for State-Space Models</article-title>. <source>Phys. Rev. E Stat. Nonlin Soft Matter Phys.</source> <volume>91</volume>, <fpage>040101</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevE.91.040101</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Barrett</surname>
<given-names>A. B.</given-names>
</name>
<name>
<surname>Barnett</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Seth</surname>
<given-names>A. K.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Multivariate Granger Causality and Generalized Variance</article-title>. <source>Phys. Rev. E Stat. Nonlin Soft Matter Phys.</source> <volume>81</volume>, <fpage>041907</fpage>&#x2013;<lpage>041914</lpage>. <pub-id pub-id-type="doi">10.1103/PhysRevE.81.041907</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boudjellaba</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Dufour</surname>
<given-names>J.-M.</given-names>
</name>
<name>
<surname>Roy</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>1994</year>). <article-title>Simplified Conditions for Noncausality between Vectors in Multivariate ARMA Models</article-title>. <source>J. Econom.</source> <volume>63</volume>, <fpage>271</fpage>&#x2013;<lpage>287</lpage>. <pub-id pub-id-type="doi">10.1016/0304-4076(93)01568-7</pub-id> </citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Boudjellaba</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Dufour</surname>
<given-names>J.-M.</given-names>
</name>
<name>
<surname>Roy</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>1992</year>). <article-title>Testing Causality between Two Vectors in Multivariate Autoregressive Moving Average Models</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>87</volume>, <fpage>1082</fpage>&#x2013;<lpage>1090</lpage>. <pub-id pub-id-type="doi">10.1080/01621459.1992.10476263</pub-id> </citation>
</ref>
<ref id="B16">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Chi</surname>
<given-names>C. Y.</given-names>
</name>
<name>
<surname>Feng</surname>
<given-names>C. C.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>C. H.</given-names>
</name>
</person-group> (<year>2006</year>). <source>Blind Equalization and System Identification: Batch Processing Algorithms, Performance and Applications</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Springer</publisher-name>. </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cohen</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Sasai</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tsuchiya</surname>
<given-names>N.</given-names>
</name>
<name>
<surname>Oizumi</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>A General Spectral Decomposition of Causal Influences Applied to Integrated Information</article-title>. <source>J. Neurosci. Methods</source> <volume>330</volume>, <fpage>108443</fpage>. <pub-id pub-id-type="doi">10.1016/j.jneumeth.2019.108443</pub-id> </citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dhamala</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Rangarajan</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Ding</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Estimating Granger Causality from Fourier and Wavelet Transforms of Time Series Data</article-title>. <source>Phys. Rev. Lett.</source> <volume>100</volume>, <fpage>018701</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.100.018701</pub-id> </citation>
</ref>
<ref id="B19">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Faes</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>Assessing Connectivity in the Presence of Instantaneous Causality</article-title>,&#x201d; in <source>Methods in Brain Connectivity Inference through Multivariate Time Series Analysis</source>. Editors <person-group person-group-type="editor">
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
</person-group> (<publisher-loc>Boca Raton</publisher-loc>: <publisher-name>CRC Press</publisher-name>), <fpage>87</fpage>&#x2013;<lpage>112</lpage>. <pub-id pub-id-type="doi">10.1201/b16550-13</pub-id> </citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Faes</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Nollo</surname>
<given-names>G.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Extended Causal Modeling to Assess Partial Directed Coherence in Multiple Time Series with Significant Instantaneous Interactions</article-title>. <source>Biol. Cybern.</source> <volume>103</volume>, <fpage>387</fpage>&#x2013;<lpage>400</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-010-0406-6</pub-id> </citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Geweke</surname>
<given-names>J. F.</given-names>
</name>
</person-group> (<year>1984</year>). <article-title>Measures of Conditional Linear Dependence and Feedback between Time Series</article-title>. <source>J. Am. Stat. Assoc.</source> <volume>79</volume>, <fpage>907</fpage>&#x2013;<lpage>915</lpage>. <pub-id pub-id-type="doi">10.1080/01621459.1984.10477110</pub-id> </citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Granger</surname>
<given-names>C. W. J.</given-names>
</name>
</person-group> (<year>1969</year>). <article-title>Investigating Causal Relations by Econometric Models and Cross-Spectral Methods</article-title>. <source>Econometrica</source> <volume>37</volume>, <fpage>424</fpage>&#x2013;<lpage>438</lpage>. <pub-id pub-id-type="doi">10.2307/1912791</pub-id> </citation>
</ref>
<ref id="B35">
<citation citation-type="book">
<person-group person-group-type="editor">
<name>
<surname>Haykin</surname>
<given-names>S.</given-names>
</name>
</person-group> (Editor) (<year>1994</year>). <source>Blind Deconvolution</source> (<publisher-loc>New Jersey</publisher-loc>: <publisher-name>PTR Prentice Hall</publisher-name>). </citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Henderson</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Dhamala</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Robinson</surname>
<given-names>P. A.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Brain Dynamics and Structure-Function Relationships via Spectral Factorization and the Transfer Function</article-title>. <source>NeuroImage</source> <volume>235</volume>, <fpage>117989</fpage>. <pub-id pub-id-type="doi">10.1016/j.neuroimage.2021.117989</pub-id> </citation>
</ref>
<ref id="B24">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jachan</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Henschel</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Nawrath</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Schad</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Timmer</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Schelter</surname>
<given-names>B.</given-names>
</name>
</person-group> (<year>2009</year>). <article-title>Inferring Direct Directed-Information Flow from Multivariate Nonlinear Time Series</article-title>. <source>Phys. Rev. E</source> <volume>80</volume>, <fpage>011138</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevE.80.011138</pub-id> </citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kaminski</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Blinowska</surname>
<given-names>K. J.</given-names>
</name>
</person-group> (<year>1991</year>). <article-title>A New Method of the Description of the Information Flow in the Brain Structures</article-title>. <source>Biol. Cybern.</source> <volume>65</volume>, <fpage>203</fpage>&#x2013;<lpage>210</lpage>. <pub-id pub-id-type="doi">10.1007/BF00198091</pub-id> </citation>
</ref>
<ref id="B26">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>W. K.</given-names>
</name>
</person-group> (<year>2003</year>). <source>Diagnostic Checks in Time Series</source>. <publisher-loc>Boca Raton</publisher-loc>: <publisher-name>Taylor &#x26; Francis</publisher-name>. </citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lima</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Dellajustina</surname>
<given-names>F. J.</given-names>
</name>
<name>
<surname>Shimoura</surname>
<given-names>R. O.</given-names>
</name>
<name>
<surname>Girardi-Schappo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Kamiji</surname>
<given-names>N. L.</given-names>
</name>
<name>
<surname>Pena</surname>
<given-names>R. F. O.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Granger Causality in the Frequency Domain: Derivation and Applications</article-title>. <source>Rev. Bras. Ensino F&#xed;s.</source> <volume>42</volume>, <fpage>e20200007</fpage>. <pub-id pub-id-type="doi">10.1590/1806-9126-rbef-2020-0007</pub-id> </citation>
</ref>
<ref id="B28">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>L&#xfc;tkepohl</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>2005</year>). <source>New Introduction to Multiple Time Series Analysis</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Springer</publisher-name>. </citation>
</ref>
<ref id="B29">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Marple</surname>
<given-names>S. L.</given-names>
<suffix>Jr</suffix>
</name>
</person-group> (<year>1987</year>). <source>Digital Spectral Analysis with Applications</source>. <publisher-loc>New Jersey</publisher-loc>: <publisher-name>Prentice-Hall</publisher-name>. </citation>
</ref>
<ref id="B30">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Nuzzi</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Stramaglia</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Javorka</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Marinazzo</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Porta</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Faes</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Extending the Spectral Decomposition of Granger Causality to Include Instantaneous Influences: Application to the Control Mechanisms of Heart Rate Variability</article-title>. <source>Phil. Trans. R. Soc. A.</source> <volume>379</volume>, <fpage>20200263</fpage>. <pub-id pub-id-type="doi">10.1098/rsta.2020.0263</pub-id> </citation>
</ref>
<ref id="B31">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Percival</surname>
<given-names>D. B.</given-names>
</name>
<name>
<surname>Walden</surname>
<given-names>A. T.</given-names>
</name>
</person-group> (<year>1993</year>). <source>Spectral Analysis for Physical Applications</source>. <publisher-loc>New York</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>. </citation>
</ref>
<ref id="B32">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Rodrigues</surname>
<given-names>P. L.</given-names>
</name>
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>A New Algorithm for Neural Connectivity Estimation of EEG Event Related Potentials</article-title>,&#x201d; in <source>Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE</source> (<publisher-name>IEEE</publisher-name>), <fpage>3787</fpage>&#x2013;<lpage>3790</lpage>. <pub-id pub-id-type="doi">10.1109/embc.2015.7319218</pub-id> </citation>
</ref>
<ref id="B33">
<citation citation-type="web">
<comment>[Dataset]</comment> <person-group person-group-type="author">
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>Asymp_PDC Package</article-title>. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://www.lcs.poli.usp.br/%7Ebaccala/pdc/CRCBrainConnectivity/AsympPDC/index.html">https://www.lcs.poli.usp.br/&#x223c;baccala/pdc/CRCBrainConnectivity/AsympPDC/index.html</ext-link> (Accessed December 19, 2021)</comment>. </citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sayed</surname>
<given-names>A. H.</given-names>
</name>
<name>
<surname>Kailath</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2001</year>). <article-title>A Survey of Spectral Factorization Methods</article-title>. <source>Numer. Linear Algebra Appl.</source> <volume>8</volume>, <fpage>467</fpage>&#x2013;<lpage>496</lpage>. <pub-id pub-id-type="doi">10.1002/nla.250</pub-id> </citation>
</ref>
<ref id="B36">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Stoica</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Moses</surname>
<given-names>R. L.</given-names>
</name>
</person-group> (<year>2005</year>). <source>Spectral Analysis of Signals</source>. <publisher-loc>Upper Saddle River</publisher-loc>: <publisher-name>Pearson/Prentice Hall</publisher-name>. </citation>
</ref>
<ref id="B37">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Takahashi</surname>
<given-names>D. Y.</given-names>
</name>
<name>
<surname>Baccal&#xe1;</surname>
<given-names>L. A.</given-names>
</name>
<name>
<surname>Sameshima</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Information Theoretic Interpretation of Frequency Domain Connectivity Measures</article-title>. <source>Biol. Cybern.</source> <volume>103</volume>, <fpage>463</fpage>&#x2013;<lpage>469</lpage>. <pub-id pub-id-type="doi">10.1007/s00422-010-0410-x</pub-id> </citation>
</ref>
<ref id="B38">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wilson</surname>
<given-names>G. T.</given-names>
</name>
</person-group> (<year>1972</year>). <article-title>The Factorization of Matricial Spectral Densities</article-title>. <source>SIAM J. Appl. Math.</source> <volume>23</volume>, <fpage>420</fpage>&#x2013;<lpage>426</lpage>. <pub-id pub-id-type="doi">10.1137/0123044</pub-id> </citation>
</ref>
<ref id="B39">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Yaffee</surname>
<given-names>R. A.</given-names>
</name>
<name>
<surname>McGee</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2000</year>). <source>An Introduction to Time Series Analysis and Forecasting: With Applications of SAS and SPSS</source>. <edition>1st edn</edition>. <publisher-loc>New York</publisher-loc>: <publisher-name>Academic Press</publisher-name>. </citation>
</ref>
</ref-list>
</back>
</article>