Electrocardiographic Imaging for Atrial Fibrillation: A Perspective From Computer Models and Animal Experiments to Clinical Value

Electrocardiographic imaging (ECGI) is a technique to reconstruct non-invasively the electrical activity on the heart surface from body-surface potential recordings and geometric information of the torso and the heart. ECGI has shown scientific and clinical value when used to characterize and treat both atrial and ventricular arrhythmias. Regarding atrial fibrillation (AF), the characterization of the electrical propagation and the underlying substrate favoring AF is inherently more challenging than for ventricular arrhythmias, due to the progressive and heterogeneous nature of the disease and its manifestation, the small volume and wall thickness of the atria, and the relatively large role of microstructural abnormalities in AF. At the same time, ECGI has the advantage over other mapping technologies of allowing a global, non-invasive characterization of atrial electrical activity at every atrial beat. However, since ECGI is time-consuming and costly, and the use of electrical mapping to guide AF ablation is still not fully established, the clinical value of ECGI for AF is still under assessment. Nonetheless, AF is known to be the manifestation of a complex interaction between electrical and structural abnormalities, and therefore true electro-anatomical-structural imaging may elucidate key factors of AF development, progression, and treatment. It is therefore paramount to identify which clinical questions could be successfully addressed by ECGI when it comes to AF characterization and treatment, and which questions may be beyond its technical limitations. In this manuscript we review the questions that researchers have tried to address regarding the use of ECGI for AF characterization and treatment guidance (for example, localization of AF triggers and sustaining mechanisms), and we discuss the technological requirements and validation.
We address experimental and clinical results, limitations, and future challenges for the fruitful application of ECGI to AF understanding and management. We pay attention to existing techniques and clinical applications, to computer models and (animal or human) experiments, and to the challenges of methodological and clinical validation. The overall objective of this study is to provide a consensus on valuable directions that ECGI research may take to achieve future improvements in AF characterization and treatment guidance.


Physics description of ECGI
Electrocardiographic imaging (ECGI) is a medical tool that allows estimating the electrical activity on the heart surface from non-invasive body-surface signals combined with 3D geometries of the torso and heart (Rudy, 2013). This technology has been widely used in the study of several heart diseases, as it allows analyzing the electrical activity of the heart in a way comparable to invasive techniques (catheterization), but with greater coverage and improved versatility, since it allows the study and planning of surgical therapy before the procedure (Rudy, 2013).
The physics of ECGI is governed by Maxwell's equations which, under the quasi-static approximation valid at cardiac frequencies, reduce to Laplace's equation for the electric potential within the source-free torso volume conductor:

∇²Φ = 0,  (1)

where ∇² represents the Laplacian operator and Φ is the electric potential; the electric field E = −∇Φ (mV/m) generated by the excitation of the heart propagates to the surface of the body through a volume with passive electrical conduction (Rudy and Burnes, 1999; Rudy, 2013). A consequence of Eq. (1) (the mean-value property of harmonic functions) is that the average value of the electric potential around a point P equals the value of the potential at point P. In the ECGI method, the potentials on the heart (Φ_E) are related to the electrical potentials on the torso (Φ_T) by solving the Laplace equation within the torso volume conductor, using the geometric relationship between the 3D surfaces of the heart and torso (Rudy and Burnes, 1999; Rudy, 2013). The first step in this procedure is the discretization of the 3D surfaces of the heart and torso into triangular elements, which allows the relationship between Φ_T and Φ_E to be expressed through the transfer matrix A as follows:

Φ_T = A Φ_E.  (2)

Discretization is achieved through the Finite Difference, Finite Element, or Boundary Element Methods (FDM, FEM, and BEM, respectively), resulting in a linear relationship matrix (the transfer matrix A) that contains the geometric information and electrophysiological properties of the volume conductor relating the two surfaces (atrium and torso) (Rudy and Burnes, 1999; Rudy, 2013). The direct solution of Eq. (2) for calculating Φ_T is a well-posed mathematical problem in the sense of Hadamard (that is, the solution exists, and is unique and stable) (Rudy and Burnes, 1999; Kabanikhin, 2008; Rudy, 2013). This means that, given a geometric relationship between the heart and the torso and a set of potentials on the epicardium, the potentials on the surface of the torso can be calculated with high accuracy (Rudy and Burnes, 1999; Rudy, 2013).
This does not hold for the inverse problem, because A is typically rectangular (and hence not invertible) or, if square, ill-conditioned or degenerate. Moreover, assuming A to be a constant matrix simplifies reality, since, for example, there is an interaction between the heart and torso geometries (movement and contraction of the heart) and the torso geometry is dynamic due to respiration. A further complicating factor is that we can only estimate A.
The objective of ECGI is to obtain the epicardial potentials (Φ_E) from the potentials measured on the torso (Φ_T) (Bertero and Boccacci, 1998). As argued, this inverse problem is ill-posed, leading to stability issues: small perturbations either in the measurement of the torso signals (i.e., noise and/or position of the electrodes) or in the 3D geometries of the torso and heart can result in large errors in the calculation of Φ_E (Colli-Franzone et al., 1985; Rudy and Burnes, 1999; Kabanikhin, 2008; Pullan et al., 2011; Rudy, 2013). Moreover, the solution for Φ_E in Eq. (2) may not exist for all elements of Φ_T, or there might be many solutions.
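The practical consequence of this ill-posedness can be sketched numerically. The matrix below is a purely illustrative stand-in for a transfer matrix (a smooth Gaussian kernel, not a real BEM-derived operator); like real torso transfer matrices, it strongly smooths the cardiac potentials and is therefore severely ill-conditioned, so naive inversion amplifies even tiny measurement noise into large errors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
x = np.linspace(0.0, 1.0, n)

# Hypothetical stand-in for the transfer matrix: a smooth Gaussian
# kernel mapping 40 "heart" nodes to 40 "torso" nodes. Smoothing
# operators of this kind are severely ill-conditioned.
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)

phi_e_true = np.sin(2 * np.pi * x)                      # "true" heart potentials
phi_t = A @ phi_e_true + 1e-3 * rng.standard_normal(n)  # torso potentials + tiny noise

phi_e_naive = np.linalg.solve(A, phi_t)                 # unregularized inversion

rel_err = np.linalg.norm(phi_e_naive - phi_e_true) / np.linalg.norm(phi_e_true)
print(f"cond(A) = {np.linalg.cond(A):.1e}, relative error = {rel_err:.1e}")
```

The forward product A @ phi_e_true is perfectly stable (the well-posed direct problem), while the reconstruction error dwarfs the 0.1% noise level, which is exactly why regularization is needed.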
Regularization methods are used to force a unique and smooth solution, with Tikhonov regularization (Tikhonov and Arsenin, 1977) being one of the main sets of methods for this aim, as it performs as well as other, more complex methods under realistic fibrillatory conditions (Figuera et al., 2016). This method imposes limits on the amplitudes or derivatives of the epicardial potentials in space, time, or both, so that they stay within the electrophysiological and electric-field limits of the heart (Rudy, 2013). Regularization is an approach to find an approximation of Φ_E that depends continuously on the noisy measurement data Φ_T; the chosen regularization should bring the estimate of Φ_E as close to its true value as the noise levels (on both Φ_T and A) permit. Prior information can be added to the regularization, such as further limiting the solution space (Kabanikhin, 2008). Through regularization, it is possible to find a solution to the inverse problem of electrocardiography. Figure 1 in the main text illustrates the standard steps needed for the implementation of ECGI systems. Values in the transfer matrix A correspond to estimated electrophysiological properties of nodes in a mesh describing the heart and body geometries, as well as the tissues and organs between the heart and the body surface. The mesh is usually generated from medical images (either CT or MRI). Through this approach, the regularized inverse of the matrix A that relates the geometries of the torso and heart in Eq. (2) can be calculated. After calculating the regularized inverse matrix, the potentials in the atrium can be estimated through the product of this matrix with the potentials acquired from the torso using body surface potential mapping (BSPM) systems.
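This last step can be sketched in a few lines. The transfer matrix and regularization parameter below are arbitrary illustrative placeholders, not values from a real torso model; the point is only that the regularized inverse is computed once and then applied to every BSPM frame by a matrix product:

```python
import numpy as np

rng = np.random.default_rng(1)
n_torso, n_heart, n_frames = 64, 50, 1000

A = rng.standard_normal((n_torso, n_heart))  # illustrative transfer matrix
lam = 1e-2                                   # illustrative regularization parameter

# Zero-order Tikhonov regularized inverse, computed once:
A_reg_inv = np.linalg.solve(A.T @ A + lam * np.eye(n_heart), A.T)

bspm = rng.standard_normal((n_torso, n_frames))  # stand-in BSPM recording (one column per frame)
phi_e = A_reg_inv @ bspm                         # estimated heart potentials, every frame

print(phi_e.shape)  # (50, 1000)
```

Because the regularized inverse does not depend on the recorded data, the per-frame cost of ECGI reduces to one matrix-vector product.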

Calculation of the elements of the transfer matrix
To calculate the elements of the transfer matrix A, it is necessary first to consider the unknown potential Φ(r⃗) (mV), given by the solution of the following boundary integral equation (de Munck, 1992):

(Ω(S; r⃗)/4π) Φ(r⃗) = Φ_∞(r⃗) − (1/4π) ∫_{S\ε(r⃗)} Φ(r⃗′) dΩ(r⃗′; r⃗),  (3)

which describes the potential Φ(r⃗) caused by a current source within a volume of constant and isotropic conductivity bounded by the surface S. Here, Φ_∞(r⃗) is the potential generated in a medium of infinite extension and Ω(S; r⃗) is the solid angle of surface S as seen from point r⃗. If r⃗ is positioned on S, the solid angle equals 2π. The notation S\ε(r⃗) indicates that the integral is taken over surface S with a small neighbourhood ε(r⃗) of r⃗ excluded, after which the limit ε(r⃗) → {r⃗} is taken.
The solution of Eq. (3) can be approximated by choosing a set of basis functions {h_n(r⃗)}, n = 1, …, N, and a set of discretization points {r⃗_m}, m = 1, …, N, such that

h_n(r⃗_m) = δ_nm,

where δ_nm is the Kronecker delta resulting from evaluating h_n(r⃗) at the discretization point r⃗_m. The unknown potential field Φ(r⃗) is expanded in terms of h_n(r⃗):

Φ(r⃗) ≈ Σ_n φ_n h_n(r⃗),

where φ_n are the coefficients of this expansion. With the above definitions, Eq. (3) can be approximated by a set of N linear equations,

Σ_n B_mn φ_n = Φ_∞(r⃗_m),  m = 1, …, N,

where the matrix elements B_mn are obtained by evaluating the solid-angle integrals of Eq. (3) for each basis function h_n at each discretization point r⃗_m. The simplest choice for the basis functions is obtained from the triangulation of S, making h_n(r⃗) equal to 1 on the n-th triangle and zero on the other triangles; the discretization points r⃗_m are then the centers of the triangles. In this approach the potential is described as a piecewise-constant function and, therefore, many triangles are necessary to represent the potential accurately. When the approximation is instead made by linear interpolation of the potential, the surface S is also subdivided into small triangles but, unlike the piecewise-constant approximation, the discretization points are the vertices of the mesh. The interpolation function h_n(r⃗) in this approximation is a piecewise-linear function of r⃗ on each triangle Δ with vertices r⃗_k, r⃗_l, r⃗_m, equal to 1 at vertex r⃗_n and 0 at the other vertices, thus satisfying the Kronecker-delta condition above. To calculate the matrix elements, the integration area is partitioned into the triangles adjacent to r⃗_n; in each such triangle, the normal n⃗ is independent of r⃗. The diagonal elements are treated separately, so that we may assume m ≠ n; each off-diagonal element B_mn then reduces to a sum of analytically computable solid-angle contributions over all triangles Δ_kln adjacent to r⃗_n, with A_kln the area of triangle Δ_kln.
To find the diagonal elements, note that Σ_n h_n(r⃗′) = 1 for every r⃗′ on S, so that a spatially constant potential must be reproduced exactly; consequently, the vector (1, 1, …, 1)ᵀ is an eigenvector of B corresponding to eigenvalue zero:

B (1, 1, …, 1)ᵀ = 0.

Through this property of B, its diagonal elements can be expressed in terms of the remaining matrix elements:

B_mm = − Σ_{n≠m} B_mn.

Therefore, for the calculation of the linear relation matrix (the transfer matrix A) that relates the geometries of the torso and the atria, it is necessary to triangulate the surfaces to generate meshes defined by vertices and faces, where each vertex is a discretization point. This triangulation is often derived from the segmentation of medical images of the torso.
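The zero-row-sum diagonal completion is easy to exercise numerically. In the sketch below, the off-diagonal entries of B are random placeholders (real entries would come from the solid-angle integrals), and the diagonal is completed so that B annihilates a spatially constant potential:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

# Off-diagonal entries stand in for the solid-angle integrals.
B = rng.standard_normal((n, n))
np.fill_diagonal(B, 0.0)

# Diagonal completion: B_mm = -sum_{n != m} B_mn,
# so that B @ (1, 1, ..., 1)^T = 0.
np.fill_diagonal(B, -B.sum(axis=1))

print(np.allclose(B @ np.ones(n), 0.0))  # True
```

This property reflects the fact that the potential in a volume conductor is only defined up to an additive constant.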

Non-proprietary algorithms
Of the studies that used non-proprietary algorithms for the inverse solution, most applied the zeroth-order Tikhonov regularization method (n = 13; 40%). The Bayesian maximum-a-posteriori regularization method outperformed the most common regularization techniques, but it requires prior information about the epicardial potentials, which is usually not available in clinical practice. Tikhonov-based methods performed as well as more complex techniques under realistic fibrillatory conditions.

Tikhonov regularization method
The Tikhonov regularized solution is obtained by minimizing an appropriate objective function (Pullan et al., 2011):

Φ̂_E = argmin_{Φ_E} ( ∥A Φ_E − Φ_T∥₂² + λ ∥R Φ_E∥₂² ),  (13)

where R is the N × N Tikhonov matrix and ∥·∥_p is the p-norm. The matrix R helps to restrict (hence to regularize) the inverse solution. The first term, ∥A Φ_E − Φ_T∥₂², represents the squared L2 error, while the second term restricts, in the spatial domain, the energy of the solution according to the choice of the particular Tikhonov matrix R. Here λ is called the regularization parameter, and it determines to what extent the final inverse solution depends on R. A higher λ leads to a smoother solution (i.e., it removes more noise), but it can also remove localised activation patterns (an over-smoothed solution) (MacLeod and Buist, 2010). Three Tikhonov matrices (Pullan et al., 2011) are commonly used in inverse electrocardiography. Zero-order Tikhonov regularization sets R = I, the identity matrix, which effectively limits the total magnitude of the solution. First-order Tikhonov regularization sets R = G, a discrete approximation of the surface gradient operator, which limits the slope of the solution. Finally, second-order Tikhonov regularization sets R = L, a discrete approximation of the surface Laplacian operator, to restrict the rate of change of the slope. The use of the zero-order Tikhonov method, even with a constant regularization parameter, is justified by the fact that it is a good alternative for solving the inverse problem of electrocardiography during AF. Comparisons between different regularization methods used to calculate atrial potentials during AF suggest that the zero-order Tikhonov regularization method might be insensitive to moderate changes in the regularization parameter. In addition, results for different values of signal-to-noise ratio (SNR) showed that no algorithm was significantly more robust with regard to changes in noise level (Figuera et al., 2016).
With Tikhonov regularization, we can write a closed-form solution of Eq. (13) as

Φ̂_E = (AᵀA + λRᵀR)⁻¹ Aᵀ Φ_T,  (14)

with Aᵀ denoting the transpose of A. Hence, in the unregularized case (λ = 0), the Moore-Penrose pseudoinverse is obtained. Setting R = I in Eq. (14), the solution for zero-order Tikhonov regularization (Pullan et al., 2011) is

Φ̂_E = (AᵀA + λI)⁻¹ Aᵀ Φ_T.  (15)
The GMRes method is an iterative numerical method that does not require imposing constraints on the solution (Ramanathan et al., 2003; Calvetti et al., 2000). In this method, the inverse of the matrix A is approximated by its projection onto a Krylov subspace K(n), which comprises the set of all linear combinations of the vectors {Φ_T, A Φ_T, A² Φ_T, …, A^{n−1} Φ_T} (Ramanathan et al., 2003). The GMRes method requires A to be square and Φ_T to be normalised (Calvetti et al., 2000). As such, the function to be minimised becomes (cf. Eq. 13)

ŷ = argmin_y ∥ ∥Φ_T∥₂ e₁ − H̄_n y ∥₂,

with H̄_n denoting an upper Hessenberg matrix obtained from an Arnoldi decomposition of A, e₁ the first axis vector, and ŷ the coordinates, in the Krylov basis, of the approximate solution relating the torso and cardiac potentials.
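A hedged sketch of GMRes in practice, using SciPy's implementation on a deliberately well-conditioned square system (a placeholder, not an ECGI transfer matrix; with real, ill-conditioned matrices the number of iterations itself acts as the regularization knob, with early stopping playing the role of λ):

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(4)
n = 50

# Square, diagonally dominant system so the demo converges quickly.
A = rng.standard_normal((n, n)) + n * np.eye(n)
phi_e_true = rng.standard_normal(n)
phi_t = A @ phi_e_true

# GMRes iterates within a growing Krylov subspace; info == 0 signals convergence.
phi_e, info = gmres(A, phi_t, atol=0.0)

rel_err = np.linalg.norm(phi_e - phi_e_true) / np.linalg.norm(phi_e_true)
print(info, rel_err)
```

No explicit constraint matrix R appears anywhere, which is the practical meaning of "does not require imposing constraints on the solution."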
The most popular SVD-based algorithms are the truncated (TSVD) and damped (DSVD) SVD. In TSVD, the SVD of the matrix A is truncated such that all singular-value components that represent noise are removed. As the components representing noise usually have small singular values, the truncation is implemented by retaining the k components with the largest singular values (Hansen, 2010). The value of k needs to be set a priori to obtain a solution, and the function to be minimised reads

Φ̂_E = argmin_{Φ_E} ∥A_k Φ_E − Φ_T∥₂²,

where A_k is the rank-k approximation of A retaining the k largest singular values. DSVD is a less 'brute-force' approach than TSVD, as it applies a smooth filtering of the singular-value components rather than forcing an inclusion/exclusion decision on the components included in the final solution (Figuera et al., 2016).
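A minimal TSVD solver, assuming a generic full-rank matrix (an illustrative placeholder, not a real transfer matrix): keeping all components reproduces the least-squares solution, while truncation discards the small-singular-value, noise-dominated components:

```python
import numpy as np

def tsvd_solve(A, b, k):
    # Invert A keeping only its k largest singular components.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 30))  # illustrative full-rank matrix
b = rng.standard_normal(40)

x_full = tsvd_solve(A, b, k=30)    # all components: least-squares solution
x_trunc = tsvd_solve(A, b, k=10)   # truncated: smoother, more biased

print(np.allclose(x_full, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```

DSVD would instead multiply each term by a smooth filter factor (e.g., sᵢ²/(sᵢ² + λ)) rather than the hard 0/1 cutoff used here.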
TV (total variation) regularization applies an ℓ1-norm penalty on the inverse solution, rather than the ℓ2-norm used in Tikhonov regularization. The function to minimise therefore becomes (Figuera et al., 2016):

Φ̂_E = argmin_{Φ_E} ( ∥A Φ_E − Φ_T∥₂² + λ ∥R Φ_E∥₁ ).

The Bayesian approach is based on a priori knowledge of the spatial covariance matrix and mean of the epicardial potentials. Setting this mean to zero gives (Figuera et al., 2016):

Φ̂_E = C_E Aᵀ (A C_E Aᵀ + C_N)⁻¹ Φ_T,

with C_E and C_N the covariance matrices of the epicardial potentials and the noise, respectively. This approach only accounts for the spatial correlation of the potentials. A temporal correlation can be included based on the isotropy assumption described by Greensite (2003). Here, the covariance matrix is extended to C = C_T ⊗ C_S, with C_T the temporal covariance matrix and C_S the spatial covariance matrix (Figuera et al., 2016).
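The zero-mean Bayesian (MAP) estimator can be sketched as follows; all matrices are illustrative placeholders. With white covariances (C_E = I, C_N ∝ I) the estimator coincides with zero-order Tikhonov regularization, so the Bayesian view subsumes Tikhonov as a special case:

```python
import numpy as np

rng = np.random.default_rng(6)
n_torso, n_heart = 40, 30
A = rng.standard_normal((n_torso, n_heart))  # illustrative transfer matrix

C_E = np.eye(n_heart)         # prior spatial covariance of epicardial potentials
C_N = 0.01 * np.eye(n_torso)  # covariance of the torso measurement noise

phi_e_true = rng.standard_normal(n_heart)
phi_t = A @ phi_e_true + 0.1 * rng.standard_normal(n_torso)

# Zero-mean MAP estimator: C_E A^T (A C_E A^T + C_N)^{-1} phi_t
gain = C_E @ A.T @ np.linalg.inv(A @ C_E @ A.T + C_N)
phi_e_map = gain @ phi_t
print(phi_e_map.shape)  # (30,)
```

By the push-through identity, this gain equals the zero-order Tikhonov operator (AᵀA + 0.01 I)⁻¹Aᵀ for these covariance choices; richer priors (non-white C_E, or the Kronecker extension C_T ⊗ C_S) are what distinguish the Bayesian approach in practice.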