Impact of the Endocardium in a Parameter Optimization to Solve the Inverse Problem of Electrocardiography

Electrocardiographic imaging aims at reconstructing cardiac electrical events from electrical signals measured on the body surface. The most common approach relies on the inverse solution of the Laplace equation in the torso to reconstruct epicardial potential maps from body surface potential maps. Here we apply a method based on a parameter identification problem to reconstruct both activation and repolarization times. From an ansatz of the action potential, based on the Mitchell-Schaeffer ionic model, we compute body surface potential signals. The inverse problem is reduced to the identification of the parameters of the Mitchell-Schaeffer model. We investigate whether solving the inverse problem with the endocardium improves the results. We solved the parameter identification problem on two different meshes: one with only the epicardium, and one with both the epicardium and the endocardium. We compared the results on both the heart (activation and repolarization times) and the torso. The comparison was done on validation data of sinus rhythm and ventricular pacing. We found similar results with both meshes in 6 out of 7 cases; the presence of the endocardium slightly improved the activation times. This was most visible on a sinus beat, leading to the conclusion that inclusion of the endocardium would be useful in situations where endo-epicardial gradients in activation or repolarization times play an important role.


INTRODUCTION
Electrocardiographic imaging aims at reconstructing cardiac electrical events from electrical signals measured on the body surface. The most common approach relies on the inverse solution of the Laplace equation in the torso to reconstruct epicardial potential maps from body surface potential maps (BSPMs) (Wang and Rudy, 2006). This technique requires a regularization strategy, for example Tikhonov regularization, to deal with the ill-posedness of the problem. However, because this regularization is applied to potential patterns, it suppresses the steep voltage gradients that characterize activation wavefronts. This leads to prominent errors such as artefactual block lines in the reconstructed activation map (Duchateau et al., 2017; Ravon et al., 2017).
Other methods have been designed to reconstruct the activation times directly (van Oosterom and Oostendorp, 1992; Liu et al., 2006). While Liu et al. (2006) look for the three-dimensional activation sequence in the ventricular muscle, van Oosterom and Oostendorp (1992) reconstruct activation on both the epicardium and the endocardium. van Dam et al. (2009) proposed a method that solved for both the activation and the repolarization. Based on an equivalent double-layer model, it updates activation and repolarization times alternately. Ghodrati et al. (2006) developed two methods to reconstruct epicardial information. One optimizes the position of the depolarization front at each time step. The second reconstructs epicardial potentials with a regularization term based on an estimate of the wavefront behavior. These approaches still rely on a Tikhonov-like regularization technique. Recently, studies that reconstruct both the activation and the recovery with a novel regularization technique have been published (Cluitmans et al., 2017, 2018). The regularization is done through an electrophysiological input, and the potentials on the torso are sparsely represented to deal with the ill-posedness of the problem. Others used a probabilistic approach to find parameters (Rahimi et al., 2016; Dhamala et al., 2018). The former used the two-variable Aliev-Panfilov model (Aliev and Panfilov, 1996) to model the AP. Their aim was to probabilistically personalize a model parameter using machine learning methods. The estimation was made on a whole-heart 3D model, from BSPMs or extracellular potentials. In the latter the parameters of the model are assumed and the behavior of the wavefront is optimized. The same group worked on regularizing both the spatial and the temporal propagation of the action potential (Wang et al., 2010). The method relies on a two-variable propagation model with fixed parameters in a volumetric myocardium. It was later improved in Ghimire et al. (2017). Note that in these studies constraints on the spatial distribution are considered.
In a previous study (Ravon et al., 2017) we introduced a new technique that aims at recovering directly both the activation and repolarization maps on the epicardium. The general idea consists in looking for an ansatz of an action potential (AP) in the form of a function v(P; t) parameterized by a small number of parameters P (e.g., fewer than three). The upstroke of this AP is assumed to be at t = 0. From the knowledge of the activation times τ(x) on the heart, we can map the AP to a space- and time-dependent function Vm(t, x) = v(P; t − τ(x)). In addition, the parameters P may have space-dependent values distributed on the surface, which enriches the model but increases the number of unknown parameters. This transmembrane voltage function Vm(t, x) is then projected to body surface potential signals. The method searches for the parameters P and the activation map τ that best fit the target body surface signals on a given time interval. It amounts to solving a nonlinear least squares parameter identification problem with a small number of (possibly distributed) parameters. We previously represented the action potential as the product of two logistic functions, as proposed by Van Oosterom and Jacquemet (2005). The final parameter identification problem (Ravon et al., 2017) consisted of identifying three distributed parameters, given the BSPM of a complete ventricular activation and repolarization sequence (i.e., a QRST waveform). This method was demonstrated to give a better range of activation times (ATs) and a smoother AT distribution than a solution based on the Laplace equation with zero-order Tikhonov regularization. However, it only reconstructed APs on the epicardium. In general, large and physiologically very relevant differences in AT and repolarization time (RT) can exist across the wall. Therefore, in this study we investigated whether including the endocardium improves the results.
To this aim, we tested our method on in silico data with and without important transmural gradients. The parameter identification problem was solved either on the epicardium only, or on both the epicardium and endocardium. We found that the quality of the reconstructed activation and repolarization maps (in terms of correlation coefficients) was similar when transmural gradients were small, but that inclusion of the endocardium improved the solution in a case where these gradients were important.
As compared to Ravon et al. (2017), we also changed the representation of the AP from the product of two logistic functions to the solution of the two-variable ionic model of Mitchell and Schaeffer (2003), to have a more relevant AP shape without increasing the number of parameters.
We resorted to a discretize-then-optimize strategy: we first set up the direct problem that maps the parameters P and the activation map τ to the voltage Vm(t, x), and then to the BSPM φT. This problem was discretized using triangulated surfaces. The parameters were identified in the discrete problem using a gradient descent method on a discrete least squares cost function.

Mapping the Parameters to the Transmembrane Voltage
The parameterization was based on the two-current model proposed by Mitchell and Schaeffer (2003). This model describes the dynamics of two functions: the voltage v and an auxiliary variable h. Both quantities are dimensionless, scaled between 0 and 1, and solve the following ordinary differential equations:

dv/dt = h v^2 (1 − v)/τ_in − v/τ_out,   (1)

dh/dt = (1 − h)/τ_open if v < v_gate,   dh/dt = −h/τ_close if v ≥ v_gate.   (2)

The five parameters were originally chosen as (Mitchell and Schaeffer, 2003): τ_in = 0.3 ms, τ_out = 6 ms, τ_open = 120 ms, τ_close = 150 ms, and v_gate = 0.13. The steady state of this model is (v, h) = (0, 1). The voltage v takes the shape of an AP if we set the initial condition to (v(0), h(0)) = (0.15, 1); see the red curve in Figure 1.
The function v(t), defined for t ≥ 0 as the solution of the initial value problem (1)-(2) with (v(0), h(0)) = (0.15, 1), was extended by 0 for t < 0. It was our ansatz of an AP, denoted by v(P; t) for t ∈ R, where in general P = {τ_in, τ_out, τ_open, τ_close, v_gate}. For instance, the blue curve in Figure 1 is the graph of v(P; t − τ) for an activation time τ = 50 ms and the default values for P stated above.
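As an illustration, the AP ansatz can be reproduced numerically. The sketch below is in Python rather than the Matlab used in the study; the function name, time step, and forward Euler scheme are our own choices. It integrates the Mitchell-Schaeffer equations from (v(0), h(0)) = (0.15, 1) and extends the solution by 0 for t < 0:

```python
import numpy as np

def ms_ap(t, tau_in=0.3, tau_out=6.0, tau_open=120.0,
          tau_close=150.0, v_gate=0.13, dt=0.05):
    """AP ansatz: integrate the Mitchell-Schaeffer model with forward
    Euler from (v, h) = (0.15, 1), sample v at times t (ms), and
    return 0 for t < 0."""
    t_end = float(np.max(t))
    n = int(np.ceil(t_end / dt)) + 1
    ts = np.arange(n) * dt
    v = np.empty(n); h = np.empty(n)
    v[0], h[0] = 0.15, 1.0
    for k in range(n - 1):
        # dv/dt = h v^2 (1 - v)/tau_in - v/tau_out
        dv = h[k] * v[k]**2 * (1.0 - v[k]) / tau_in - v[k] / tau_out
        # dh/dt = (1 - h)/tau_open below the gate, -h/tau_close above it
        dh = (1.0 - h[k]) / tau_open if v[k] < v_gate else -h[k] / tau_close
        v[k+1] = v[k] + dt * dv
        h[k+1] = h[k] + dt * dh
    out = np.interp(t, ts, v)
    return np.where(np.asarray(t) >= 0.0, out, 0.0)

# AP shifted by an activation time tau = 50 ms, as in Figure 1
t = np.arange(0.0, 400.0, 1.0)
ap = ms_ap(t - 50.0)
```

With the default parameters this gives an upstroke at t = τ and an AP duration on the order of 200-250 ms, comparable to the curves in Figure 1.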
In practice, the parameters τ_in and τ_open define the upstroke of the AP and were fixed at their default values τ_in = 0.3 ms and τ_open = 120 ms. Similarly, the parameter v_gate defines the excitability threshold and was fixed at v_gate = 0.13. Hence, only the parameters τ_out and τ_close were treated as unknowns, because they are directly related to the AP duration: τ_close can be seen as the plateau phase duration, whereas τ_out is linked to the speed of the repolarization. τ_out also has a small impact on the amplitude of the voltage v.
In addition, we rescaled the voltage v by a factor A, so as to fit the scaling of the measured BSPMs. Hence, we considered the transmembrane potential (TMP)

Vm(t, x) = A v(P; t − τ(x)).   (3)

FIGURE 1 | Red curve: voltage v(P; t) with the default parameters P. Blue curve: TMP Vm(t) = v(P; t − τ) with τ = 50 ms.
The parameter τ was distributed on the heart surface by design of the method. The parameters A, τ_out, and τ_close may be constant or distributed. Since the AP duration varies across the heart surface, we chose to consider distributed parameters τ_out and τ_close.

Projecting the Transmembrane Voltage to the Body Surface Potential Map
Afterwards, we mapped the transmembrane voltage Vm(x, t) to extracellular potentials φe(x, t) as in Potse et al. (2009):

φe(x, t) = V̄m(t) − Vm(x, t),   (4)

where V̄m(t) = (1/|S|) ∫_S Vm(x, t) dS(x) is the spatial average of Vm over the heart surface S (epicardium only, or epicardium and endocardium). The rationale of this formula is a rewriting of the bidomain model coupled with the hypothesis that the conductivity tensor fields in both the extra- and intracellular domains are homogeneous and isotropic; the ratio of conductivities is hidden in the factor A. Finally, we projected the extracellular potentials φe(x, t) to the body surface potentials φT(y, t) for any point y on the torso surface as follows:

φT(y, t) = (1/4π) ∫_S φe(x, t) ∂/∂n(x) (1/|x − y|) dS(x),   (5)

where n(x) is the outward unit normal to S at x. This amounts to approximating the solution of the Laplace equation outside the heart domain, assuming an infinite homogeneous medium (Malmivuo and Plonsey, 1995; Macfarlane et al., 2010).

Discrete Surfaces and Approximations
In practice, the endocardial and epicardial surfaces were discretized by two separate triangular meshes (Figure 2) with N_H vertices in total. For the sake of computational simplicity, the mappings (4) and (5) were replaced by their discrete counterparts:

φe(x_i, t) = V̄m(t) − Vm(x_i, t),   φT(y, t) = (1/4π) Σ_i a_i φe(x_i, t) ∂/∂n(x_i) (1/|x_i − y|),   (6)

where the sum runs over the N_H mesh vertices x_i, a_i is the surface area attached to vertex x_i, V̄m(t) is the area-weighted average of the Vm(x_i, t), and Vm(x_i, t) was given by the mapping (3) for given vertex-wise parameters (τ_out,i, τ_close,i, τ_i) and the global amplitude A. Hence there are 1 + 3N_H parameters to be identified.
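A minimal sketch of the discrete forward map follows, assuming per-vertex areas a_i and outward unit normals n_i are available; the double-layer kernel and its sign convention are our assumption for the infinite-medium projection, and all names are illustrative:

```python
import numpy as np

def forward_bsp(v_m, verts, normals, areas, y_elec):
    """Sketch of the discrete forward map: phi_e at each heart vertex,
    then an infinite-medium double-layer quadrature sum to each torso
    electrode. v_m: (N,) per-vertex TMP at one time instant,
    verts: (N,3), normals: (N,3) unit normals, areas: (N,),
    y_elec: (M,3) electrode positions. Returns (M,) potentials."""
    vm_bar = np.sum(v_m * areas) / np.sum(areas)   # area-weighted average
    phi_e = vm_bar - v_m                           # per-vertex phi_e
    r = y_elec[:, None, :] - verts[None, :, :]     # (M,N,3): y_j - x_i
    dist = np.linalg.norm(r, axis=2)               # (M,N)
    # normal derivative of 1/|x - y| at x_i, evaluated per electrode
    kernel = np.einsum('mni,ni->mn', r, normals) / dist**3
    return (phi_e * areas * kernel).sum(axis=1) / (4.0 * np.pi)
```

A useful sanity check is that a spatially uniform Vm produces φe = 0 everywhere, hence zero body surface potentials.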

The Parameter Identification Problem
We looked for the parameter set P = (A, τ_out, τ_close, τ) ∈ R^(1+3N_H) that minimized the least squares error

J(P) = Σ_j Σ_k ( [φT(y_j, t_k) − φ̄T(t_k)] − [φ⋆(y_j, t_k) − φ̄⋆(t_k)] )^2,   (7)

where (y_j), j = 1...N_T, were the N_T electrode locations on the body surface, (t_k), k = 1...T_max, was the time sequence of interest, (φ⋆(y_j, t_k)) were the measured BSPMs, and (φT(y_j, t_k)) were the BSPMs computed according to equation (6). For each time t_k, the spatial averages φ̄T(t_k) and φ̄⋆(t_k) over the electrodes were subtracted, because potentials are defined only up to a constant. This constant can be set by a reference electrode on the torso, by the Wilson Central Terminal (WCT), or by the mean of all the electrodes; we chose the mean. Like the WCT, this also reduces noise, and it rescales the data around their mean value.
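The cost function can be sketched as follows (a Python illustration; the study's code was in Matlab, and the function name is ours). Subtracting the per-time mean over the electrodes makes the misfit invariant to any time-dependent constant added to the potentials:

```python
import numpy as np

def cost_J(phi_T, phi_star):
    """Least-squares misfit between computed and measured BSPMs,
    both arrays of shape (electrodes, time). At each time t_k the
    mean over the electrodes is removed, since potentials are
    defined only up to a constant."""
    d_comp = phi_T - phi_T.mean(axis=0, keepdims=True)
    d_meas = phi_star - phi_star.mean(axis=0, keepdims=True)
    return 0.5 * np.sum((d_comp - d_meas) ** 2)
```

Adding the same offset to all electrodes at any time instant leaves the cost unchanged, which mirrors the choice of the electrode mean as the reference.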
The total number of data elements is finally T_max N_T, which may be compared to the number of unknown parameters 1 + 3N_H. This nonlinear least squares problem was solved by a gradient descent method with the RMSprop update (Tieleman and Hinton, 2012). This is an adaptive learning rate method: at each iteration, the update reads

κ ← γ κ + (1 − γ) ∇J(P) ⊗ ∇J(P),   (8)

P ← P − η ∇J(P) ⊘ κ^(∘1/2),   (9)

with κ ∈ R^(1+3N_H) an intermediate variable, η ∈ R the learning rate, and γ = 0.9. In equations (8) and (9) the operators ⊗, ⊘, and ∘ denote the Hadamard (element-wise) product, division, and power, respectively. The learning rate was not fixed: an optimal value for η was chosen at each iteration in the range [10^−5, 10^2]. The gradient of the cost function J with respect to the unknown parameters P was calculated analytically. The gradient descent method requires an initial guess. We arbitrarily chose A = 10, the default values τ_out,i = 6 ms and τ_close,i = 150 ms for all i, and a constant activation map τ_i = τ_0 ∈ R. Since the initialization was the same for all the nodes, the initial torso potentials were zero. The optimization ended when the cost function J and its gradient remained constant. The code was written in Matlab and was not parallelized. Computational time was similar for all the cases, about one day per case; a more flexible stopping criterion and parallelization would reduce it.
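The RMSprop update can be sketched as follows; here the learning rate is fixed rather than optimized per iteration as in the study, and a small constant eps is added to the denominator for numerical safety (a common implementation detail, assumed here):

```python
import numpy as np

def rmsprop(grad, p0, eta=0.02, gamma=0.9, eps=1e-8, iters=2000):
    """Gradient descent with the RMSprop update (Tieleman & Hinton, 2012):
    kappa <- gamma*kappa + (1-gamma) * g*g   (element-wise square)
    p     <- p - eta * g / (sqrt(kappa)+eps) (element-wise division)."""
    p = np.array(p0, dtype=float)
    kappa = np.zeros_like(p)
    for _ in range(iters):
        g = grad(p)
        kappa = gamma * kappa + (1.0 - gamma) * g * g
        p -= eta * g / (np.sqrt(kappa) + eps)
    return p

# Sanity check on a simple quadratic J(p) = |p|^2 / 2, whose gradient is p
p_opt = rmsprop(lambda p: p, [5.0, -3.0])
```

Because the step is normalized by the running root mean square of the gradient, each component moves at a rate bounded by roughly η/√(1 − γ), which makes the method robust to badly scaled parameters such as the mix of amplitude and time parameters here.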

Validation Data
In order to create test data, simulations were run on an anatomically realistic 3D geometry of the torso, including heart, blood vessels, lungs, and skeletal muscle (Figure 3). Each organ had its own conductivity. Propagating APs were generated using a monodomain reaction-diffusion model with a TNNP membrane model (Ten Tusscher et al., 2004) on an anisotropic heart model at 0.2 mm resolution. To compute φT, the computed transmembrane current density in the myocardium was projected on an inhomogeneous heart-torso model with an anisotropic skeletal muscle layer at 1 mm resolution, and the potential field φT was found by solving an anisotropic Laplace problem using a finite-difference method (Potse, 2018). Boundary conditions did not match between the monodomain model and the Laplace equation; this approach leads to slightly different extracellular potentials within a few hundred micrometers from the surface only (Potse et al., 2006). All simulations were performed with a recent version of the Propag-5 software (Krause et al., 2012) on a BullX cluster machine.

FIGURE 3 | Heart-torso mesh used for the computation of validation data. The 252-electrode body surface mapping set is shown. Red electrodes mark two locations used in Figure 8.

Frontiers in Physiology | www.frontiersin.org
We had access to the activation times on the epicardium and the endocardium (called reference ATs in the following). Repolarization times were computed from the extracellular potentials as the time of the steepest positive slope during the repolarization phase.
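The RT criterion can be sketched as follows; the synthetic signal, window bounds, and function name are illustrative only, not taken from the study:

```python
import numpy as np

def repolarization_time(t, phi_e, window=(200.0, 450.0)):
    """RT as the time of the steepest positive slope of the
    extracellular potential within a repolarization window (ms)."""
    mask = (t >= window[0]) & (t <= window[1])
    slope = np.gradient(phi_e[mask], t[mask])
    return t[mask][np.argmax(slope)]

# Synthetic T-wave-like upslope with its steepest point at t = 300 ms
t = np.arange(0.0, 500.0, 1.0)
phi = np.tanh((t - 300.0) / 15.0)
rt = repolarization_time(t, phi)
```

Restricting the search to a repolarization window avoids picking up the much steeper (negative-then-positive) deflections around the depolarization phase.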

RESULTS
On the same model anatomy, seven different simulations were run: one sinus rhythm (SR) and six different pacing cases. The description of the cases can be found in Table 1. For all the cases, we solved the parameter identification problem on the epicardium-only mesh (Mesh1) and on the epicardium and endocardium mesh (Mesh2). Mesh1 and Mesh2 had 641 and 534 vertices respectively. We will describe the results in detail for two cases: right-ventricular pacing and sinus rhythm.

Epicardial Ventricular Pacing
The reconstructed activation maps in case of right-ventricular pacing were of the same quality on both meshes. In particular the late ATs were not well reconstructed in both cases (first row, dark blue part in Figure 4). The correlation coefficient (CC) and relative error (RE) between ATs were close for both meshes, about 0.7 and 0.3 respectively. However, Figure 5 shows that a part of the reference ATs between 120 and 160 ms was less well reconstructed with Mesh1 than with Mesh2. For both meshes some reference ATs between 100 and 150 ms were not well reconstructed (Figure 5, left, black box). These points were located between the two valves, where the reconstruction is more difficult. The pacing site was better localized with Mesh1 (11.4 mm from the actual position, geodesic distance) than with Mesh2 (16 mm), as shown in Figure 12. For Mesh2 we also calculated CC for the points on the epicardium (CC = 0.72) and on the endocardium (CC = 0.77). With the endocardium we did not improve the accuracy on the epicardium compared to the results with the epicardium only.
The benefit of considering the endocardium was the ability to look for gradients of depolarization between the endocardium and epicardium. For each point on the epicardium, we selected the closest point on the endocardium and computed the delay in activation. Figure 6 presents box plots of these delays for the 7 cases. Delays existed in the reference ATs (first box), and the delays we obtained were smaller on average. We also obtained large delays (more than 20 ms, and up to 135 ms) that were not consistent with the actual ones. On both meshes, the quality of the repolarization maps was poorer than that of the activation maps (Figure 4, second row). The CC was slightly better with Mesh1 (0.55 vs. 0.51). This was highlighted on the scatter plot, especially for the earlier RTs (Figure 5, right). Figure 7 shows the evolution in time of the CC between the measured BSPMs and the reconstructed ones. Reconstructed torso potentials were computed from equations (3), (4), and (5) with the optimized parameters and the corresponding mesh, Mesh1 or Mesh2. On both meshes the behavior was similar: at the beginning and the end of the simulation the reconstruction was less accurate. As shown by Figure 8, after 400 ms measured and reconstructed BSPMs are close to zero, which explains why the CC dropped. On average, the CC was 0.88 with Mesh1 and 0.9 with Mesh2. On both electrodes, the depolarization and repolarization phases were quite well fitted for the two meshes, with only slight differences between the reconstructed BSPMs. We also calculated the root mean square error (RMSE) between the measured BSPMs and the reconstructed ones (Figure 9). Two peaks can be seen: one corresponding to the depolarization phase and the second to the repolarization phase. They were mainly due to the amplitude: the optimized amplitude did not allow fitting the signals on all the electrodes (Figure 8). The RMSE was similar for the two meshes.
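The endo-epicardial delay computation (closest endocardial point for each epicardial point) can be sketched with a k-d tree; the names and the sign convention (epicardial AT minus endocardial AT) are our own choices:

```python
import numpy as np
from scipy.spatial import cKDTree

def endo_epi_delays(epi_pts, epi_at, endo_pts, endo_at):
    """For each epicardial vertex, pair it with the closest endocardial
    vertex and return the transmural activation delay epi_AT - endo_AT.
    epi_pts: (Ne,3), endo_pts: (Nn,3), ATs: matching 1-D arrays (ms)."""
    tree = cKDTree(endo_pts)
    _, idx = tree.query(epi_pts)   # nearest endocardial vertex index
    return epi_at - endo_at[idx]
```

The box plots in Figure 6 then summarize the distribution of these per-vertex delays for each case.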

Sinus Rhythm
It is well known that the QRS duration is shorter in a sinus beat than in a paced beat. Moreover, there were multiple breakthroughs in the myocardium. For these reasons it was harder to obtain a satisfying reconstruction than in the pacing cases. For both meshes the reconstructed total activation time was longer than the actual one. The CC and RE were better with the endocardium than without, but still not as good as in the pacing cases (Figure 10, left). For Mesh2 we also calculated the CC separately for the points on the epicardium (CC = 0.64) and on the endocardium (CC = 0.57). With the endocardium we improved the accuracy on the epicardium (CC = 0.64) compared to the results with Mesh1 (CC = 0.49).
We also looked at the delays between endocardium and epicardium (Figure 6). These were similar in the reference ATs for the SR and RV pacing cases (first and third boxes). Since the total activation time (TAT) is smaller in a sinus beat, these gradients were more important relative to the TAT than in RV pacing. We reconstructed different delays for these two cases. The delays were not reconstructed as well for the SR as for the pacing cases. Indeed, as shown in Figure 11, there was a gradient of activation on the left ventricular free wall that we did not recover. Similarly, there were delays in the activation of the septum that we did not reconstruct.
The CC and RE for the repolarization times were better with Mesh1 (0.68 and 0.1, respectively) than with the endocardium included (0.51 and 0.18) (Figure 10, right). Indeed, with the endocardium the range of RTs was much larger, from t = 108 ms to t = 628 ms, whereas the actual range was from t = 259 ms to t = 393 ms.
Finally we compared the signals on the torso. As in the pacing case, CC and RMSE evolved in the same way for both meshes, with close values over time. In both cases the CC dropped after 350 ms because reconstructed T waves sometimes ended later than the real ones. In the simulation the heart was almost at rest after 350 ms, which was not the case with our optimized parameters. On average, the CC was 0.83 with Mesh1, and 0.87 with Mesh2.

Sensitivity to the Initialization
In order to test whether the method was sensitive to the initialization, we solved the inverse problem with two other triplets. The results presented above were obtained from the triplet (τ_i, τ_out,i, τ_close,i) = (60, 6, 150). The second and third triplets were (75, 5, 130) and (75, 6, 15), respectively. The results are presented in Table 2. The three initializations ended with very close results: the CCs for ATs and RTs were in the same range, as were those for the BSPMs. Moreover, for all three triplets, the method gave more accurate ATs with Mesh2, while RTs were better reconstructed with Mesh1. Changing the initial ATs did not improve the accuracy of the reconstructed ATs. Finally, the reconstructed torso potentials were very close to each other for the three initializations (CC between 0.83 and 0.9). In particular, the QRS complex and the T wave were fitted in the same way.

All the Cases
We present the results for all the cases in Table 3. A box plot representation can be found in the Supplementary Material, as well as activation, repolarization, and APD90 maps for all the cases. In cases 4 and 6 the CCs of ATs were better with Mesh2. In all other cases, the CCs were similar for both meshes. In all cases, solving the inverse problem with Mesh2 gave ATs on the epicardium at least as accurate as with Mesh1. Optimized RTs were better with Mesh2 in only 2 cases: pacing at the basis of the pulmonary vein (case 6) and pacing on the septum (case 7). Figure 6 shows the delays in activation: on average we reconstructed smaller delays in all cases. Concerning the reconstructed BSPMs, averaged CC and RMSE are given in Table 3. Except in case 7, the averaged CCs were very similar for both meshes, and they kept very close values over time. We observed the same behavior for the RMSE in all the cases. The lower averaged CC in case 7 with Mesh2 was due to a shorter total activation time: late ATs were not well reconstructed.
A statistical t-test was performed on the CCs for ATs, RTs, and BSPMs. The resulting p-values were 0.5, 0.41, and 0.28, respectively, showing no significant differences between the two meshes.
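Such a test can be sketched as follows; we assume a paired t-test over the 7 cases (the study does not state which variant was used), and the CC values below are hypothetical placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-case AT correlation coefficients for the two meshes
cc_mesh1 = np.array([0.70, 0.49, 0.75, 0.60, 0.78, 0.55, 0.65])
cc_mesh2 = np.array([0.72, 0.64, 0.74, 0.68, 0.77, 0.63, 0.60])

# Paired test: the same 7 cases are solved on both meshes
t_stat, p_value = ttest_rel(cc_mesh1, cc_mesh2)
```

A paired design is natural here because each case yields one CC per mesh, so the per-case differences are the quantity of interest.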
We computed the geodesic distance between the actual pacing site locations and the one given by the inverse solution for cases 1, 4, and 6 (epicardial pacing). For endocardial pacing (cases 3, 5, and 7) we computed the distance between actual and reconstructed breakthrough on the epicardium.
From the optimization, the pacing site (or breakthrough site) was identified as the mesh node with the earliest AT (restricted to the epicardium for breakthroughs). We added a visual check to exclude irrelevant, isolated, early ATs. Results can be found in Figure 12. In most cases, the distance was smaller with Mesh1 than with Mesh2. However, except for case 6, the site identified with Mesh2 was a neighbor of the actual site, so the difference in mesh density could explain the smaller distances with Mesh1.
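The localization step can be sketched as follows, approximating the geodesic distance by the shortest edge path on the triangulated surface (the study's exact geodesic computation is not specified, so Dijkstra on the edge graph is our assumption, and all names are illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def localization_error(verts, tris, at, true_site):
    """Identify the pacing site as the vertex with the earliest AT and
    return it with its geodesic (shortest edge-path) distance to the
    true site. verts: (N,3), tris: list of vertex-index triples,
    at: (N,) activation times, true_site: vertex index."""
    pairs = set()                      # undirected edges, deduplicated
    for a, b, c in tris:
        pairs.update([(a, b), (b, a), (b, c), (c, b), (c, a), (a, c)])
    i = np.array([p[0] for p in pairs])
    j = np.array([p[1] for p in pairs])
    w = np.linalg.norm(verts[i] - verts[j], axis=1)   # edge lengths
    graph = csr_matrix((w, (i, j)), shape=(len(verts), len(verts)))
    found = int(np.argmin(at))
    dist = dijkstra(graph, indices=true_site)
    return found, float(dist[found])

# Toy example: a unit square split into two triangles
verts = np.array([[0.,0.,0.],[1.,0.,0.],[1.,1.,0.],[0.,1.,0.]])
tris = [(0, 1, 2), (0, 2, 3)]
found, err = localization_error(verts, tris, np.array([5., 3., 1., 4.]), 0)
```

Deduplicating the edges matters: building the sparse matrix from raw triangle edges would sum the weights of edges shared by two triangles and overestimate the path lengths.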
We looked at the AP duration. For the 7 cases the reference APD90 varied between 225 and 285 ms. A difference was clearly visible between the endocardium and the epicardium, which we were not able to reproduce with Mesh2. However, APD90s were similar on the epicardium for both meshes. Our method tended to reconstruct maximal APD90s much higher than 285 ms, especially in cases 1, 2, and 7.
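APD90 extraction from a reconstructed AP trace can be sketched as follows; the exact definition used in the study is not given, so we assume the common one (duration above 10% of the peak-to-rest amplitude, i.e., until 90% repolarization):

```python
import numpy as np

def apd90(t, v):
    """APD90 of a single AP trace: time from the upstroke crossing of
    the 10% amplitude level until v recovers below that level."""
    rest, peak = v.min(), v.max()
    thr = rest + 0.1 * (peak - rest)      # 90% repolarization level
    above = np.flatnonzero(v >= thr)
    return t[above[-1]] - t[above[0]]
```

Applied to the optimized Vm traces, this yields one APD90 value per mesh vertex, from which the APD90 maps are drawn.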

DISCUSSION
We presented a new ECGI method designed to recover both the depolarization and the repolarization sequence by solving a parameter identification problem. We hypothesized that this method would work better when both the endocardium and epicardium are included in the model, since important and physiologically relevant differences in both depolarization and repolarization timing exist between these surfaces. Therefore, we tested the method on two different heart meshes: one a closed surface of the epicardium alone, and the other including both epicardium and endocardium. Tests were performed using in silico data for a sinus beat and six different ventricularly paced beats. Results were very similar for both meshes in 6 cases: all the characteristics we looked at were of the same good quality. The presence of the endocardium slightly improved the ATs on the epicardium. In contrast, for the RTs the effect of including the endocardium was variable.

FIGURE 10 | Scatter plot of the ATs (left) and RTs (right) for the SR case. For each point, the x coordinate is the reference AT (resp. RT) and the y coordinate is the corresponding reconstructed AT (resp. RT). The dashed lines represent the linear fitting.
In two other cases (sinus rhythm, case 2, and septal pacing, case 7), the reconstruction of ATs with Mesh1 was poor. In the sinus rhythm case, inclusion of the endocardium (Mesh2) improved the reconstruction substantially. This was the only case where endo-epicardial gradients were significant with respect to the total activation time. In most cases, the repolarization times were better reconstructed with the epicardium only.
We showed that our method was not sensitive to the initialization. In particular, the choice of τ_out and τ_close did not impact the reconstruction of ATs, since these two parameters play a role only during repolarization. Similarly, imposing global instead of distributed parameters would not worsen the AT reconstruction. The quality of the estimation of the Mitchell-Schaeffer parameters can only be assessed through the RT and APD90 reconstructions. The CCs for RTs were smaller than those for ATs, which may suggest that the reconstruction of τ_out and τ_close was less precise than the AT reconstruction. The APD90 maps confirmed that, within a single case, we could overestimate as well as underestimate APD90 over large areas.
In general, our method underestimated the AT delays between endocardium and epicardium (Figure 6). A possible explanation is that, seen from the torso surface, the two heart surfaces are too close to be distinguished. The endocardial activity is masked by the epicardial activity, even in the case of endocardial pacing. The problems we solved with Mesh1 and Mesh2 were then effectively the same, and we ended up with similar results. This may also explain why we did not reconstruct APD differences between the epicardium and the endocardium. Another possible explanation is the difference in density between the two meshes. We chose to have about the same number of nodes in each mesh, so that the difference in the number of parameters to identify could not alone explain the results. However, this implied that Mesh2 was coarser than Mesh1. A test was made on a refined version of Mesh2 (Figure 2, right). This third mesh had 1,328 nodes and a density similar to that of Mesh1. We solved the inverse problem on this mesh for the ventricular pacing case 1. The results were very similar to those with Mesh2: the CC for ATs was 0.79 (0.77 for Mesh2) and the average CC for the BSPMs was 0.86 (0.9 for Mesh2). This test suggests that the density of the mesh does not have a strong impact on the results.
We solved the inverse problem with a constant factor A over the whole heart. However, this factor (proportional to the amplitude of the AP) may not be constant, e.g., in the case of ischemia. We attempted to use a distributed factor, which would be more relevant from a physiological point of view. In that case the method either did not converge, or converged to both positive and negative amplitudes.
So far we have not added noise to the test data. Even though the models used to create the data and to solve the inverse problem are different, adding noise would be helpful to assess the robustness of the method.
Validation data were created from a volumetric heart mesh with a much higher density than Mesh1 and Mesh2. The reference values (AT, RT) were the values on the mesh nodes. In contrast, the inverse problem on a surface leads to values that contain information averaged over a considerable volume. This may explain why the delays between reconstructed ATs were smaller than the delays between the reference ATs.

Comparison With Other Methods
Currently, most ECGI methods are based on a Laplace problem for the potential in the torso. Using the method of fundamental solutions (MFS) (Wang and Rudy, 2006) or boundary-element models (Sapp et al., 2012; Bear et al., 2018), these methods reconstruct instantaneous potential patterns on the surface of the heart. They use Tikhonov or similar forms of regularization to counter the ill-posedness of this problem. This form of regularization leads to smooth solutions for the potential distribution, while the actual pattern, especially in the case of an activation wavefront, is characterized by steep gradients. This leads to unrealistic solutions for the activation pattern, featuring large areas that appear to be activated nearly simultaneously, separated by artefactual lines of conduction block (Duchateau et al., 2017; Ravon et al., 2017). Various methods have been proposed to counter this effect, e.g., by reconstructing AT maps from local delays estimated from the whole signal morphology (Duchateau et al., 2017) or by simply smoothing the activation map (Bear et al., 2018). The authors of the latter method claim that it wipes out the artefactual block lines but not the true ones, although this has not yet been validated. The method that we proposed here does not require such postprocessing. It imposes a predefined action potential waveform, parameterized in terms of AT and the parameters of the Mitchell-Schaeffer model, and does not require further regularization. We have previously shown that our method leads to more realistic activation maps than the MFS (Ravon et al., 2017). In the larger sample of this study we also did not observe the clustering of ATs that is typical of MFS methods.
A similar parameter optimization approach, also in terms of endocardial and epicardial AT and RT, was used by van Dam et al. (2009). In contrast to our method, it still relied on a (Laplacian) regularization of the AT field, and ahead of the parameter estimation phase it performed an initial estimate based on an exhaustive search. On the other hand, it used a more realistic volume conductor model that took the boundedness and inhomogeneity of the torso into account. Unlike for our method, they showed that the choice of the initial estimates had an impact on the quality of the inverse procedure. Such an influence had also been reported by Potyagaylo et al. (2016) and Erem et al. (2014).
Others have worked on the impact of the endocardium in the case of atrial fibrillation (Schuler et al., 2017). Considering that the atria are very thin, they imposed similar TMP values on the epicardium and the endocardium. Because of the greater thickness of the ventricles, this hypothesis would not be suitable in our study. In a previous study (Potyagaylo et al., 2014) the same group proposed a local regularization of the two surfaces to localize ectopic beats. The regularization parameter could differ between the endocardium and the epicardium, which was a way to better distinguish endocardial events from epicardial events. This approach might be applicable in our case with two different factors A.

Conclusion
Our parameter optimization method reconstructs accurate activation times and, to a lesser extent, repolarization times. In some cases inclusion of the endocardium in the solution helps to improve the reconstruction of activation times, while in general it does not improve the reconstruction of repolarization times.

AUTHOR CONTRIBUTIONS
All authors have made substantial contributions to this study. GR designed the study, implemented the algorithms, analyzed, and interpreted the results, and drafted the manuscript. YC and RD helped conceive the study, provided feedback about the implementation of the methods and the interpretation of the results, and revised the manuscript. MP provided the validation data and feedback about the results, and revised the manuscript.

FUNDING
This study received financial support from the French Government as part of the Investments for the Future program managed by the National Research Agency (ANR), Grant reference ANR-10-IAHU-04. This work was granted access to the HPC resources of TGCC under the allocation x2016037379 made by GENCI.