ORIGINAL RESEARCH article
Efficient Numerical Solution of the EMI Model Representing the Extracellular Space (E), Cell Membrane (M) and Intracellular Space (I) of a Collection of Cardiac Cells
- 1 Simula Research Laboratory, Lysaker, Norway
- 2 Department of Informatics, University of Oslo, Oslo, Norway
The EMI model represents excitable cells in a more accurate manner than traditional homogenized models at the price of increased computational complexity. The increased complexity of solving the EMI model stems from a significant increase in the number of computational nodes and from the form of the linear systems that need to be solved. Here, we will show that the latter problem can be solved by careful use of operator splitting of the spatially coupled equations. By using this method, the linear systems can be broken into sub-problems that are of the classical type of linear, elliptic boundary value problems. Therefore, the vast collection of methods for solving linear, elliptic partial differential equations can be used. We demonstrate that this enables us to solve the systems using shared-memory parallel computers. The computing time scales perfectly with the number of physical cells. For a collection of 512
Traditionally, numerical simulations of excitable tissue have been performed using homogenized mathematical models. For cardiac tissue, the bidomain model and the associated monodomain model have been very popular and provided important insights into the electrochemical processes underpinning every heartbeat. Introductions to these models can be found in, e.g., Refs. 1 and 2 and numerical methods used to solve the models are presented in numerous papers; see, e.g., Refs. 3–7. In the homogenized models, the extracellular space, the cell membrane and the intracellular space all exist everywhere in the computational domain. This approach has proven to be very useful in studying effects on a relatively large length scale (millimeters), but has obvious limitations when the length scale approaches the size of a cell (micrometers). The main reason for this is that the cell itself is missing in the homogenized models.
As an alternative to the homogenized models, we have recently presented and applied a cell-based model; see, e.g., Refs. 8–10. In this model, the extracellular domain (E), the cell membrane (M) and the intracellular domain (I) are all present in the mathematical model and it is therefore referred to as the EMI model. The EMI model is motivated by earlier work where homogenization has been avoided; see, e.g., Refs. 11–19.
The obvious advantage of the EMI approach over the homogenized counterparts is that the cell itself is present in the model. This enables the modeler to change cell membrane properties locally (e.g., varying ion channel densities along the cell membrane) which may be of importance for the conduction velocity; see, e.g., Refs. 9, 20 and 21. Furthermore, the presence of the cell allows for study of the effect of changes to the cell size and shape which may be of great importance; see, e.g., Refs. 22–25. For instance, the T-tubule system can only be represented in an averaged manner in the classical bidomain/monodomain models, but in the EMI model it would be possible to represent the T-tubule system explicitly albeit at the cost of handling complex geometries.
On the other hand, the obvious disadvantage of the EMI approach is the computational complexity of the model. One issue is that in order to represent the geometry of the cell and the intercalated discs to a reasonable degree of accuracy, we need a very fine computational mesh. For the finite difference computations we have conducted, the mesh resolution is usually between 1 and 4 μm. Using a finite element method, the number of nodes may be reduced by applying an adaptive mesh, but the number of grid points remains high. For the bidomain/monodomain models a typical mesh resolution is around 0.25 mm (see, e.g., Ref. 26). For a cube with sides measuring 1 cm, the EMI model (with
The EMI model is useful for gaining detailed insights into the dynamics close to excitable cells. It may also be used to study small collections of specialized cells like the sinus node, AV node (atrioventricular node), Purkinje junctions, etc. For instance, the volume of the sinus node (see Ref. 27) is about 80 mm3 and a mesh with
The mouse heart is frequently used as an experimental model to study cardiac electrophysiology. The size of the sinus node and AV node for this animal is much smaller than for humans; the volume of the sinus node of a mouse is about 1.5 mm
Another difficulty associated with the EMI model is the characteristics of the linear system arising from the discretization of the model. The system turns out to be difficult to solve and it does not fall into a standard category of systems where well-developed methods and software can be applied. The method used to solve the system in Refs. 8 and 10 is based on a temporal operator splitting algorithm where the non-linear part is handled in one step, and then the linear equations arising from E, M and I are spatially coupled in one common linear system. Attempts to split the system into separate equations for E, M and I, and solve these systems in a sequential manner, lead to severe restrictions on the time step
The remainder of this paper is organized as follows: First, in Section 2, we present the EMI model for a single cell and a finite difference approach for discretizing and solving the linear model equations in a coupled manner. We then describe the spatial operator splitting algorithm for splitting the EMI model into separate equations for the extracellular space, the membrane and the intracellular space, and outline a finite difference discretization for the splitting approach. In Section 2.5, we investigate the accuracy of the spatial splitting method by comparing the solution of the splitting algorithm to the solution of the spatially coupled system of equations. Next, in Section 3, we present the EMI model for connected cells and explain how the spatial splitting approach can be extended to this case. We also describe a finite difference discretization of the coupled system and the spatial splitting approach, and investigate the accuracy of the splitting method for a collection of connected cells. In Section 4, we describe a parallel shared-memory implementation of the spatial splitting algorithm and report performance results of the parallel solver. Finally, in Sections 5 and 6 the results of the paper are discussed and summarized.
2. The Single Cell Case
We begin by describing a spatial splitting method for the EMI model for a single cell surrounded by an extracellular space, as illustrated in Figure 1. This splitting method may also be straightforwardly extended to collections of isolated cells, as illustrated in Figure 2. However, when collections of cells are connected by gap junctions (see Figure 6), the splitting method must be extended to account for the gap junctional coupling between cells. This extension is described in Section 3.
FIGURE 1. Illustration of a two-dimensional version of the domain for the EMI model for a single cell. The domain consists of an intracellular domain,
FIGURE 2. Illustration of the EMI model domain for a collection of isolated cells. The spatial splitting approach described in Section 2.2 for a single cell may also be applied for this type of cell collection.
2.1. The EMI Model
In the EMI model for a single cell, as illustrated in Figure 1, we consider an intracellular domain,
The current density
On the outer boundary of the extracellular space, we apply the homogeneous Dirichlet boundary condition Eq. 3 on a part of the boundary,
2.2. The Coupled Discrete Model
In order to illustrate how the linear part of the EMI model Eqs 1–8 can be discretized and solved in a coupled manner, we describe below how to set up a finite difference discretization of the full EMI system. However, because of the possibly nonlinear terms included in F and
2.2.1. Spatial and Temporal Splitting
The challenge we address in the present paper is to split the spatial coupling of the EMI model such that a solution algorithm can be founded on well-studied equations. In addition, we use temporal splitting of Eq. 7.
2.2.2. Step 1: Membrane Dynamics
In the discrete model, we seek discrete approximations
for a time step of
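In code, Step 1 amounts to m explicit substeps per time step, applied independently at every membrane node. The sketch below is a minimal stand-in: it assumes a FitzHugh–Nagumo-type right-hand side rather than the cardiac membrane model actually used in the paper, so the values are illustrative; only the substepping structure matches the description above.

```python
import numpy as np

def step1_membrane(v, s, dt, m):
    """Advance the membrane ODE system by m forward Euler substeps
    of size dt/m.  The FitzHugh-Nagumo right-hand side below is a
    simple stand-in for the cardiac membrane model used in the paper."""
    h = dt / m
    for _ in range(m):
        dv = v - v**3 / 3.0 - s            # stand-in ionic current term
        ds = 0.08 * (v + 0.7 - 0.8 * s)    # stand-in gating dynamics
        v = v + h * dv
        s = s + h * ds
    return v, s

# many membrane nodes can be advanced at once with NumPy arrays,
# since the ODE systems at different nodes are independent
v0 = np.full(100, -1.0)
s0 = np.zeros(100)
v1, s1 = step1_membrane(v0, s0, dt=0.02, m=4)
```

Because the nodes do not couple in this step, the same call vectorizes over an arbitrary number of membrane nodes, which is what later makes the step easy to parallelize.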
2.2.3. Step 2: Coupled Linear EMI System
In the next step of the temporal operator splitting procedure, the remaining EMI system Eqs 1–7 with
FIGURE 3. Illustration of the computational mesh used in the coupled (A) and split (B) versions of the finite difference discretization of the EMI model. The mesh consists of three types of nodes: extracellular nodes marked by ×, intracellular nodes marked by ○, and membrane nodes marked by ⊗.
For the intracellular nodes, there is one unknown,
Likewise, for the majority of the extracellular nodes, there is one unknown,
For the nodes located at the membrane, there are three unknowns,
Collecting the equations of the discrete version of the linear part of the EMI model, we end up with a spatially coupled system of equations consisting of one equation for every computational node plus two extra equations for each membrane node.
2.3. Robin/Neumann Spatial Splitting of the EMI Model
The coupled discrete version of the EMI model may be used to find accurate solutions of the system (see, e.g., Refs. 8–10). However, in some cases it would be much more convenient to be able to rewrite the EMI system as a set of more standard problems that can be solved individually using existing well-developed solution methods for such standard problems. Linear systems arising from discretizations of elliptic partial differential equations have been under intense scrutiny for decades and the efforts have resulted in optimal solution strategies; see, e.g., Ref. 30. Our aim is therefore to introduce a spatial operator splitting algorithm for the EMI model that results in sub-problems formulated in terms of standard elliptic problems, thus enabling the use of optimal solvers.
In this section, we describe a method for constructing such a splitting of the EMI model. In this approach, for each time step, we will solve the equations of the intracellular and extracellular spaces separately as standard Laplace equations with Robin, Neumann or Dirichlet boundary conditions on the boundary of the domains. The algorithm described in this section is summarized in Algorithm 1.
2.3.1. Step 1: Membrane Dynamics
In the first step of the splitting procedure, we solve the nonlinear part of the EMI model just like in the coupled discrete model. In other words, we update the solution of the membrane potential, v, and the remaining membrane state variables, s, by solving the system of ordinary differential equations
2.3.2. Step 2: Intracellular System
In the next step of the approach, we find an approximation of the intracellular potential,
In Eq. 7, we discretize the time derivative using the implicit approximation
Inserting Eq. 16 yields
and inserting the definition of
This defines a Robin boundary condition at
to find an approximation of
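The derivation sketched above can be summarized in compact form. This is a hedged reconstruction using the standard EMI notation ($v = u_i - u_e$ on $\Gamma$, membrane current density $I_m = n_i \cdot \sigma_i \nabla u_i$); the precise signs and superscripts follow the paper's Eqs. 6 and 7 and may differ in detail:

```latex
% implicit Euler step for C_m dv/dt = I_m (ionic currents handled in Step 1):
C_m \frac{v^n - v^{n-1}}{\Delta t} = I_m^n,
\qquad I_m^n = n_i \cdot \sigma_i \nabla u_i^n .
% substituting v^n = u_i^n - u_e (latest available u_e) gives a Robin condition
% for the intracellular problem alone:
n_i \cdot \sigma_i \nabla u_i^n - \frac{C_m}{\Delta t}\, u_i^n
  = -\frac{C_m}{\Delta t} \left( u_e + v^{n-1} \right)
  \quad \text{on } \Gamma .
```

The key point is that everything on the right-hand side is known from the previous step, so the intracellular problem becomes a standard Laplace problem with a Robin boundary condition.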
2.3.3. Step 3: Extracellular System
In the third step of the procedure, we update the extracellular potential by solving Eq. 1 with boundary conditions given by Eqs 3–5. Here,
from the solution of
to find an approximation
2.3.4. Iteration for the Intracellular-Extracellular Coupling
After solving Step 2 and Step 3 once, we obtain a new approximation
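Schematically, this is a fixed-point iteration alternating between the two solves. The sketch below replaces Steps 2 and 3 with hypothetical scalar contractions (`solve_intracellular`, `solve_extracellular` are stand-ins, not the paper's solvers) purely to show the structure and why more iterations improve the approximation:

```python
def solve_intracellular(ue):
    # stand-in for Step 2: returns u_i given the latest u_e
    # (the real step solves a Laplace problem with a Robin BC)
    return 0.5 * ue + 1.0

def solve_extracellular(ui):
    # stand-in for Step 3: returns u_e given the latest u_i
    return 0.5 * ui - 1.0

def couple(n_it):
    ue = 0.0                      # initial guess for the extracellular potential
    for _ in range(n_it):         # N_it outer iterations, as in the paper
        ui = solve_intracellular(ue)
        ue = solve_extracellular(ui)
    return ui, ue

ui1, ue1 = couple(1)   # a single pass, N_it = 1
ui5, ue5 = couple(5)   # five passes, N_it = 5, closer to the coupled solution
```

For these stand-in maps the fixed point is (u_i, u_e) = (2/3, -2/3), and the iterates approach it geometrically; in the EMI setting the analogous limit is the solution of the spatially coupled system.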
2.3.5. Step 4: Updating the Membrane Potential
In the final step of the procedure, the membrane potential, v, is updated using the solutions of
2.4. Discrete Version of the Splitting Approach
An illustration of the computational mesh used in each of the steps of the finite difference discretization of the splitting method is found in Figure 3B. In the discrete version of the splitting approach, we solve Step 1 just like for the coupled version by taking m forward Euler steps of size
2.5. Numerical Comparison of Solution Methods
In order to investigate the accuracy of the splitting method introduced above, we set up a simple test case with a single cell surrounded by an extracellular space. The cell is located in the center of the domain, and the size of the entire domain
TABLE 1. Default parameter values used in the simulations, based on Ref. 9.
Figures 4 and 5 show a comparison of the solutions of the coupled approach and the splitting approach applied to this simple test problem. The left panel shows the extracellular potential,
FIGURE 4. Comparison of the solutions of the coupled (C) and splitting (S) approaches for the EMI model for a single cell. The left panel shows ue and the right panel shows ui . Furthermore, the upper panel shows the solution of the C approach, the middle panel shows the solution of the S approach, and the lower panel shows the absolute value of the difference between the two solutions. The solutions are collected at t = 3.5 ms, and the plots show the solutions in a sheet in the center of the domain in the z‐direction. The parameters used in the simulations are specified in Table 1, and we use Nit = 1 in the S approach.
FIGURE 5. Comparison of the solutions of the coupled and splitting approaches for the EMI model like in Figure 4, except that we use five iterations, Nit = 5, instead of one in the splitting approach.
In Table 2, we report the difference between the solutions of the two approaches in the form of the relative, discrete
TABLE 2. Difference between the solution of the coupled approach (
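Assuming the standard definition of the relative discrete L2 difference (the paper's precise mesh weighting may differ), the quantity reported in Table 2 can be computed along these lines:

```python
import numpy as np

def relative_l2_difference(u_coupled, u_split):
    """Relative discrete L2 difference between the coupled (reference)
    solution and the splitting solution, evaluated at the same nodes."""
    return np.linalg.norm(u_coupled - u_split) / np.linalg.norm(u_coupled)

# hypothetical nodal solutions, for illustration only
u_c = np.linspace(-80.0, 20.0, 1000)          # e.g., potentials in mV
u_s = u_c + 1e-3 * np.random.default_rng(0).standard_normal(1000)
err = relative_l2_difference(u_c, u_s)
```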
3. Connected Cells
As observed in the previous section, the proposed splitting method for the EMI model for a single cell appears to provide accurate solutions for reasonable time step sizes. However, in cases where cells are connected by gap junctions, as illustrated in Figure 6, the splitting approach needs to be extended to account for the coupling between cells through the gap junctions. Again, the aim of the splitting is to obtain standard uncoupled Laplace problems in each domain and thereby allow solution of the originally coupled system by solving decoupled systems for the individual cells in parallel. In this section, we describe such an extension of the splitting method.
FIGURE 6. Two-dimensional illustration of the EMI model domain for two connected cells. The domain consists of two cells, Ω1i and Ω2i , with cell membranes denoted by Γ1 and Γ2, respectively. The cells are connected to each other by the intercalated disc, Γ1,2, and surrounded by an extracellular space, denoted by Ωe. The figure is reused from Ref. 48 with permission from the publisher.
3.1. The EMI Model for Connected Cells
A two-dimensional version of the domain for the EMI model for two connected cells is illustrated in Figure 6. Two cells,
for each cell k and each neighboring cell
3.2. Discrete Coupled Version of the Model for Connected Cells
A two-dimensional version of the computational mesh used in the finite difference discretization of the EMI model for two connected cells is illustrated in Figure 7A. We observe that in this case, there are four different types of nodes: nodes located in the extracellular space, marked by
FIGURE 7. Illustration of the computational mesh used in the coupled (A) and split (B) versions of the finite difference discretization of the EMI model for two connected cells. The mesh consists of four types of computational nodes: nodes located in the extracellular domain, marked by
In the coupled discrete version of the EMI model for connected cells, we split the EMI model into a nonlinear part and a linear part using the same temporal operator splitting procedure as for the discrete coupled model for a single cell (see Section 2.2). Moreover, the first nonlinear step, solving the ordinary differential equations describing the membrane dynamics, is solved in exactly the same manner as for the single cell case (see Section 2.2.2).
Furthermore, in the finite difference discretization of the linear part of the EMI model we use the exact same approach to discretize the equations for the intracellular, extracellular and membrane nodes as for the single cell case (see Section 2.2.3). The only exceptions requiring special attention in the case of connected cells are the extracellular nodes located directly below or above the intercalated disc (see Figure 7A). For these nodes, we enforce a no-flow boundary condition and set the extracellular potential equal to the potential in the purely extracellular nodes located directly above or below the corner nodes.
The nodes located at the intercalated disc are treated in exactly the same manner as the membrane nodes except that
where a first-order finite difference approximation of
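Such a first-order, one-sided approximation of the normal flux can be sketched as follows. The conductivity and mesh size below are illustrative values only; the real computation applies this kind of stencil at intercalated disc nodes of the 3D mesh:

```python
sigma = 4.0        # conductivity, illustrative value only
h = 2e-4           # mesh resolution, illustrative value only

def normal_flux(u_boundary, u_interior):
    """First-order one-sided difference approximation of the outward
    flux sigma * du/dn at a boundary node, using the nearest interior
    node a distance h inside the domain."""
    return sigma * (u_boundary - u_interior) / h

# sanity check against a linear potential u(x) = a*x, for which the
# one-sided difference is exact: the flux is sigma * a
a = 3.0
x_b, x_in = 1.0, 1.0 - h
approx = normal_flux(a * x_b, a * x_in)
exact = sigma * a
```

For smooth non-linear potentials the approximation carries an O(h) error, consistent with the first-order accuracy stated above.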
3.3. Splitting Approach for the EMI Model for Connected Cells
We now want to split the intracellular subdomains in order to obtain classical elliptic equations defined on small domains. In this splitting approach, the linear part of the EMI model is for each time step split into standard Laplace problems with Robin, Neumann or Dirichlet boundary conditions. These problems can be solved separately in each of the domains
Steps 1, 3, and 4 of the splitting approach are exactly the same for connected cells as for the single cell case (see Section 2.3), but Step 2 is extended to account for the cell coupling at the intercalated discs,
3.3.1. Step 2: Intracellular System
In Step 2 of the splitting approach for connected cells, we update the intracellular potential by solving Eq. 28 in each of the intracellular domains,
We define this boundary condition in an iterative manner, where we for each iteration use an approximation of
This defines an approximation of the Neumann boundary condition (Eq. 35) for
3.4. Discrete Version of the Splitting Approach
Figure 7B shows an illustration of the computational mesh used in each of the steps of the finite difference discretization of the splitting approach for connected cells. The discrete versions of steps 1, 3, and 4 of the procedure are set up in the same manner as for a single cell (see Section 2.4). For each cell and in each iteration of Step 2, we consider the nodes representing the intercalated disc, the membrane and the purely intracellular points, and we have one equation and one unknown,
3.5. Numerical Comparison of Solution Methods
In order to investigate the accuracy of the splitting method for the EMI model for connected cells, we extend the simple test case introduced in Section 2.5 to include a collection of 5
FIGURE 8. Two-dimensional illustration of the geometry used for a single cell in the simulations of collections of connected cells. The intracellular domain is a composition of the subdomains
TABLE 3. Specification of the size of the subdomains of a single cell used in the simulations of connected cells (see Figure 8).
Note that in the current implementation of the model, the intracellular and extracellular conductivities,
In Figure 9, we compare the solutions of the coupled and splitting approaches applied to this test problem. In the splitting approach, we use five inner and outer iterations
FIGURE 9. Comparison of the solutions of the coupled (C) and splitting (S) approaches for the EMI model for connected cells. The left panel shows
Table 4 compares the solution of the splitting approach for different values of
TABLE 4. Difference between the solution of the coupled approach (
In Figure 10, we investigate how the absolute value of the difference between the intracellular potentials, computed using the two solution methods, depends on the time step,
FIGURE 10. Absolute values of the difference in
Figure 11 shows a similar comparison of the solutions of the membrane potential, v, at the center of the membrane of the center cell as a function of time. We consider three different time steps
FIGURE 11. Comparison of v at the center of the membrane of the center cell as a function of time computed using the coupled and splitting approaches for different combinations of the time step,
In Figure 12, we examine how the maximum difference in the computed intracellular potentials between the coupled and splitting approaches decreases as the number of iterations used in the splitting approach increases. In the left panel, we fix the number of inner iterations,
FIGURE 12. Maximum absolute difference in
4. Shared-Memory Parallel Implementation of the Splitting Algorithm
In this section, we will describe a parallel implementation of the splitting algorithm. The parallel implementation targets a hardware platform with a shared-memory system. Such parallel platforms are currently widely available, e.g., a server or desktop computer that has multiple sockets each with tens of CPU cores. The entire memory is composed of the memory domains that are directly connected to the sockets, allowing each CPU core to access any address in the whole memory space. The shared address space considerably eases the parallelization. However, care in programming is needed to maximize data locality, because accessing non-local parts of the memory is more costly than accessing the local part.
Important details of programming and performance enhancement will be discussed below, separately for the three decoupled numerical components of the splitting algorithm. An underlying principle is to use, whenever possible, existing software libraries for the different components. Such software libraries either use multiple threads to parallelize the computation internally, or are serially invoked inside a loop of parallel iterations.
4.1. The Intracellular Subdomains
Recall that the intracellular space consists of the individual excitable cells. Due to the splitting algorithm, the intracellular systems associated with the cells can be solved independently. This immediately provides cell-level parallelism, because the individual intracellular systems can be assigned to the different CPU cores. With a spatial resolution of
It is possible to use several threads to parallelize each forward-backward substitution step, but the amount of parallelism at this level is limited. Such a parallelization approach is thus not recommended unless the number of available CPU cores greatly exceeds the number of cells.
For computing the LU factorization and performing forward-backward substitutions, we have chosen the serial version of the SuperLU library. We assign each of the linear systems
Apart from the aforementioned advantage that the LU factorization of each
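As a sketch of this factor-once, solve-many pattern, the following uses `scipy.sparse.linalg.splu` (which wraps the same SuperLU library) on a small stand-in matrix; the real per-cell matrices come from the 3D finite difference discretization described above, and in the parallel solver one such factorization object is held per cell:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# a small stand-in for one cell's intracellular system: a 1D Laplacian
# with a Robin-like diagonal shift, in the CSC format SuperLU expects
n = 200
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

lu = spla.splu(A)          # LU factorization: computed once per cell

# time stepping: since the matrix never changes, only the cheap
# forward-backward substitution is repeated at every time step
x = np.zeros(n)
for step in range(10):
    b = np.random.default_rng(step).standard_normal(n)  # stand-in RHS
    x = lu.solve(b)
```

The factorization cost is thus amortized over the entire simulation, and the per-time-step work reduces to independent triangular solves that parallelize trivially over cells.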
One remark is in order here. Although we discuss in this paper a two-dimensional collection of cells, it is important to keep in mind that all the cell geometries are in 3D and all cells are surrounded by a 3D extracellular space. In fact, the EMI model for a mesh of cells can only be applied in 3D because in 2D the extracellular space would become disconnected.
4.2. The Extracellular Domain
The splitting algorithm requires the solution of a 3D Laplace equation for the entire extracellular domain with Neumann boundary conditions on the cell membranes. With a careful treatment of the boundary conditions, the finite difference discretization produces a symmetric and positive-definite matrix for the linear system associated with the extracellular domain. These properties of the matrix permit the use of a conjugate gradient (CG) iterative solver with an algebraic multigrid (AMG) preconditioner to solve the linear system. Under optimal conditions, the CG+AMG solver should require a constant number of iterations, irrespective of the number of degrees of freedom in the linear system, resulting in a solution time that grows linearly with the size of the linear system.
We used the CG and AMG implementations provided by ViennaCL in our simulations, as ViennaCL supports shared-memory parallelization via OpenMP. The termination criterion
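A minimal stand-in for this solve, using plain (unpreconditioned) conjugate gradients from SciPy instead of ViennaCL's CG+AMG, on a small 3D Laplacian with homogeneous Dirichlet conditions (the actual extracellular matrix additionally carries the membrane Neumann conditions described above):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_3d(n):
    """Standard 7-point finite difference Laplacian on an n x n x n grid
    with homogeneous Dirichlet conditions; symmetric positive definite,
    a simple stand-in for the extracellular system described above."""
    L1 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(sp.kron(L1, I), I)
            + sp.kron(sp.kron(I, L1), I)
            + sp.kron(sp.kron(I, I), L1)).tocsr()

n = 12
A = laplacian_3d(n)
b = np.ones(n**3)
# plain CG; the paper combines CG with an AMG preconditioner, which is
# what keeps the iteration count bounded as the grid is refined
x, info = spla.cg(A, b)
```

Without the AMG preconditioner the iteration count grows with the grid resolution, which is exactly the behavior the CG+AMG combination is meant to avoid.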
4.3. The Membrane
On each mesh node along the membrane, the ODE system given in Step 1 of Algorithm 2 is solved with a forward Euler scheme using a substepping time step
When initializing the memory for the ODE-model’s state variables and parameters, we perform a so-called “first touch” to ensure that all CPU cores work on the parts of the memory that are located closest to the cores in terms of connectivity. This step is important for achieving proper scaling on modern servers where the system memory is typically split into multiple NUMA (non-uniform memory access) domains.
4.4. Hardware Platform Setup
Two hardware testbeds were used to measure the performance of our parallel solver that implements the splitting algorithm. The specifications are listed in Table 5. The first testbed (System A) is a dual-socket server housing two AMD EPYC 7601 32-core CPUs, each configured with 8-channel memory operating at 2666 MT/s. This server has 2 TiB of memory in total. The second testbed (System B) is a quad-socket server with four Intel Xeon Platinum 8168 24-core CPUs, each configured with 6-channel memory operating at 2666 MT/s. The total memory size is 384 GiB. Notably, System A has four NUMA domains per socket,2 whereas System B only has one NUMA domain per socket. Both systems support 2-way simultaneous multi-threading (SMT), presenting two logical processors to the operating system for each physical CPU core. Our hardware test systems are arguably at the high end of the present-day CPU server market and thus allow us to explore the limits of what can be achieved in a shared-memory environment. Simulations with an even larger number of cells, or a finer resolution inside each cell, will eventually require an enhanced parallel simulator capable of utilizing multiple nodes on a distributed-memory system. Speedup due to increasing the number of nodes can be expected as long as the communication overhead does not exceed the computation time per node.
Our solver is mainly written in the Python programming language, but the most performance-critical parts are executed in C code which we call from Python via the ctypes module. The resulting multi-threaded parallel solver permits rapid prototyping while also being reasonably performant. In the results presented below, we have set environment variables to impose a “thread binding” that fixes the mapping of OpenMP threads to the logical processors such that the same core will work on the same part of the memory for every time step. All logical processors were assigned one thread each, resulting in 128 threads for System A and 192 threads for System B.
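The ctypes bridge between Python and C works along the following lines. Since the solver's own C routines are not reproduced here, the sketch loads the C standard library as a stand-in and declares an explicit signature before calling, which is the same pattern used to call any custom shared library:

```python
import ctypes

# CDLL(None) loads the symbols of the running process on POSIX systems,
# which includes the C standard library; a real solver would instead do
# ctypes.CDLL("./libsolver.so") for its own compiled C code
libc = ctypes.CDLL(None)

# declare the C signature explicitly so ctypes converts arguments and
# the return value correctly (here: long labs(long))
libc.labs.argtypes = [ctypes.c_long]
libc.labs.restype = ctypes.c_long

result = libc.labs(-42)
```

The performance-critical loops then run entirely in compiled C, while Python retains control of setup, orchestration and I/O, which is the rapid-prototyping trade-off described above.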
4.5. Performance Results
The mesh resolution was set to
Tables 6 and 7 show the time usage per time step needed for 16 cells up to 65,536 and 16,384 cells for Systems A and B, respectively (the memory size on System B is not large enough to simulate
TABLE 6. Time required to solve a time step of size
TABLE 7. Time required to solve a time step of size
In Table 8, we consider the case of a 4 μm resolution for System A, and we are in this case able to perform simulations of up to 512
TABLE 8. Time required to solve a time step of size
The mesh resolution was set to
Accurate modeling of excitable cells is important in order to understand how collections of such cells work. The EMI model provides a promising framework for such simulations, but it is challenging from a computational point of view. In this paper, we have developed an optimal splitting-based method that scales perfectly with the number of cells in the simulations. The main result of the paper is that we have been able to formulate a solution strategy that relies on intermediate steps that all are covered by standard theory in scientific computing.
We have seen in Table 8 that a collection of 512
The volume of the mouse ventricles is about 100 mm3 (see Ref. 42) or about 440 times larger than the mouse sinus node. In order to be able to simulate mouse ventricles within a 24-hour wall-clock time frame, we therefore would need a computer that is about 440 times faster than System A above, assuming that linear scaling would hold for even larger systems. This is actually well within today’s technological capability, because the world’s most powerful supercomputer at the time of writing, named Fugaku (see, e.g., Refs. 43 and 44), consists of 158,976 computers, each more powerful than System A (if we consider the available memory bandwidth). Although theoretically feasible, it remains to be seen whether a full action potential of a complete mouse heart can be simulated based on the EMI model.
The computing time may be reduced by applying adaptive time stepping and the number of nodes may be reduced by using an adaptive finite element method instead of the finite difference method. However, the finite difference scheme is exceptionally well suited for the computer systems A and B and fewer mesh points in the finite element method could therefore result in increased computing time per computational node (see Ref. 8). A possible direction for future work is to develop a distributed-memory parallel implementation of the splitting algorithm, so that the number of cells to be simulated is not limited by the computing speed and/or memory capacity of a single shared-memory platform.
A weakness of the current implementation is that, whereas the finite difference method is well suited for representing very simple geometries, it is less suited for complex geometrical shapes. For instance, representing real geometries of the heart, including the fiber orientation of the heart muscle, is necessary in many applications; see, e.g., Refs. 1, 45 and 46. In order to analyze problems with complex geometries using the EMI model, we will need to use the finite element method. Also, representing realistic geometries of individual cells is very challenging using the finite difference method, but will be easier using the finite element method. A finite element method based on the operator splitting scheme derived in the present report is currently under development. This code will clearly offer greater flexibility with respect to geometries, but the linear systems are less structured and therefore harder to solve in an optimal manner.
We have introduced a splitting scheme for the linear part of the EMI equations that allows us to use well developed theory for solving linear systems arising from the discretization of linear, elliptic partial differential equations. The resulting method is stable and is optimal in the sense that the CPU efforts scale linearly with the number of computational nodes. The method therefore enables simulations using far larger computing facilities than has been available in the present work.
Data Availability Statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.
Author Contributions
Operator splitting algorithm: KJ, AT. Parallelization: KH, XC. Programming: KJ, KH. Simulations: KJ, KH. All authors contributed to the writing of the paper.
Funding
The research presented in this paper has benefited from the Experimental Infrastructure for Exploration of Exascale Computing (eX3), which is financially supported by the Research Council of Norway under contract 270053.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
1Note that in the first iteration of this procedure, the approximation of the capacitive current density,
2This stems from the CPU package comprising 4 dies, each with 8 cores and two memory channels.
10. Tveito A, Jæger KH, Lines GT, Paszkowski Ł, Sundnes J, Edwards AG, et al. An evaluation of the accuracy of classical models for computing the membrane potential and extracellular potential for neurons. Front Comput Neurosci (2017). 11:27. doi:10.3389/fncom.2017.00027
13. Ying W, Henriquez CS. Hybrid finite element method for describing the electrical response of biological cells to applied fields. IEEE Trans Biomed Eng (2007). 54(4):611–20. doi:10.1109/tbme.2006.889172
14. Agudelo-Toro A, Neef A. Computationally efficient simulation of electrical activity at cell membranes interacting with self-generated and externally imposed electric fields. J Neural Eng (2013). 10(2):026019. doi:10.1088/1741-2560/10/2/026019
15. Roberts SF, Stinstra JG, Henriquez CS. Effect of nonuniform interstitial space properties on impulse propagation: a discrete multidomain model. Biophys J (2008). 95(8):3724–37. doi:10.1529/biophysj.108.137349
17. Stinstra J, MacLeod R, Henriquez C. Incorporating histology into a 3D microscopic computer model of myocardium to study propagation at a cellular level. Ann Biomed Eng (2010). 38(4):1399–414. doi:10.1007/s10439-009-9883-y
21. Tsumoto K, Ashihara T, Haraguchi R, Nakazawa K, Kurachi Y. Roles of subcellular Na+ channel distributions in the mechanism of cardiac conduction. Biophys J (2011). 100(3):554–63. doi:10.1016/j.bpj.2010.12.3716
22. Louch WE, Sejersted OM, Swift F. There goes the neighborhood: pathological alterations in t-tubule morphology and consequences for cardiomyocyte Ca2+ handling. BioMed Res Int (2010). 2010:503906. doi:10.1155/2010/503906
24. Veeraraghavan R, Salama ME, Poelzing S. Interstitial volume modulates the conduction velocity-gap junction relationship. Am J Physiol Heart Circ Physiol (2012). 302(1):H278–86. doi:10.1152/ajpheart.00868.2011
27. Csepe TA, Zhao J, Hansen BJ, Ning L, Lidiya VS, Lim P, et al. Human sinoatrial node structure: 3D microanatomy of sinoatrial conduction pathways. Prog Biophys Mol Biol (2016). 120(1–3):164–178. doi:10.1016/j.pbiomolbio.2015.12.011
28. Liu J, Noble PJ, Xiao G, Mohamed A, Dobrzynski H, Boyett MR, et al. Role of pacemaking current in cardiac nodes: insights from a comparative study of sinoatrial node and atrioventricular node. Prog Biophys Mol Biol (2008). 96(1–3):294–304. doi:10.1016/j.pbiomolbio.2007.07.009
36. Bangerth W. Finite element methods in scientific computing. Available from: https://www.math.colostate.edu/bangerth/videos/676/slides.34.pdf (Accessed July 1, 2020).
39. Rupp K, Tillet P, Rudolf F, Weinbub J, Morhammer A, Grasser T, et al. ViennaCL—Linear algebra library for multi- and many-core architectures. SIAM J Sci Comput (2016). 38(5):S412–39. doi:10.1137/15m1026419
41. Hake J, Finsberg H, Hustad KG, George B. Gotran: general ODE TRANslator. Available from: https://github.com/ComputationalPhysiology/gotran (Accessed July 1, 2020).
43. TOP500 Supercomputer Sites. Available from: https://www.top500.org/lists/2020/06/ (Accessed July 1, 2020).
44. Supercomputer Fugaku. Available from: https://www.fujitsu.com/global/about/innovation/fugaku/index.html (Accessed July 1, 2020).
45. Roth BJ, Beaudoin DL. Approximate analytical solutions of the bidomain equations for electrical stimulation of cardiac tissue with curving fibers. Phys Rev E (2003). 67(5):051925. doi:10.1103/physreve.67.051925
Keywords: electrophysiological model, cardiac conduction, cell modeling, finite difference method, operator splitting algorithm
Citation: Jæger KH, Hustad KG, Cai X and Tveito A (2021) Efficient Numerical Solution of the EMI Model Representing the Extracellular Space (E), Cell Membrane (M) and Intracellular Space (I) of a Collection of Cardiac Cells. Front. Phys. 8:579461. doi: 10.3389/fphy.2020.579461
Received: 16 September 2020; Accepted: 27 October 2020;
Published: 13 January 2021.
Edited by: Aljaz Godec, Max Planck Institute for Biophysical Chemistry, Germany
Reviewed by: Raymond Spiteri, University of Saskatchewan, Canada
Simone Scacchi, University of Milan, Italy
Copyright © 2021 Jæger, Hustad, Cai and Tveito. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Karoline Horgmo Jæger, email@example.com