Introduction to Numerical Relativity

Numerical Relativity is a multidisciplinary field combining relativity, magneto-hydrodynamics, astrophysics and computational methods, among others, with the aim of numerically solving highly dynamical, strong-gravity scenarios for which no approximation schemes are available. Here we describe some of the foundations of the field, starting from the covariant Einstein equations and how to write them as a well-posed system of evolution equations, and discussing the different formalisms, coordinate conditions and numerical methods commonly employed nowadays for the modeling of gravitational wave sources.


I. INTRODUCTION
General Relativity is the theory that identifies gravity as the curvature of a four-dimensional space-time manifold. The consequences of this identification deeply changed our conception of Nature. From the physics point of view, Relativity introduced new ideas: that time and space are not absolute but depend on the observer, that the effects of gravity propagate at the speed of light, and that energy and matter are equivalent and can modify the structure of both space and time, among others. From the mathematical point of view, the main consequence is that gravity can be described with the tools of differential geometry, where the basic object representing a manifold is the metric g_ab, which allows us to compute distances between neighboring points. The famous Einstein equations describe the dynamics of the four-dimensional space-time metric and how it is deformed by a given mass-energy distribution. On the other hand, the Bianchi identities of differential geometry ensure that the divergence of the Einstein tensor vanishes, implying the conservation of the stress-energy tensor (i.e., energy and linear momentum conservation), which describes how matter moves in a curved spacetime.
One of the greatest achievements of General Relativity was the prediction of gravitational waves: space-time deformations, produced by the acceleration of masses, which behave like waves as they propagate away from the sources. Gravitational waves are essentially unscattered between emission and detection, thereby giving direct information about the sources powering these phenomena. Precisely because of the weak interaction of these waves with matter, their existence was initially only confirmed indirectly, through observations of the orbital dynamics of binary pulsars [1,2]. However, current kilometer-scale interferometric gravitational wave (GW) detectors, such as the Advanced LIGO (aLIGO) [3] and Advanced Virgo (ad-Virgo) [4] facilities, have since 2015 directly detected gravitational waves in the kiloHertz frequency regime, consistent with the merger of binary black holes and binary neutron stars [5]. Further improvements to these detectors, as well as new ones added to the array of GW observatories, will make GW observations routine within the next few years. These new observations offer a new way to study some of the most energetic and exotic processes in the universe, and start a new era of gravitational wave astronomy that will inevitably lead to unprecedented discoveries and breakthroughs not only in Astrophysics and Cosmology, but also in fundamental theories of gravity and nuclear physics. The detection, identification, and accurate determination of the physical parameters of sources is crucial to validate (and challenge) not only our theories but also our astrophysical models, which rely both on precise experimental data and on the availability of template banks of theoretical waveforms. For the slow inspiral, when the neutron stars (NSs) or black holes (BHs) are widely separated, analytical approximations for the gravitational waveforms are provided by perturbative post-Newtonian (PN) expansion techniques [6].
For the last orbits and the merger, where the fields are particularly strong and most might be gained in terms of insight into fundamental physics, the PN expansion breaks down and the full Einstein equations have to be solved numerically. This became possible only after a series of breakthroughs in the field of Numerical Relativity [7][8][9], calling for the incorporation of this new type of information into data analysis strategies and methods. Since then, outstanding progress has been made in exploring the late stage of binary coalescence with numerical methods. The next sections summarize some of the foundations of Numerical Relativity, with a view towards the modeling of gravitational wave sources, from the construction of a well-posed evolution system to the numerical methods commonly employed to solve it. Notice that this review focuses on Cauchy formulations, excluding other alternatives. For a wider overview of all possible formulations, please see [10].

A. Einstein Equations
The equations of motion of a classical theory like General Relativity can be derived directly from a suitable action by using the Euler-Lagrange equations, leading to the well-known Einstein equations [11], where G_ab is the Einstein tensor, R_ab is the Ricci tensor of the spacetime represented by the metric g_ab, R ≡ g^ab R_ab is the Ricci or curvature scalar, and T_ab is the stress-energy tensor describing generically the matter-energy distributions in the spacetime. We have chosen geometric units such that G = c = 1, and adopt the convention that roman indices a, b, c, ... denote space-time components (i.e., from 0 to 3), while i, j, k, ... denote spatial ones (i.e., from 1 to 3). The Ricci tensor can be written in terms of the Christoffel symbols Γ^a_bc. Notice that Eqs. (1-3) form a system of ten non-linear partial differential equations (PDEs) for the spacetime metric components g_ab, which are coupled to the matter fields by means of the stress-energy tensor. On the other hand, an important relation in differential geometry, known as the (contracted) Bianchi identities, implies the covariant conservation law for the Einstein tensor and, consequently, for the stress-energy tensor, where ∇_a is the covariant derivative, the generalization of the partial derivative on a manifold. These covariant equations correspond to conservation laws for the energy and the linear momentum, which are the basic physical equations describing any matter field. Notice also that the Bianchi identities imply that four of the ten components of Einstein's equations cannot be independent. This redundancy gives rise both to the four coordinate degrees of freedom and to the four constraint equations, which will be clearly manifested in the 3+1 decomposition described in the next section.
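In explicit form, the relations just described (Eqs. 1-4 in the numbering above) take the standard textbook form:

```latex
% Einstein equations (Eq. 1)
G_{ab} \equiv R_{ab} - \tfrac{1}{2}\, g_{ab} R = 8\pi\, T_{ab}

% Ricci tensor in terms of the Christoffel symbols (Eqs. 2-3)
R_{ab} = \partial_c \Gamma^{c}{}_{ab} - \partial_a \Gamma^{c}{}_{cb}
       + \Gamma^{c}{}_{cd}\,\Gamma^{d}{}_{ab}
       - \Gamma^{c}{}_{ad}\,\Gamma^{d}{}_{cb}\,, \qquad
\Gamma^{a}{}_{bc} = \tfrac{1}{2}\, g^{ad}
   \left( \partial_b g_{dc} + \partial_c g_{db} - \partial_d g_{bc} \right)

% contracted Bianchi identities and stress-energy conservation (Eq. 4)
\nabla_a G^{ab} = 0 \;\;\Longrightarrow\;\; \nabla_a T^{ab} = 0
```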

B. The 3+1 decomposition
Despite its elegance and compactness, the covariant form of the four-dimensional Einstein equations is not well suited to describe how the gravitational fields evolve from an initial configuration towards the future. In that case, it is more intuitive to consider instead a time succession of three-dimensional spatial slice geometries, called a foliation, where the evolution of a given slice is determined by the Einstein equations (for more detailed treatments see for instance [12][13][14][15][16]). This 3+1 decomposition, in which the four-dimensional manifold is split into "space+time" components and the covariant Einstein equations are converted into evolution equations for three-dimensional geometric fields, can be summarized in the following steps:
• specify the choice of coordinates.
The covariance of the Einstein equations implies that they can be written in the same generic way in any system of coordinates, which can be defined by a set of observers. The spacetime can be foliated by a family of spacelike hypersurfaces Σ, which are intersected by a congruence of time lines that determine our observers (i.e., our system of coordinates). This congruence is described by the vector field t^a = α n^a + β^a, where n^a is the timelike unit vector normal to the spacelike hypersurfaces, α is the lapse function, which measures the proper time of the Eulerian (orthogonal) observers, and β^a is the shift vector, which measures the displacement, between consecutive hypersurfaces, of the time line t^a followed by the observers with respect to the normal n^a (see Figure 1).
• decompose every 4D object into its 3+1 components.
The choice of coordinates allows for the definition of a spatial projection tensor γ^a_b ≡ δ^a_b + n^a n_b. Any four-dimensional tensor can be decomposed into 3+1 pieces by using the spatial projector to obtain the spatial components, or by contracting with n^a for the time components. For instance, the line element measuring the distance between neighboring points can be written in these generic 3+1 coordinates in terms of the spatial three-dimensional induced metric γ_ij, which is just the projection of the four-dimensional metric g_ab onto the space-like hypersurface Σ. Other objects, like the stress-energy tensor, can also be decomposed into their various components, namely
τ ≡ T_ab n^a n^b , S_i ≡ −T_ab n^a γ^b_i , S_ij ≡ T_ab γ^a_i γ^b_j (7)
• write down the field equations in terms of the 3+1 components.
Within the framework outlined here, the induced metric γ_ij is the only unknown, since both lapse and shift are set by our choice of coordinates. In differential geometry it is also common to define an additional tensor K_ij, the extrinsic curvature, with a strong geometrical meaning, as it describes the change of the induced metric along the congruence of normal observers. Its definition involves the Lie derivative L_n, a generalization of the directional derivative along the vector n^a on a manifold. The definition of the extrinsic curvature, together with the 3+1 decomposition of the Einstein equations, forms a hyperbolic-elliptic system of PDEs, commonly known as the Arnowitt-Deser-Misner (ADM) formalism [17,18], where we define the trace of any three-dimensional tensor C_ij as trC = γ^ij C_ij. The hyperbolic evolution equations (8,10) for the evolved fields {γ_ij, K_ij} are complemented by the energy and momentum constraint equations (11,12), which have to be satisfied on each hypersurface.
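In the conventions above, the 3+1 line element, the extrinsic curvature and the ADM equations referred to as Eqs. (8)-(12) take the standard form (here R_ij, R and D_i denote the Ricci tensor, Ricci scalar and covariant derivative of the induced metric γ_ij):

```latex
% 3+1 line element in generic coordinates
ds^2 = -\alpha^2\, dt^2
     + \gamma_{ij}\left( dx^i + \beta^i dt \right)\left( dx^j + \beta^j dt \right)

% extrinsic curvature
K_{ij} \equiv -\tfrac{1}{2}\, \mathcal{L}_n \gamma_{ij}

% ADM evolution equations (Eqs. 8, 10)
(\partial_t - \mathcal{L}_\beta)\, \gamma_{ij} = -2\alpha K_{ij}
(\partial_t - \mathcal{L}_\beta)\, K_{ij} =
   - D_i D_j \alpha
   + \alpha \left( R_{ij} + \mathrm{tr}K\, K_{ij} - 2 K_{ik} K^{k}{}_{j} \right)
   - 8\pi\alpha \left[ S_{ij} - \tfrac{1}{2}\gamma_{ij}\left(\mathrm{tr}S - \tau\right) \right]

% energy and momentum constraints (Eqs. 11, 12)
\mathcal{H} \equiv R + (\mathrm{tr}K)^2 - K_{ij}K^{ij} - 16\pi\tau = 0
\mathcal{M}_i \equiv D_j \left( K^{j}{}_{i} - \delta^{j}{}_{i}\, \mathrm{tr}K \right) - 8\pi S_i = 0
```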
This system of equations needs to be completed with a specification of the coordinate system, that is, with a choice of lapse and shift {α, β^i}. The ADM formalism still preserves the covariance under spatial and time coordinate transformations (i.e., 3+1 covariance). Notice that, although manifest four-dimensional covariance is lost when performing the 3+1 decomposition, the solution space is still invariant under general coordinate transformations. One can take advantage of the contracted Bianchi identities to prove that the constraint equations (11,12) are just first integrals of the evolution equations (8,10), so that if the constraints are satisfied on an initial hypersurface (i.e., H = M_i = 0 at Σ_t), they will remain satisfied at all times. This redundancy of the equations allows for different evolution approaches. The most straightforward choice, the constrained evolution approach, involves solving simultaneously both the evolution equations and the constraints, but it presents several difficulties. From the theoretical point of view, it is not clear how to split the dynamical modes, solved through evolution equations, from the constrained ones, enforced by elliptic equations. From the computational point of view, elliptic equations are more expensive and harder to solve efficiently than hyperbolic ones. A simpler alternative is given by the free evolution approach, where the fields are obtained solely from the evolution equations, while the constraints are enforced only at the initial time (although they can be computed during the evolution to estimate the validity of the solution). Notice however that discarding the constraint equations breaks the underlying invariance of the solutions. Due to its simplicity, the free evolution approach has traditionally been the most common choice in Numerical Relativity applications without symmetry assumptions, particularly in efforts associated with the modeling of gravitational wave sources.
It is important to stress that any astrophysical scenario, except those involving only black holes, includes some type of matter, which will evolve on a curved spacetime as described by Eq. (4). We can also perform the 3+1 decomposition on this equation to obtain the evolution equations (13,14) for the matter energy density τ and the momentum density S_i. Notice that these equations need a closure relation S^k_i = S^k_i(τ, S_i) that will depend on the type of matter considered.

C. Formulations of the Einstein Equations
Any mathematical model representing a physical system must be described by a well-posed system of equations, meaning that there exists a unique bounded solution that depends continuously on the initial data. This requirement is relevant not only from a conceptual point of view, but is of crucial importance in computational applications: a numerical solution of an ill-posed problem is not guaranteed to converge to its corresponding continuum solution. A clear example of this undesired behavior can be observed in the ADM free evolution system resulting directly from the 3+1 decomposition of the Einstein equations. Although the ADM formalism was extensively used at the dawn of Numerical Relativity due to its simplicity, the presence of several numerical instabilities in the three-dimensional case made it unsuitable for computational applications. The reason behind these instabilities, as was shown in the nineties, was the ill-posedness of the ADM system in 3+1 dimensions when supplemented with standard gauge conditions.
Since then, there have been several attempts to construct well-posed free-evolution formalisms, either by selecting a particular gauge or by mixing the constraints with the evolution equations to modify the principal part of the system. The mathematical structure of the Einstein field equations was first investigated in a specific coordinate choice, called the harmonic gauge, in which the spacetime coordinates follow wave equations, which can be written as Γ^a ≡ g^bc Γ^a_bc = 0 [17]. This choice greatly simplifies the Einstein equations, which can then be written as a set of (well-posed) generalized wave equations, g^cd ∂_c ∂_d g_ab = H_ab(g, ∂g), where H_ab is a quadratic function of the first derivatives of the metric. This Harmonic formalism, written for different sets of fields and for generalized harmonic conditions [19,20], was used successfully to model the coalescence of compact objects, like black holes [21,22], boson stars [23] and neutron stars [24].
Another very convenient way to write down the Einstein equations is the Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formalism [25,26], which relies on three important modifications of the ADM system. First, it applies a conformal decomposition to the evolved fields, partially motivated by the fact that the Schwarzschild black hole solution is conformally flat. Therefore, a conformal metric γ̃_ij, with unit determinant, and a conformal, trace-less extrinsic curvature Ã_ij can be introduced. These new definitions involve the appearance of two new constraints, γ̃ = 1 and trÃ ≡ γ̃^ij Ã_ij = 0, which will be denoted as conformal constraints from now on, to distinguish them from the energy-momentum physical constraints. The second modification consists of extending the space of solutions by introducing a new evolved field Γ̃^i = γ̃^jk Γ̃^i_jk = −∂_j γ̃^ij, namely the contraction of the Christoffel symbols associated with the conformal metric. The third modification, which is essential to achieve a well-posed system, is to add the momentum constraint in a specific way to the evolution equation for this new quantity Γ̃^i (which is originally obtained, as usual, by taking the time derivative of its definition). Notice that the last two modifications are analogous to rewriting the momentum constraint as an evolution equation, and strongly affect the principal part of the system (i.e., the terms with derivatives of highest order), transforming the ill-posed free-evolution ADM system into a well-posed one when supplemented with appropriate gauge conditions [27,28]. This formalism, with the 1+log slicing and the gamma-freezing shift conditions described below, has been used successfully to model the coalescence of black holes without the need of excising the interior of the apparent horizons to remove the physical singularity from the computational domain [29,30], making it especially convenient for black hole simulations [31][32][33].
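One common choice of the conformal variables (conventions for the conformal factor vary between implementations; the χ form sketched below is the one later used in CCZ4, while the original BSSN papers use an exponential factor e^{4φ}) is:

```latex
% conformal decomposition: unit-determinant metric, trace-free curvature
\tilde{\gamma}_{ij} = \chi\, \gamma_{ij}\,, \qquad \chi = \gamma^{-1/3}\,, \qquad
\tilde{A}_{ij} = \chi \left( K_{ij} - \tfrac{1}{3}\, \gamma_{ij}\, \mathrm{tr}K \right)

% additional evolved field: contracted conformal Christoffel symbols
\tilde{\Gamma}^i \equiv \tilde{\gamma}^{jk}\, \tilde{\Gamma}^{i}{}_{jk}
                = -\partial_j \tilde{\gamma}^{ij}
```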
Notice however that the BSSN formalism was already being used successfully to model the coalescence of binary neutron stars [34,35], although the lack of advanced computational techniques like Adaptive Mesh Refinement (AMR) prevented the calculation of accurate waveforms until several years later.
An asymmetry of the BSSN formalism is manifest in the different treatment of the physical constraints, since the momentum constraint is mixed with the evolution equations but the energy constraint is not. Related to this, and like many other contemporary formalisms, BSSN does not include any mechanism to dynamically control the unavoidable constraint violations, which could grow significantly during a numerical simulation even if they are only seeded by tiny discretization errors [36]. The Z4 formalism, which was introduced as an extension of the Einstein equations to achieve a well-posed, hyperbolic evolution system free of constraints [37,38], allowed these issues to be addressed in an elegant, generally covariant way. The equations of motion, Eq. (16), can be derived from a suitable action via a Palatini-type variation [39], where Z_a is introduced as a new four-vector measuring the deviation from Einstein's solutions, which are those satisfying the algebraic condition Z_a = 0. Although the original formulation, corresponding to the choice κ_z = 0, is completely covariant, additional damping terms were included to dynamically enforce the decay of the physical constraint violations associated with Z_a [40]. As shown in [41], all the physical constraint modes are exponentially damped if κ_z > 0. However, since the damping terms are proportional to the unit normal of the time slicing n^a, the full covariance of the system is lost due to the presence of this privileged time vector. The 3+1 decomposition of the Z4 formalism given by Eq. (16) leads to evolution equations for the evolved fields {γ_ij, K_ij, Z_i, Θ}, where we have defined the normal projection Θ ≡ −n_a Z^a. Notice that now there are ten evolution equations for ten unknowns; the original elliptic constraints of the Einstein equations have been converted into evolution equations for the new four-vector Z_a, which can be understood roughly as the time integral of the energy and momentum constraints.
Einstein's solutions are recovered when the algebraic constraint Z a = 0 is satisfied. Finally, the most important feature is that the evolution system, when combined with suitable gauge conditions, is directly well-posed, without the need of further modifications [42].
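Schematically, and modulo the precise conventions for the damping terms (the damped formulation in the literature uses two parameters, here collapsed into the single κ_z of the text), the covariant Z4 equations, Eq. (16), read:

```latex
% Z4 field equations with constraint damping
R_{ab} + \nabla_a Z_b + \nabla_b Z_a
   = 8\pi \left( T_{ab} - \tfrac{1}{2}\, g_{ab}\, \mathrm{tr}T \right)
   + \kappa_z \left( n_a Z_b + n_b Z_a - g_{ab}\, n^c Z_c \right)

% Einstein's solutions are recovered when
Z_a = 0\,, \qquad \Theta \equiv -\, n_a Z^a
```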
The Z4 formalism has also been useful to understand the constraint evolution system (i.e., the subsidiary system) and the connections among different formalisms. For instance, the Harmonic formalism can be recovered from the Z4 one by substituting the harmonic condition with Γ^a = −2Z^a [37], and a version of BSSN can be obtained by a symmetry-breaking mechanism [38]. Along these lines, one can take advantage of the flexibility of the Z4 formalism to incorporate the ability to deal with black hole singularities without excision. The conformal and covariant Z4 (CCZ4) formalism [43] was constructed by performing the same conformal transformations as in the BSSN formalism (see also [44] for other conformal but non-covariant Z4 formulations) but using, instead of trK and Z_i, suitable combinations of them as evolved fields, so that the evolution equations are closer to those in the BSSN formulation. The full list of evolved fields is then {χ, γ̃_ij, trK, Ã_ij, Γ̂^i, Θ}, and their evolution equations can be found in [45], where the expression [...]^TF indicates the trace-free part with respect to the metric γ̃_ij. Notice that damping terms proportional to a free parameter κ_c have been included in order to dynamically control the conformal constraints, in exactly the same way as is done with the physical ones.

D. Gauge conditions
The principle of general covariance implies that the laws of physics, and in particular the Einstein equations, must take the same form for all observers, i.e., they have to be written in a generic tensor form valid in any system of coordinates. The choice of coordinates is commonly referred to as the gauge freedom, and it corresponds to defining the congruence of our observers, i.e., the time vector t^a, by setting the lapse and the shift. Notice that setting gauge conditions is not only necessary to close the system of equations: these additional degrees of freedom can also be useful both to avoid coordinate or physical singularities and to adapt to the underlying symmetries appearing in our simulations. Besides the summary presented here, further details on the different gauge conditions can be found for instance in [12][13][14][15][16].
The simplest gauge conditions, known as geodesic coordinates, are obtained by setting α = 1 and β^i = 0, so that the time coordinate coincides with the proper time of the Eulerian observers (i.e., those following timelike geodesics). A simple perturbation analysis shows, however, that any formalism supplemented with this choice of coordinates might suffer from unstable non-physical modes. Even worse, this gauge condition might also lead to coordinate singularities, since the Eulerian observers will focus onto a single point such that the spatial volume √γ → 0.
Coordinate pathologies can be prevented by imposing suitable geometrical conditions, which usually involve some type of elliptic equations [46]. This is the case, for instance, of the maximal slicing condition trK = 0 which, when imposed at all times, implies an elliptic equation for the lapse, Eq. (24). This slicing condition is called a singularity-avoiding condition because the lapse function α goes to zero when the spatial volume √γ goes to zero, avoiding coordinate singularities during the evolution by slowing down the proper time of the observers near strong-gravity regions. Another interesting geometrical property to enforce would be the minimal distortion condition, Eq. (25). This shift condition minimizes the changes in the shape of the volume elements, independently of their size. Both the maximal slicing and the minimal distortion conditions (24,25) are elliptic equations. These types of equations are computationally much more expensive than hyperbolic evolution ones, and are usually avoided or transformed into hyperbolic ones in the context of free evolution formalisms.
Indeed, hyperbolic evolution equations are preferred, and were already adopted early on to enforce some interesting property, like for instance the harmonic coordinates, which ensured the well-posedness of the Harmonic formalism [17]. A suitable family of evolution equations for the lapse is given by the Bona-Massó slicing condition [47] which, for any f(α) ≥ 1, is not only singularity avoiding but also maintains the well-posedness of the formalism. The case f(α) = 1 corresponds to the harmonic slicing condition, while f(α) → ∞ mimics the maximal slicing condition, Eq. (24). A common choice in numerical applications, especially those involving black holes, is the so-called 1+log slicing condition, corresponding to f(α) = 2/α. This choice has excellent singularity-avoidance properties, since near the physical singularity α → 0, mimicking the maximal slicing condition. A suitable family of hyperbolic dynamical equations for the shift vector β^i is given by the Gamma-driver condition [48], where g(α) is an arbitrary function of the lapse and η is a constant damping parameter introduced to avoid strong oscillations during the shift evolution. This gauge condition not only maintains the well-posedness of the BSSN and CCZ4 formalisms, but also mimics the minimal distortion condition, Eq. (25), thereby trying to minimize the stretching of the spatial coordinates. For numerical simulations involving black holes and neutron stars, standard values are g(α) = 3/4 and η ≈ 2/M, where M is the mass of the compact object. Notice that in most of the literature the evolution of the shift is written in terms of an auxiliary field B^i, which however does not seem necessary for most of the relevant numerical scenarios [29].
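Explicitly, and in standard conventions, the gauge conditions discussed in this section read (the Gamma-driver is shown in its simplest form, without the auxiliary field B^i):

```latex
% maximal slicing (Eq. 24): elliptic equation for the lapse
D^i D_i \alpha = \alpha \left[ K_{ij} K^{ij} + 4\pi \left( \tau + \mathrm{tr}S \right) \right]

% Bona-Masso slicing family; f(\alpha) = 2/\alpha gives the 1+log condition
\left( \partial_t - \beta^i \partial_i \right) \alpha = -\alpha^2 f(\alpha)\, \mathrm{tr}K

% Gamma-driver shift condition (simplest form, no auxiliary field)
\partial_t \beta^i = g(\alpha)\, \tilde{\Gamma}^i - \eta\, \beta^i
```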

III. NUMERICAL METHODS
In the same way that any reasonable physical model must be described by a well-posed PDE system, any numerical solution must satisfy the following three conditions: (i) consistency, meaning that the discrete derivative operators reduce to the continuum ones as the resolution (i.e., the number of discrete points sampling the continuum domain) increases; (ii) stability, such that the numerical solution is bounded and depends continuously on the initial data; and (iii) convergence, that is, the numerical solution tends to the continuum one as the resolution increases.
Fortunately, it is not necessary to prove these three conditions separately, since the Lax-Richtmyer equivalence theorem states that the numerical approximation of a well-posed problem is convergent if and only if the scheme is stable and consistent [49]. Since consistency can be obtained quite trivially, the relevant question here is how to discretize the equations such that the well-posedness of the continuum problem translates into stability of the discrete one. Let us consider a generic hyperbolic PDE at the continuum, ∂_t u = P(u, ∂_k u, ...), where P is the evolution operator, which can depend on arbitrary spatial derivatives of u. A popular technique to discretize this continuum problem is the Method of Lines (MoL), which decouples the treatment of the space and time coordinates [50]. In the first step, only the spatial dimensions are discretized, while time is left continuous. This semi-discrete problem consists of a set of ordinary differential equations for U_i(t) = u(t, x_i), one for each discrete spatial point x_i, separated by a mesh size Δx. The semi-discrete equations can formally be written by substituting P with a discrete version of the evolution operator, written in terms of discrete derivative operators D. In the second step, the fully discrete problem is obtained after discretizing in time, such that the fully discrete solution is given by U^n_i = u(t^n, x_i) at each discrete time t^n, separated by a time-step Δt. The discrete equations can formally be written again by substituting ∂_t by a discrete time integrator D_t. As shown in [51], the fully discrete problem preserves the stability of the semi-discrete problem if it is integrated with a locally stable time integrator, for instance any Runge-Kutta of at least 3rd order. Thus, the problem is reduced to ensuring the stability of the semi-discrete problem by choosing a suitable spatial discretization.
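As a concrete illustration of the two MoL steps, the following minimal sketch (illustrative names, not from any particular code) evolves the advection equation u_t = -u_x with a centered discrete derivative operator and a third-order Runge-Kutta time integrator:

```python
import numpy as np

# Method-of-lines sketch for the advection equation u_t = -u_x
# on a periodic grid (a minimal example under assumed parameters).
N = 200
dx = 2 * np.pi / N
x = np.arange(N) * dx
dt = 0.5 * dx           # CFL-limited time step

def rhs(u):
    # semi-discrete step: 2nd-order centered derivative operator D
    # (periodic boundaries via np.roll) replacing the continuum d/dx
    return -(np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def rk3_step(u, dt):
    # fully discrete step: 3rd-order strong-stability-preserving
    # Runge-Kutta (Shu-Osher form)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

u = np.sin(x)           # smooth initial data
for _ in range(100):
    u = rk3_step(u, dt)
# u now approximates the exact translated profile sin(x - t)
```

The stability of the pair (centered derivative, RK3) follows from the RK3 stability region covering a segment of the imaginary axis, consistent with the discussion above.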

A. Smooth solutions
For sufficiently smooth solutions, the full procedure to ensure convergence of the numerical solution can be found in [51], and is summarized as follows: starting from a well-posed system at the continuum, apply the MoL, discretize in space with derivative operators satisfying certain conditions, and then integrate in time with a Runge-Kutta of third order or higher. A problem at the continuum is well-posed if the solution satisfies an energy estimate bounding some norm of the solution at any fixed time. A tool used in the derivation of such energy estimates is the integration-by-parts rule. Analogously, a semi-discrete problem can be shown to be stable if the discrete difference operators D satisfy the summation-by-parts rule, the discrete analogue of integration by parts (see [52] and references therein for early works introducing these techniques in Numerical Relativity). For non-linear equations it is usually necessary to remove the high-frequency (unphysical) modes not accurately represented on the grid, which can grow continuously in time at any fixed resolution. The easiest way to damp these modes is to add a filtering operator Q_d U to the right-hand side of the semi-discrete equations, like for instance the Kreiss-Oliger dissipation operator [53]. This operator vanishes at infinite resolution, such that the semi-discrete problem remains consistent, and it is designed not to spoil the accuracy of the numerical scheme. Notice that not only the Einstein equations, but any hyperbolic system of non-linear PDEs without shocks or discontinuities, can be solved with these methods.
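A minimal sketch of the fourth-order Kreiss-Oliger operator (the variant suitable for second-order accurate schemes; periodic boundaries and illustrative names assumed):

```python
import numpy as np

def ko_dissipation(u, dx, sigma=0.1):
    # fourth-order Kreiss-Oliger dissipation,
    #   Q u = -(sigma/16) dx^3 (D+ D-)^2 u ,
    # added to the right-hand side of the semi-discrete equations.
    # It is O(dx^3) on smooth data (vanishes with resolution, so
    # consistency is preserved) but strongly damps grid-frequency modes.
    d4 = (np.roll(u, 2) - 4 * np.roll(u, 1) + 6 * u
          - 4 * np.roll(u, -1) + np.roll(u, -2))
    return -sigma / (16.0 * dx) * d4
```

On the highest-frequency mode representable on the grid, u_i = (-1)^i, the operator returns -(sigma/dx) u, i.e. a pure damping term, while on smooth data its contribution is below the truncation error of a second-order scheme.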

B. Non-smooth solutions
Although this is not the case for Einstein evolution systems, if the equations are genuinely non-linear, as in fluid dynamics, discontinuities and shocks (i.e., regions with a crossing of the characteristics of the system) might appear even from smooth initial data. Discrete operators based on Taylor expansions, which assume smoothness of the solution, will fail near these regions and produce artificial oscillations leading to unphysical solutions. Therefore, any spatial discretization able to handle shocks needs to take advantage of the integral, or weak, form of the equations [54]. Let us consider a system of non-linear PDEs, like the conservation of energy and momentum given by Eqs. (13,14), which can be written in the balance-law form ∂_t u + ∂_k F^k(u) = S(u) [55], where the fluxes F^k(u) and the sources S(u) depend on the fields but not on their derivatives. There are two popular schemes to discretize these equations, based either on finite volumes or on finite differences [56]. The starting point of the finite-volume approach is the integral of the balance-law equation over a spatial volume element (cell), where ū and S̄ are the volume averages of the corresponding quantities in the cell, and Gauss's theorem has been used to convert the volume integral of the fluxes into a surface one, with dS_k the surface element. This weak form can be easily discretized with a conservative scheme, namely ∂_t Ū_i = −(F_{i+1/2} − F_{i−1/2})/Δx + S̄_i (Eq. 31). The problem is then reduced to computing (i) the solution at the grid points U_i from the volume averages Ū_i, and (ii) the numerical flux at the interfaces, F_{i±1/2}. These two steps must be performed in such a way that the semi-discrete solution is Total Variation Diminishing (TVD), or at least Total Variation Bounded (TVB), meaning essentially that no new extrema are allowed in the solution, which prevents the appearance of artificial oscillations.
Notice that these conditions are more restrictive than stability, where the solution can still grow within certain bounds. The procedure to construct a shock-capturing scheme is the following. First, one needs to reconstruct the fields at the interfaces x_{i±1/2} using information either from the right (R) or from the left (L) of the interface (see Figure 2), namely (u^R_{i±1/2}, u^L_{i±1/2}). Commonly used high-order reconstructions, preserving the monotonicity of the solution to prevent spurious oscillations, are for example the Weighted-Essentially-Non-Oscillatory (WENO) reconstructions [56,57] and MP5 [58]. Then, a suitable flux formula is required to solve, at least approximately, the jump in the fields at each interface (i.e., the Riemann problem), by combining information from the right and from the left, namely F_{i±1/2} = F(u^R_{i±1/2}, u^L_{i±1/2}). This flux formula usually requires information on the characteristic structure of the system (i.e., its eigenvectors and eigenvalues). This approach has been the most commonly employed in binary neutron star simulations, see for instance [35,[59][60][61][62][63]. Higher-order schemes are relatively easy to achieve within the finite-difference approach, providing an efficient route to high-order shock-capturing methods [64]. However, high-order finite-difference numerical schemes applied to the magneto-hydrodynamics (MHD) equations have not been as robust as those based on finite volumes. Nowadays this is not a major inconvenience, and the possibility of achieving high-order accuracy is leading to more efforts to implement these methods in computational MHD codes [65,66]. Although the derivation is different, the conservative scheme given by Eq. (31) is still valid, where now Ū_i simply means U_i, the value of the field at the grid point.
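As a minimal sketch of a monotonicity-preserving reconstruction, the following uses a piecewise-linear minmod limiter (deliberately simpler than the WENO/MP5 schemes mentioned above; names and the periodic grid are assumptions of this example):

```python
import numpy as np

def minmod(a, b):
    # minmod limiter: the slope closest to zero when both candidate
    # slopes agree in sign, zero otherwise (flattens near extrema)
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct(u):
    # piecewise-linear TVD reconstruction on a periodic grid:
    # returns the left and right states (u^L, u^R) at interface i+1/2
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    uL = u + 0.5 * s                  # extrapolated from cell i
    uR = np.roll(u - 0.5 * s, -1)     # extrapolated from cell i+1
    return uL, uR
```

By construction the reconstructed interface states stay within the range of the neighboring cell values, so no new extrema are created, which is the TVD property described above.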
Again, the problem is reduced to computing a suitable numerical flux at the interfaces such that the solution is essentially non-oscillatory and preserves, or at least bounds, the Total Variation. The procedure starts by performing a Lax-Friedrichs splitting, introducing the following combination of fluxes and fields, F^±_i = (1/2)(F_i ± λU_i), where λ is the maximum eigenvalue in the neighborhood of the point. These combinations are interpolated at the interfaces by using a monotonic reconstruction, like the high-order ones discussed before. The flux at the left of the interface, F^L_{i+1/2}, is reconstructed using the values {F^+}, while the flux at the right, F^R_{i+1/2}, is reconstructed using the values {F^−}. The final numerical flux is obtained simply as F_{i+1/2} = F^R_{i+1/2} + F^L_{i+1/2}. At the lowest-order reconstruction, F^L_{i+1/2} = F^+_i and F^R_{i+1/2} = F^−_{i+1}, so that the final numerical flux reduces to the popular and robust Local-Lax-Friedrichs flux [54].
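A minimal conservative update using the Local-Lax-Friedrichs flux, sketched here for Burgers' equation on a periodic grid (an assumed toy problem with first-order reconstruction and illustrative names, not the full MHD system):

```python
import numpy as np

# Local-Lax-Friedrichs finite-volume step for Burgers' equation
#   u_t + (u^2/2)_x = 0 ,
# a standard toy problem whose smooth data steepen into a shock.

def flux(u):
    return 0.5 * u * u

def llf_step(u, dx, dt):
    uL = u                     # left state at interface i+1/2 (1st order)
    uR = np.roll(u, -1)        # right state at interface i+1/2
    lam = np.maximum(np.abs(uL), np.abs(uR))   # local max char. speed
    # Local-Lax-Friedrichs flux: centered average plus upwind dissipation
    F_half = 0.5 * (flux(uL) + flux(uR)) - 0.5 * lam * (uR - uL)
    # conservative update, Eq. (31) with no sources
    return u - dt / dx * (F_half - np.roll(F_half, 1))

N = 400
dx = 2 * np.pi / N
x = np.arange(N) * dx
dt = 0.4 * dx
u = np.sin(x)                  # smooth initial data; shock forms at t = 1
for _ in range(int(2.0 / dt)):
    u = llf_step(u, dx, dt)
```

Because the update is in flux-conservative form, the sum of u over the periodic grid is preserved to round-off even after the shock forms, and the monotone LLF flux introduces no new extrema.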
Finally, notice that efforts considering other techniques to solve self-gravitating neutron stars, like the discontinuous Galerkin methods [67], are underway and might be an interesting option in the near future.