DREENA-A framework as a QGP tomography tool

We present DREENA-A, a fully optimised framework based on a state-of-the-art energy loss model. The framework can include any, in principle arbitrary, temperature profile within the dynamical energy loss formalism: 'DREENA' stands for Dynamical Radiative and Elastic ENergy loss Approach, while 'A' stands for Adaptive. DREENA-A does not use fitting parameters within the energy loss model, allowing it to fully exploit differences in temperature profiles, which are the only input in the framework. The framework applies to light and heavy flavor observables, different collision energies, and both large and small systems. This, together with the ability to systematically compare data and predictions within the same formalism and parameter set, makes DREENA-A a unique multipurpose QGP tomography tool.

Successful production of this exotic state of matter at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) allowed systematic testing of different models of QGP evolution against experimental data. Up to now, it has been established that QGP is formed at the LHC and RHIC experiments through two main lines of evidence [4,5,8]: i) comparison of low momentum (p⊥) measurements with relativistic hydrodynamic predictions, which implied that the created QGP is consistent with the description of a nearly perfect fluid [9][10][11]; ii) comparison of high-p⊥ data [12][13][14][15][16] with pQCD predictions, which showed that high-p⊥ partons (jets) significantly interact with an opaque medium. Beyond this discovery phase, the current challenge is to investigate the properties of this extreme form of matter.
While high-p⊥ physics had a decisive role in the QGP discovery [4], it has rarely been used for understanding the bulk medium properties. On the other hand, low-p⊥ observables do not provide stringent constraints on all parameters of the models used to describe the evolution of QGP, and thus leave some properties of QGP poorly constrained [17][18][19][20]. It is therefore desirable to explore QGP properties through an independent set of theory and data. We argue that this is provided by jet energy loss and high-p⊥ data, complementing the low-p⊥ constraints on QGP.
To use high-p ⊥ theory and data as a QGP tomography tool, it is necessary to have a realistic high-p ⊥ parton energy loss model. We previously showed [21][22][23][24] that the dynamical energy loss formalism provides a reliable tool for such tomography. This formalism has the following properties necessary for inferring the bulk QGP medium parameters: i) It is based on finite size, finite temperature field theory [25,26], and takes into account that QGP constituents are dynamical (moving) particles. Consequently, all divergences are naturally regulated in the model.
ii) Both collisional [27] and radiative [28,29] energy losses are calculated within the same theoretical framework. iii) It is applicable to both light and heavy flavors, so it can provide predictions for an extensive set of probes. iv) Temperature is a natural variable in the framework [30], so that the T profiles resulting from bulk-medium simulations are a direct input to the model. v) The non-perturbative effects related to screening of the chromo-magnetic and chromo-electric fields are included [31], so that the model can also capture non-perturbative medium-related interactions. vi) No fitting parameters are used in comparing the dynamical energy loss predictions with high-p⊥ data [32,33], i.e., all the parameters have been fixed to standard literature values. For bulk-medium tomography, this allows concentrating only on the bulk-medium simulation parameters. While other available energy loss models (see, e.g., [34][35][36][37][38]) have some of the above properties, none has all (or even most) of them, making the dynamical energy loss a unique framework for QGP tomography.
Including full medium evolution in the dynamical energy loss is, however, a highly non-trivial task, as all the model properties have to be preserved [39], without additional simplifications in the numerical procedure. Furthermore, to be effectively used as a precision QGP tomography tool, the framework needs to efficiently (timewise) generate a comprehensive set of light and heavy flavor suppression predictions through the same numerical framework and the same parameter set. Such predictions can then be compared with the available experimental data, sometimes even repeatedly (i.e., iteratively, for different combinations of QGP medium parameters), to extract medium properties that are consistent with both low and high-p⊥ data.
To introduce the medium evolution in the dynamical energy loss, we took a step-by-step approach, allowing us to check the consistency of each consecutive step by comparing its results with the previous (simpler) framework versions. Consequently, we first developed the DREENA-C framework [40] ('C' stands for constant temperature), continuing to DREENA-B [41] ('B' stands for Bjorken expansion). In this manuscript, we present a fully optimised DREENA-A framework, where 'A' stands for 'adaptive' (i.e., arbitrary) temperature evolution. The convergence speed of the developed numerical procedure is analysed, as well as consistency with other (earlier) versions of the framework, as necessary for a reliable and efficient QGP tomography tool. Finally, as a utility check of the DREENA-A framework, the sensitivity of high-p⊥ observables to different temperature profiles is presented.

II. THEORETICAL OUTLINE

The quenched spectrum of hadrons H_Q is calculated as a generic pQCD convolution

E_f d^3σ(H_Q)/dp_f^3 = E_i d^3σ(Q)/dp_i^3 ⊗ P(E_i → E_f) ⊗ D(Q → H_Q),    (1)

where E_i d^3σ(Q)/dp_i^3 denotes the initial parton spectrum, computed at next-to-leading order [42,43] for light and heavy partons. P(E_i → E_f) is the probability for energy transfer, which includes medium-induced radiative [28,29] and collisional [27] contributions in a finite size dynamical QCD medium with running coupling [21]. Both contributions include multi-gluon fluctuations, introduced according to Refs. [21,44] for radiative and [45,46] for collisional energy loss (for more details, see below). Q to hadron H_Q fragmentation is denoted by D(Q → H_Q). For charged hadrons we use DSS [47], for D mesons BCFY [48], and for B mesons KLP [49] fragmentation functions, respectively. The corresponding unquenched spectrum is

E_f d^3σ_u(H_Q)/dp_f^3 = E_i d^3σ(Q)/dp_i^3 ⊗ D(Q → H_Q).    (2)
In DREENA-A, the medium temperature needed to calculate P(E_i → E_f) depends on the position of the parton according to a temperature profile given as an input. Therefore, the temperature that the parton experiences along its path becomes a function of the coordinates of its origin (x_0, y_0), the angle of its trajectory φ, and the proper time τ:

T(x_0, y_0, φ, τ) = T_profile(x_0 + τ cos φ, y_0 + τ sin φ, τ),    (3)

where T_profile is, in principle, arbitrary. This temperature then appears in the expressions below.
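As an illustration, the temperature lookup along a straight-line trajectory can be sketched as follows. All names here are illustrative stand-ins: `T_profile` is any callable interpolating the gridded input profile, and `toy_profile` is a simple Bjorken-like toy used only for demonstration, not a DREENA-A input.

```python
import math

def temperature_along_path(x0, y0, phi, tau, T_profile):
    """Temperature experienced by a parton produced at (x0, y0) moving
    at angle phi, evaluated at proper time tau.
    `T_profile(x, y, tau)` is an illustrative callable, not DREENA-A API."""
    x = x0 + tau * math.cos(phi)
    y = y0 + tau * math.sin(phi)
    return T_profile(x, y, tau)

# Toy profile: Bjorken-like cooling, azimuthally symmetric Gaussian in space.
def toy_profile(x, y, tau, T0=0.4, tau0=1.0, R=5.0):
    tau = max(tau, tau0)  # no evolution before the initial time
    return T0 * (tau0 / tau) ** (1.0 / 3.0) * math.exp(-(x * x + y * y) / (2 * R * R))
```

The parton's position and the local time both enter the lookup, so the temperature it sees decreases as it moves outward and as the medium cools.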
The collisional energy loss per unit length, dE_col/dτ (Eq. (4)), is given by the analytical expression of Ref. [41]. In it, k is the 4-momentum of the incoming medium parton, and T is the current temperature along the path, given by Eq. (3).
The mass term appearing in the radiative energy loss expression (Eq. (7)) corresponds to the off-shellness of the jet prior to the gluon radiation [28]. Note that all α_S terms in Eqs. (4) and (7) are infrared safe (and, moreover, of a moderate value) [21]. Thus, contrary to the majority of other approaches, we do not need to introduce a cut-off in α_S(Q^2).
We further assume that radiative and collisional energy losses are small, i.e., much smaller than the initial jet energy, so that these contributions can be treated separately; that is, jet quenching is performed via two independent branching processes [21,45].
To obtain the radiative energy loss contribution to the suppression [44], we start with Eq. (7) and, for a given trajectory, first compute the mean number of gluons emitted due to induced radiation (further denoted N_tr(E)), as well as the mean number of gluons emitted per fractional energy loss x (i.e., dN_tr(E)/dx, for compactness further denoted N_tr(E, x)); these are the integrals in Eq. (8), where the subscript tr indicates that the value depends on the trajectory. Radiative energy loss suppression takes multi-gluon fluctuations into account and, if we assume that the fluctuations of the gluon number are uncorrelated, the radiative energy loss probability can be expressed via a Poisson expansion (Eq. (9)) [21,44]; E_i and E_f are the initial and final jet energy (before and after the radiative process).
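The Poissonian treatment of multi-gluon fluctuations can be illustrated by a small sampling sketch, assuming uncorrelated gluon emissions. Here `n_mean` and `x_density` are hypothetical stand-ins for the tabulated N_tr(E) and dN_tr(E)/dx, and the rejection sampler is our own simplification for illustration; DREENA-A evaluates the analytic Poisson expansion directly rather than sampling.

```python
import math, random

def sample_fractional_loss(n_mean, x_density, x_max=0.5, rng=random):
    """One sample of the total fractional radiative energy loss, assuming
    uncorrelated (Poissonian) gluon-number fluctuations.
    `n_mean` plays the role of N_tr(E); `x_density(x)` is proportional to
    dN_tr(E)/dx. Both are illustrative stand-ins."""
    # Poisson-distributed number of emitted gluons (inverse-CDF sampling)
    p = rng.random()
    n = 0
    term = math.exp(-n_mean)
    cumulative = term
    while p > cumulative:
        n += 1
        term *= n_mean / n
        cumulative += term
    # Each gluon carries a fraction x, drawn from x_density by rejection sampling.
    fmax = max(x_density(x_max * i / 100.0) for i in range(1, 101))
    total_x = 0.0
    for _ in range(n):
        while True:
            x = rng.random() * x_max
            if rng.random() * fmax <= x_density(x):
                total_x += x
                break
    return min(total_x, 1.0)  # total fractional loss cannot exceed unity
```

Averaging many such samples reproduces the expected mean loss, roughly N_tr(E) times the mean fractional loss per gluon.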
To calculate the parton spectrum after radiative energy loss, we convolute the initial spectrum with this probability, where the final spectrum is obtained after integrating over p_i > p_f.
To find the collisional energy loss contribution, Eq. (4) is first integrated over the given trajectory (Eq. (11)). For collisional energy loss, the full fluctuation spectrum is approximated by a Gaussian centred at the average energy loss E^tr_col(E) [45,46], with a variance determined by the average temperature T_tr along the trajectory; E_i and E_f are the initial and final energy (before and after the collisional process).
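A minimal sketch of the Gaussian approximation for collisional fluctuations, with the mean and width supplied as inputs (the actual variance expression, set by the average temperature along the trajectory, is not reproduced here). Negative `delta_E` represents energy gain, which also contributes to the spectrum.

```python
import math

def collisional_loss_probability(delta_E, mean_loss, sigma):
    """Gaussian approximation for the collisional energy-loss fluctuation
    spectrum, centred at the trajectory-averaged loss `mean_loss`.
    `sigma` is taken as an input in this sketch; in DREENA-A the variance
    follows from the average temperature along the trajectory.
    delta_E < 0 corresponds to collisional energy gain."""
    z = (delta_E - mean_loss) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
```

The distribution is symmetric about the mean loss and normalised to unity, so loss and gain contributions are automatically balanced around E^tr_col(E).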
To calculate the quenched hadron spectrum after collisional energy loss, we apply Eq. (14), where we assume E_i,C = E_f,R, i.e., the final jet energy after radiative quenching corresponds to the initial jet energy for collisional quenching. Since both collisional energy loss and gain contribute to the final spectra [27,45], both E_i,C > E_f and E_i,C < E_f have to be taken into account in Eq. (14). Finally, the hadron suppression R^tr_AA(p_f, H_Q) for a single trajectory, after radiative and collisional energy loss, is equal to the ratio of the quenched and unquenched momentum spectra (Eq. (15)), where E_f d^3σ_u(H_Q)/dp_f^3 is given by Eq. (2). R^tr_AA(p_f, H_Q) then needs to be averaged over trajectories with the same direction angle φ to obtain the suppression as a function of angle, R_AA(p_f, φ, H_Q). This is an important intermediary step since, depending on the details of the QGP temperature evolution and the spatial variations in the temperature profile, the energy loss may significantly depend on the parton's direction of motion [90]. Once we have calculated R_AA(p_f, φ, H_Q), we can easily evaluate the R_AA and v_2 observables as in [58] (here we omit H_Q in the expressions). While the general expressions of the dynamical energy loss formalism are the same as in the DREENA-B framework [41], the fact that, in DREENA-A, the temperature entering Eqs. (4-7) explicitly depends on the current parton position notably complicates the implementation of these formulas, as we discuss in the following section.
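The final step, extracting R_AA and v_2 from the direction-dependent suppression, can be sketched as below, assuming the standard harmonic form R_AA(φ) ≈ R_AA (1 + 2 v_2 cos 2φ). The discretisation and function names are illustrative, not the DREENA-A implementation.

```python
import math

def raa_and_v2(raa_phi, n_phi=360):
    """R_AA and v2 from the direction-dependent suppression R_AA(phi),
    assuming R_AA(phi) ~ R_AA * (1 + 2 v2 cos(2 phi)).
    `raa_phi` is any callable returning the suppression at angle phi."""
    dphi = 2.0 * math.pi / n_phi
    num = den = 0.0
    for i in range(n_phi):
        phi = (i + 0.5) * dphi  # midpoint rule over [0, 2 pi)
        r = raa_phi(phi)
        den += r * dphi
        num += r * math.cos(2.0 * phi) * dphi
    raa = den / (2.0 * math.pi)   # angular average
    v2 = num / den                # second Fourier harmonic
    return raa, v2
```

For a periodic smooth integrand, the equally spaced midpoint rule converges extremely fast, so a few hundred angles suffice.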

III. FRAMEWORK DEVELOPMENT
Our previous DREENA-C and DREENA-B frameworks were based on computationally convenient but rough approximations of the medium evolution: in DREENA-C there was no evolution, and the temperature remained constant both in time and along the spatial dimensions, while in DREENA-B the medium was assumed to evolve according to the 1D Bjorken approximation [59].
Due to these approximations, the parton energy loss depended on its path length independently of its direction or production point. This allowed us to analytically integrate the energy-loss formulas to a significant extent, which notably reduced the number of required numerical integrations. Furthermore, R_AA only needed to be averaged over precalculated path-length distributions. Thus, these approximations of the medium evolution straightforwardly led to efficient computational algorithms for DREENA-C and DREENA-B.
The DREENA-A framework, on the other hand, addresses fully general medium dynamics, with an arbitrary spatio-temporal temperature distribution. The main input to the algorithm is the temperature profile T_profile, given as a three-dimensional matrix of temperature values at points with coordinates (x, y, τ) (in the input file, the values should be arranged in an array of quartets of the form (τ, x, y, T_profile), and the lowest value of τ appearing in the data is taken to be τ_0). In addition to the temperature profile, the DREENA-A algorithm also takes, as inputs, the initial parton p⊥ distributions d²σ/dp⊥² (each as an array of (p⊥, d²σ/dp⊥²) pairs) and the jet production probability distribution (as a matrix of probability density values in the transversal plane, formatted analogously to the profile temperature values). This level of generality requires a different approach than in the previous frameworks. Since the DREENA-A algorithm takes an arbitrary medium temperature evolution as the input, the energy loss has to be calculated individually for each parton trajectory.
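A minimal reader for the quartet-formatted temperature profile described above might look as follows; everything beyond 'whitespace-separated (τ, x, y, T) quartets' is an assumption of this sketch.

```python
def parse_temperature_profile(lines):
    """Read a (tau, x, y, T) quartet table into a dictionary keyed by
    (tau, x, y), and return the earliest time tau0 found in the data.
    The exact file layout beyond whitespace-separated quartets is an
    assumption of this sketch, not the DREENA-A specification."""
    profile = {}
    for line in lines:
        fields = line.split()
        if len(fields) != 4:
            continue  # skip blank or malformed lines
        tau, x, y, T = map(float, fields)
        profile[(tau, x, y)] = T
    tau0 = min(k[0] for k in profile)  # lowest tau in the data is taken as tau0
    return profile, tau0
```

In practice the gridded values would then feed a trilinear interpolator in (x, y, τ), so the temperature can be queried at arbitrary points along a trajectory.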
This means that for each trajectory, given by the coordinates x_0 and y_0 of the parton origin (in the transversal plane) and the direction angle φ, we must first numerically evaluate the integrals (8) and (11). Since the current parton position, for a given trajectory, becomes a function of the proper time τ, the integrands in (8) and (11) also become functions of τ, either through an explicit dependence or via the position- and time-dependent medium temperature (3). We numerically integrate these functions along the trajectory (parametrized by τ as x = x_0 + τ cos φ, y = y_0 + τ sin φ), starting from the origin at (x_0, y_0) and moving in small integration steps along the direction φ (in practice, a 0.1 fm step is sufficiently small for most of the profiles). The integration is terminated when the medium temperature at the current parton position drops below T_c = 155 MeV [60], i.e., when the parton leaves the QGP phase. Also, we approximate that there are no energy losses before the initial time τ_0 (which is a parameter of the temperature profile evolution), and thus the first part of the trajectory, corresponding to τ < τ_0, is effectively skipped (i.e., τ_0 is taken as the lower limit of integration in (8) and (11)).
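The trajectory integration loop described above can be sketched as follows, with `integrand` standing in for the tabulated energy loss integrands and `T_profile` for the interpolated input profile; all names and default values are illustrative.

```python
import math

def integrate_along_trajectory(x0, y0, phi, T_profile, integrand,
                               tau0=0.6, dtau=0.1, T_c=0.155, tau_max=30.0):
    """Accumulate `integrand(tau, T)` along the path x = x0 + tau cos(phi),
    y = y0 + tau sin(phi), starting at tau0 (no energy loss before tau0)
    and stopping when T drops below T_c = 155 MeV (temperatures in GeV).
    `integrand` is an illustrative stand-in for the tabulated
    dE_col/dtau or gluon-number integrands."""
    total = 0.0
    tau = tau0
    while tau < tau_max:
        x = x0 + tau * math.cos(phi)
        y = y0 + tau * math.sin(phi)
        T = T_profile(x, y, tau)
        if T < T_c:
            break  # the parton has left the QGP phase
        total += integrand(tau, T) * dtau
        tau += dtau
    return total
```

With a unit integrand, the routine simply measures the in-medium path duration, which is a convenient sanity check.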
Once we compute, for a given trajectory, the integrals (8) and (11), we then perform the rest of the procedure laid out by Eqs. (8)-(15). Most of the computation time is spent on numerical integrations, in particular on evaluating the integrals in Eqs. (9,10). While, in principle, n → ∞ in Eq. (9), in practice we show that n = 5 is sufficient for convergence in the case of quark jets, while for gluon jets n = 7 is needed. In general, the quasi-Monte Carlo integration method turned out to be the most efficient and is used for all these integrals (as quasirandom numbers, we use precalculated and stored Halton sequences). The result of the integration (15) is the final hadron suppression R^tr_AA(p⊥, H_Q) for the jet moving along the chosen trajectory, given as a function of its transversal momentum.
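For illustration, a Halton sequence and a generic quasi-Monte Carlo estimate over the unit square can be coded in a few lines; DREENA-A's actual integrands and dimensionality differ, and its sequences are precalculated and stored rather than generated on the fly.

```python
def halton(index, base):
    """The index-th element (1-based) of the van der Corput sequence in
    `base`; pairing coprime bases (e.g. 2 and 3) gives a 2-D Halton
    sequence of low-discrepancy points in the unit square."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def qmc_integrate_2d(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over [0,1]^2,
    a generic sketch of the method used for the DREENA-A integrals."""
    return sum(f(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n
```

Compared with pseudorandom Monte Carlo, the low-discrepancy points fill the domain more uniformly, so the error typically falls off close to 1/n rather than 1/sqrt(n) for smooth integrands.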
To obtain R AA (p ⊥ , φ, H Q ), we have to average this result over all production points (taking into account the provided jet production probability distribution) and repeat the procedure for many angles φ. In practice, this means that we must evaluate energy loss along a very large number of trajectories. This has significantly increased the computational complexity of the problem compared to DREENA-C and DREENA-B and required a number of optimisations.

A. Numerical optimisation of DREENA-A
We started by adapting optimisation methods that we successfully implemented in earlier versions. One useful approach was tabulation and consequent interpolation of values of computationally expensive functions. In particular, this is crucial for the complicated integrals (4-7): while a two-dimensional array is sufficient to tabulate dE_col/dτ (which is a function of T and p), values of d²N_rad/(dx dτ) (depending on τ, T, p, and x) must be stored in a four-dimensional array. Tabulating such functions is done adaptively, with the density of evaluated points varying depending on the function behaviour (i.e., using a denser grid where the functions change rapidly and a sparser one where the behaviour is smooth). In the case of these two functions, not only can the consequent interpolation significantly reduce the overall number of integral evaluations, but the corresponding tables (for each particle type) can be evaluated only once and then permanently stored and reused for all trajectories and even for different temperature profiles. To further optimise the algorithm, we also precalculate the values of the integral ∫ (d²N_rad/(dx dτ)) dx and store them in a corresponding three-dimensional array (since it is a function of τ, T, and p).
When using this table-interpolation method, it is often necessary to transform a function before tabulation: e.g., it is more efficient and accurate to sample and later interpolate the logarithm of a rapidly (nearly or approximately exponentially) increasing function than the function itself (similarly, it is sometimes better to tabulate a ratio or a product of functions than each of the functions separately). For example, it is much more efficient to tabulate and consecutively interpolate R_AA values (and other similarly behaving expressions) than the corresponding momentum distributions. This methodology is now extensively applied throughout DREENA-A (from some intermediate-level energy loss results to evaluating multi-dimensional integrals in the calculation of radiated gluon rates). Given the size of some of these tables and the number of interpolations needed, we ensured that the table lookup and interpolation algorithm is efficient.
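The logarithm-transformation trick can be demonstrated on a toy example: for a near-exponential function, linear interpolation of log f is dramatically more accurate than linear interpolation of f itself. The function and node choices below are illustrative, not DREENA-A's actual tables.

```python
import math

def build_log_table(f, xs):
    """Tabulate log(f) at sorted nodes xs; interpolate linearly in log
    space and exponentiate on lookup. For near-exponential functions this
    is far more accurate than interpolating f directly (the transformation
    trick described in the text; choices here are illustrative)."""
    logs = [math.log(f(x)) for x in xs]

    def lookup(x):
        for i in range(len(xs) - 1):
            if xs[i] <= x <= xs[i + 1]:
                t = (x - xs[i]) / (xs[i + 1] - xs[i])
                return math.exp(logs[i] + t * (logs[i + 1] - logs[i]))
        raise ValueError("x outside tabulated range")

    return lookup

# For f(x) = exp(5x), log-space linear interpolation is essentially exact,
# while direct linear interpolation between the same nodes is badly off.
f = lambda x: math.exp(5.0 * x)
lookup = build_log_table(f, [0.0, 0.5, 1.0, 1.5, 2.0])
```

At x = 0.75, the log-space table reproduces f almost exactly, whereas the straight average of the bracketing node values overshoots by a large factor.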
As we encounter multiple numerical integrations at different stages of the computation, modifying their order was another type of optimisation: the natural order (from the theoretical viewpoint) is not necessarily followed but is instead adapted to the particular function behaviour.
Specifically, it turned out that a different order of integration (for the radiative contribution) is optimal for heavy flavor particles compared to gluons. That is, while it is natural, from the physical perspective, to start with the initial momentum distributions of partons and integrate over the radiative energy loss (see Eqs. (9,10)), it turned out that (for heavy flavors) the shape of the initial distributions necessitates a very large number of integration points to achieve the required computation precision. Reorganising the formulas and postponing the integration over the initial distributions to the very end turned out to be much more computationally efficient for heavy flavor. A similar procedure in the case of light quarks allowed much of the integration to be carried out jointly for all quarks, since their effective masses are the same but their initial p⊥ distributions differ.
Overall, this type of optimisation led to four times faster execution.
The crucial optimisation in DREENA-A is the method used for averaging over the particle trajectories. In suppression calculations, it is common to carry out the averaging over production points and directions by Monte Carlo (MC) sampling, but it turned out that equidistant sampling of both jet production points and direction angles was significantly more efficient here. We initially implemented the Monte Carlo approach, randomly selecting both the origin coordinates and the angles of particle trajectories. The binary collision density was used as the probability density for the coordinates of the origins, while the angles were generated from a uniform distribution.
Convergence of the results using this method required a large number of sampled trajectories, as illustrated in Fig. 1. The figure shows R_AA and v_2 results obtained by the DREENA-A algorithm for different total numbers of trajectories (the computation was done for D mesons traversing the temperature evolution generated using a Glauber-initialised viscous hydrodynamic code [61], in the 30-40% centrality class). The plots in the right column of Fig. 1 show the magnitude of the deviation of each particular curve from the median curve, where the latter is the arithmetic mean of all curves in the plot. We see that R_AA convergence is easily achieved: relative deviations of the order of 1% are obtained by taking into account only 2500 trajectories (see Figs. 1-A and 1-A*). Computing the v_2 value requires many more trajectories, i.e., we see a substantial scattering of the Monte Carlo results with 2500 trajectories, while ∼10^6 trajectories are needed to reduce the relative deviation below 1%. Note that a small number of sampled trajectories also causes a systematic error: the smaller the number of trajectories, the lower the averaged v_2.
When using the equidistant sampling method instead of Monte Carlo, we divide the transverse plane into an equidistant grid, whose points are used as jet origins. The energy loss for each trajectory is then weighted with the jet production probability at each point and summed up. As the production probability, we used the binary collision density evaluated using the optical Glauber model. In Fig. 2, we see that already for ∼10,000 evaluated trajectories the integral has converged to within 1% of the estimated 'proper' value. This modification resulted in a more than two orders of magnitude reduction of the execution time. We also tested two hybrid variants: i) where trajectory origins were selected randomly but directions equidistantly, and ii) where production points were selected equidistantly but directions were randomly sampled. The corresponding comparisons, for R_AA and v_2, are shown in the upper panels of Figure 4, respectively. The lower panels of Figure 4 show the comparison of all three frameworks on a hard-cylinder collision profile constant in time (for this comparison, we modified the DREENA-B code to remove the temperature dependence on time). We see that all frameworks lead to consistent results (up to computational precision), supporting the reliability of DREENA-A.
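The equidistant-grid averaging can be sketched as a weighted sum over grid points, with a Gaussian standing in for the binary collision density (DREENA-A uses the optical Glauber density) and a path-length-like toy observable; all names and parameter values are illustrative.

```python
import math

def average_over_grid(observable, weight, half_size=8.0, n=40):
    """Weighted average of `observable(x, y)` over an equidistant n x n
    grid of production points in the transverse plane, with `weight(x, y)`
    playing the role of the binary collision density. Deterministic
    replacement for Monte Carlo sampling of production points."""
    num = den = 0.0
    step = 2.0 * half_size / n
    for i in range(n):
        x = -half_size + (i + 0.5) * step
        for j in range(n):
            y = -half_size + (j + 0.5) * step
            w = weight(x, y)
            num += w * observable(x, y)
            den += w
    return num / den

# Toy stand-ins: Gaussian production density (sigma = 2 fm) and a
# path-length-like observable r = sqrt(x^2 + y^2).
density = lambda x, y: math.exp(-(x * x + y * y) / 8.0)
observable = lambda x, y: math.hypot(x, y)
```

For smooth densities, such a deterministic grid converges with far fewer evaluated points than random sampling, which mirrors the two-orders-of-magnitude speedup reported above.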

IV. RESULTS
To demonstrate the utility of the DREENA-A approach, we generated temperature profiles for Pb+Pb collisions at the full LHC energy (√s_NN = 5.02 TeV) and Au+Au collisions at the RHIC energy (√s_NN = 200 GeV) using three different initialisations of the fluid-dynamical expansion.
First, we used the optical Glauber initialisation at initial time τ_0 = 1.0 fm without initial transverse flow. The evolution of the fluid was calculated using the 3+1D viscous fluid code of Ref. [61]. The parameters describing collisions at the LHC energy were tuned to reproduce the low-p⊥ data obtained in Pb+Pb collisions at √s_NN = 5.02 TeV [32]. In particular, the shear viscosity over entropy density ratio was constant, η/s = 0.12, there was no bulk viscosity, and the equation of state (EoS) parametrisation was s95p-PCE-v1 [62]. For the RHIC energy we used the same code to evolve the Glauber initialisation, but restricted to a boost-invariant expansion. In this case, the initial time was τ_0 = 0.2 fm, and the parameters were the favoured values of a Bayesian analysis of the data from Pb+Pb collisions at √s_NN = 2.76 and 5.02 TeV and from Au+Au collisions at √s_NN = 200 GeV, using the EoS parametrisation s83s_18 [18]. In particular, there was no bulk viscosity, and the minimum value of the temperature-dependent η/s was 0.18.
Our third option was the TRENTo initialisation [66] evolved using the VISH2+1 code [67], as described in [68,69]. To describe collisions at the LHC, the parameters were based on a Bayesian analysis of the data at the two LHC collision energies mentioned above [69]; that analysis was done event-by-event, whereas we carried out the calculations using simple event-averaged initial states. In particular, the calculation included a free-streaming stage until τ_0 = 1.16 fm, an EoS based on the lattice results of the HotQCD collaboration [70], and temperature-dependent shear and bulk viscosity coefficients with a minimum value of (η/s)_min = 0.081 and a maximum of (ζ/s)_max = 0.052. For RHIC, we used the 'PTB' maximum a posteriori parameter values from Ref. [71], but changed the temperature-dependent shear viscosity coefficient (η/s)(T) to a constant η/s = 0.16.
All these calculations lead to acceptable fits to the measured charged hadron multiplicities, low-p⊥ spectra, and p⊥-differential v_2 in the 10-20%, 20-30%, 30-40%, and 40-50% centrality classes. As one may expect, different initialisations and initial times lead to visibly different temperature evolutions. This is demonstrated in Fig. 5, where we show the calculated temperature distributions in collisions at the LHC energy at various times. Even though the initial anisotropy of the Glauber initialisation is the lowest, its anisotropy at later times is the largest, since the very early start of the EKRT initialisation, and the early free streaming of TRENTo, dilute the spatial anisotropy very fast. Similarly, the early start of EKRT leads to a large initial temperature.
To test if these visual differences can be quantified through high-p⊥ data, we calculated the high-p⊥ R_AA and v_2 observables at the LHC and RHIC energies. Since the differences in evolution are due to different initialisations and different properties of the fluid (EoS and/or dissipative coefficients), the R_AA and v_2 observables can be used to provide further constraints on the fluid properties.

[Figure caption: Charged hadron (left), D meson (middle), and B meson (right) predictions are generated for the 20-30% centrality region. The h± predictions are compared with π0 data from PHENIX [81,82] and h± data from STAR [83,84]; note that for v_2 the 10-40% centrality data is shown for STAR. For D mesons, the predictions are compared with STAR [85,86] data at 10-40% centrality. The boundary of each gray band corresponds to 0.4 < µ_M/µ_E < 0.6 [56,57].]

We note here that even low-p⊥ data could be used to differentiate our three evolution scenarios, but such an analysis would require evaluating χ² or a similar measure of the quality of the fit, or computing Bayes factors [71]. The high-p⊥ observables, on the other hand, show clear differences visible to the naked eye.
Moreover, from Figs. 6 and 7, we see that all types of flavor, at both RHIC and the LHC, show clear sensitivity to differences in the medium evolution, making them equally suitable for exploring the bulk QGP properties with high-p⊥ data. With the expected availability of precision data from the upcoming high-luminosity experiments at RHIC and the LHC (see, e.g., [87][88][89]), the DREENA-A framework thus provides a unique opportunity to constrain these properties.
We propose that an adequate medium evolution should reproduce high-p⊥ observables at both RHIC and LHC experiments, for different collision energies and collision systems, with reasonable accuracy. As demonstrated in this study, equal emphasis should be given to light and heavy flavor, as they provide valuable independent constraints on the bulk medium evolution.
Overall, DREENA-A provides a versatile tool to put large amounts of data generated at RHIC and LHC experiments to optimal use.

V. SUMMARY
We presented here the optimised DREENA-A computational framework. The tool is based on a state-of-the-art energy loss calculation and can include arbitrary temperature profiles. This feature allows fully exploiting differences in temperature profiles, which are the only input in the framework.
We showed that the calculated high-p ⊥ R AA and v 2 exhibit notable sensitivity to the details of the temperature profiles, consistent with intuitive expectations based on the profile visualisation.
The DREENA-A framework applies to different types of flavor, collision systems, and collision energies. It can, consequently, provide an efficient and versatile QGP tomography tool for further constraining the bulk properties of this extreme form of matter.