The top-quark mass: challenges in definition and determination

The top-quark mass is a parameter of paramount importance in particle physics, playing a crucial role in electroweak precision tests and in the stability of the Standard Model vacuum. I will discuss the main strategies to extract the top-quark mass at the LHC and the interpretation of the measurements in terms of well-posed top-mass definitions, paying particular attention to renormalon ambiguities, to progress in Monte Carlo event generators for top physics and to theoretical uncertainties.


INTRODUCTION
The mass of the top quark is a fundamental parameter of the Standard Model, since it enters the electroweak precision tests [1] and constrained the mass of the Higgs boson even before its discovery at the LHC. It plays a role in Higgs inflation models (see Refs. [2,3] for some recent work on the subject), while the property of the electroweak vacuum of lying on the boundary between the stability and metastability regimes [4] does depend on the actual values and definitions of the top and Higgs masses used in the computation. 1 Also, in the determination of the lifetime of the Universe, undertaken in [5], part of the uncertainty is related to the top-quark mass.
In such calculations, one typically assumes that the measured top-quark mass, whose current world average reads m_t = [173.34 ± 0.27(stat) ± 0.71(syst)] GeV [6], corresponds to the pole mass, and possibly adds errors of the order of a few hundred MeV to account for possible deviations from this identification. For instance, changes in the central value or in the uncertainty of m_t may affect the results in [4], to the point of even moving the vacuum position inside the stability or instability regions. It is therefore of paramount importance to determine m_t at the LHC with the highest possible precision, estimating reliably all sources of uncertainty and eventually interpreting the results in terms of field-theory mass definitions.
More generally, the top-quark mass is determined by comparing experimental data with theory predictions, so that the measured mass has to be identified with the parameter m_t employed in the calculations. From the viewpoint of the techniques used in the extraction, one usually labels as 'standard measurements' those relying on the direct reconstruction of the top-decay products by means of the template, matrix-element or ideogram methods, and as 'alternative measurements' the top-mass determinations which use suitably defined observables, such as the total production cross section or peaks/endpoints of differential distributions. It is remarkable that, up to now, such classes of mass determinations have never been combined.
From the theory side, as most top-mass extractions use Monte Carlo shower codes, the quantity which is determined is traditionally called the 'Monte Carlo mass'. On the other hand, one refers to pole- or MS-mass extractions whenever a measurement is compared with a fixed-order, possibly resummed, QCD calculation employing a given field-theory mass definition. The distinction between the Monte Carlo mass and well-posed mass definitions like the pole mass has been at the core of several discussions within the top-quark physics community: some authors have tried to quantify the discrepancy between such masses, finding results of the order of a few hundred MeV [7,8,9,10,11], while others have presented arguments against the classification of some measurements as Monte Carlo mass determinations [12,13] and try to interpret them still as pole-mass extractions, with an uncertainty which depends on the specific measurement strategy and on the details of the event generation. Furthermore, as will be discussed later, even the so-called pole- or MS-mass determinations are not completely Monte Carlo independent, since the evaluation of the experimental acceptance depends, though quite mildly, on the shower code which is employed and on the implemented mass parameter.
Another issue that has often been used to argue against the use of the pole mass is the infrared renormalon ambiguity [14,15], namely the factorial growth of the coefficients of the expansion of the heavy-quark self energy in powers of the strong coupling, whenever it is expressed in terms of the pole mass. However, recent work on this topic [16,17] showed that, using the four-loop relation between the pole and (renormalon-free) MS masses [18], the renormalon ambiguity is actually at most of the order of 250 MeV, hence smaller than the current error on the top mass. Furthermore, although the projections for the future high-energy and high-luminosity runs of the LHC aim at even lower uncertainties, it should always be borne in mind that the top quark is an unstable particle with a width of the order of 1 GeV which, as long as it is included in the computation, acts as a cutoff for radiation off top quarks. 2 In the following, I shall give an overview of the up-to-date top-mass determinations and, above all, I will try to stress the main points of the existing controversies concerning mass definitions and the interpretation of the LHC measurements, as well as the sources of theory uncertainty. In Section 2 I shall review the heavy-quark mass definitions; in Section 3 I will discuss the renormalon ambiguity; in Section 4 the main strategies to measure the top mass will be presented. The interpretation of the measurements and the theoretical uncertainties will be investigated in Section 5, while Section 6 will contain some final remarks.

TOP-QUARK MASS DEFINITIONS
Heavy-quark mass definitions are related to how one subtracts the ultraviolet divergences in the renormalized heavy-quark self energy Σ_R. Higher-order corrections to the self energy are typically calculated in dimensional regularization, with d = 4 − 2ǫ dimensions. At one loop in QCD, for a heavy quark with four-momentum p and bare mass m_0, the renormalized self energy reads:

Σ_R(m_0, p, µ) = (i α_S/4π) { [1/ǫ − γ + ln 4π + A(m_0, p, µ)] p̸ − 4 [1/ǫ − γ + ln 4π + B(m_0, p, µ)] m_0 } ,   (1)

where Z_2 and Z_m are the wave-function and mass renormalization constants, respectively, γ = 0.577216 . . . is the Euler-Mascheroni constant and µ is the renormalization scale. 3 The functions A and B in Eq. (1) depend on p, m_0 and µ and are independent of ǫ. The bare heavy-quark propagator is S_0(p) = i/(p̸ − m_0), while the renormalized propagator is expressed in terms of the renormalized self energy as S_R(p) = i/[p̸ − m_0 − Σ_R(p)].
The on-shell renormalization scheme, leading to the pole mass, is defined in such a way that the self energy and its partial derivative with respect to p̸ vanish on the mass shell, i.e. at p̸ = m_pole. The minimal-subtraction (MS) scheme is instead typical of dimensional regularization and fixes Z_2 and Z_m so as to subtract just the contributions ∼ 1/ǫ − γ + ln 4π in Eq. (1). 4 Since the pole and MS masses are the most popular top-mass schemes, hereafter I will devote some discussion to these definitions. In the on-shell (o.s.) and MS schemes, S_R(p) can be expressed in terms of the pole and MS masses, respectively; in particular, near the mass shell the on-shell propagator behaves as

S_R(p) ≃ i/(p̸ − m_pole).   (4)

From Eq. (4), one learns that m_pole is still the pole of the propagator, even after the renormalization procedure, in agreement with the intuitive notion of the mass of a free particle, whereas m_MS(µ) may be quite far from the pole. Also, unlike the pole mass, the MS mass depends on the renormalization scale µ. The relation between the top-quark pole (m_t,pole) and MS (m_t(m_t)) masses was calculated up to four loops in [18] and reads numerically:

m_t,pole = m_t(m_t) [1 + 0.4244 α_S + 0.8345 α_S² + 2.375 α_S³ + (8.49 ± 0.25) α_S⁴ + O(α_S⁵)] ,   (5)

with α_S evaluated at the scale m_t(m_t). The last term in (5) yields an uncertainty of about 200 MeV on the pole-MS conversion. Beyond four loops, one can find in Ref. [20] the dependence of the five- and six-loop corrections to the pole-MS relation on the number of light flavours. 3 In d dimensions, the coupling g_S, related to α_S via α_S = g_S²/(4π), acquires mass dimension ǫ, i.e. g_S → g_S µ_r^ǫ, µ_r being a regularization scale. After adding suitable counter-terms, Σ_R is eventually expressed in terms of the renormalization scale µ. 4 As an alternative to working in d dimensions, one can use a mass regularization scheme, giving the gluon a fictitious mass λ. The renormalized self energy with a gluon mass λ can be obtained from (1) by means of the replacement 1/ǫ − γ + ln[(4πµ²)/m_0²] → ln(λ²/m_0²).
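As a quick numerical illustration of the four-loop pole-MS conversion discussed above, the sketch below sums the series with the coefficients of Ref. [18] for an assumed value of the strong coupling; the input values of m_t(m_t) and α_S are illustrative choices, not quantities taken from this review.

```python
# Numerical sketch of the four-loop pole-MS conversion of Eq. (5).
# Series coefficients from Ref. [18]; the inputs below are assumptions:
mbar = 163.6       # MS top mass m_t(m_t) in GeV (illustrative)
alpha_s = 0.108    # alpha_S(m_t(m_t)) (assumed value)
coeffs = [0.4244, 0.8345, 2.375, 8.49]   # one- to four-loop coefficients

# contribution of each perturbative order, in GeV
terms = [mbar * c * alpha_s ** (n + 1) for n, c in enumerate(coeffs)]
m_pole = mbar + sum(terms)

print([round(t, 3) for t in terms])   # order-by-order shifts in GeV
print(round(m_pole, 2))               # resulting pole mass in GeV
```

The four-loop term comes out close to 0.2 GeV, consistently with the quoted 200 MeV uncertainty on the conversion.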
As discussed in the introduction, higher-order corrections to the self energy, when expressed in terms of the pole mass, lead to infrared renormalons [14], namely the factorial growth of the coefficients of α_S^n: recent calculations on renormalons will be discussed in the next section. For the time being, I just point out that the MS mass is renormalon-free and is therefore a so-called short-distance mass, well defined in the infrared regime. However, differently from the pole mass, it is not a suitable mass definition at threshold, as it exhibits corrections (α_S/v)^k, v being the top velocity, which are large in the threshold limit v → 0. On the contrary, the MS mass is appropriate to describe processes far from threshold, i.e. at scales Q ≫ m_t for top quarks, since, by setting the renormalization scale µ ≃ Q, one is capable of resumming large logarithms ln(Q²/m_t²) in the mass definition itself. As will be highlighted in the next section, Eq. (5), relating the pole mass to the renormalon-free MS one, can be used as a starting point to evaluate the renormalon ambiguity in the top pole mass.
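The µ-dependence of the MS mass mentioned above can be sketched with leading-order running: at one loop the mass evolves as m(µ) = m(µ_0) [α_S(µ)/α_S(µ_0)]^{12/23} for five flavours. The snippet below is a minimal LO illustration; the value of Λ_QCD and the input mass are assumptions, and higher-order corrections are ignored.

```python
import math

# Leading-order running of the MS mass, as a sketch of its scale dependence.
# One-loop alpha_S and mass anomalous dimension; Lambda (5 flavours) assumed.
NF = 5
B0 = (33 - 2 * NF) / (12 * math.pi)   # one-loop beta-function coefficient
LAM = 0.21                            # Lambda_QCD in GeV (assumed value)

def alpha_s(mu):
    """One-loop running coupling alpha_S(mu)."""
    return 1.0 / (B0 * math.log(mu ** 2 / LAM ** 2))

def m_run(m0, mu0, mu):
    """LO mass evolution: m(mu) = m(mu0) [alpha_S(mu)/alpha_S(mu0)]^(12/23) for NF = 5."""
    return m0 * (alpha_s(mu) / alpha_s(mu0)) ** (12.0 / (33 - 2 * NF))

mbar = 163.6                          # m_t(m_t) in GeV (illustrative)
m_1tev = m_run(mbar, mbar, 1000.0)
print(round(m_1tev, 1))               # MS mass evolved to mu = 1 TeV
```

The running mass decreases with the scale, reflecting the resummation of the logarithms ln(Q²/m_t²) into the mass definition.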
Another mass definition, which has been employed especially in the framework of Soft Collinear Effective Theory (SCET), is the so-called MSR mass, introduced to interpolate between the pole and MS masses [7]. Such a mass, labelled m_t^MSR(R, µ) for top quarks, depends, besides the renormalization scale µ, on an extra scale R, in such a way that it reduces to the pole mass for R → 0 and matches the MS mass for R = m_t(m_t). The MSR mass can be related to any other mass definition, such as the pole mass, by means of a suitable counterterm, the µ-dependence of m^MSR(R, µ) following renormalization group equations. As will be argued in the following, the MSR mass has often been adopted in the literature to connect the top-mass measurements with well-defined top-mass definitions, with R ∼ O(1 GeV).
For the sake of generality, although the present review is mostly devoted to hadron-collider top-mass determinations, I wish to recall some other top-mass definitions which are often employed in analyses of the m_t extraction at future lepton colliders. In fact, physical observables at threshold, such as the tt cross section in e+e− collisions at √s ≃ 2m_t, require suitable mass schemes. One such definition is the 1S mass, defined as half the mass of a fictitious Υ(1S) resonance, made up of a bound tt state [21]. The 1S mass can be expressed in terms of the pole mass as an expansion in ∆^LL, ∆^NLL and ∆^NNLL corrections [22]: the explicit expression of the ∆ terms can be found in [22], where the threshold e+e− → tt cross section was computed in the next-to-next-to-leading logarithmic approximation, and the superscripts LL, NLL and NNLL refer to the resummation of logarithms of the top velocity v, which are large in the regime v ∼ α_S ≪ 1 and α_S ln v ∼ 1.
The potential-subtracted (PS) mass is instead constructed in terms of the tt Coulomb potential, in such a way that contributions below a factorization scale µ_F are subtracted off, so as to suppress renormalons [23]. In Eq. (10), Ṽ(q) is the momentum-space transform of the tt Coulomb potential. The PS mass is a threshold mass too, particularly suitable to deal with tt production at energies slightly above 2m_t; the relation between the PS and pole top-quark masses is given in [24]. More recently, the theoretical error on the possible extraction of the 1S and PS masses in e+e− collisions just above the tt threshold was estimated. In detail, by using a NNLL threshold resummation of the ratio R = σ(e+e− → tt)/σ(e+e− → µ+µ−), the 1S mass can be extracted with an uncertainty of about 40 MeV [25], whereas, by employing a fixed-order NNNLO calculation, the PS mass can be determined with an error below 50 MeV [26]. It would of course be desirable to combine such fixed-order and resummed computations, so as to possibly decrease this uncertainty further.
Another threshold mass definition is the renormalon-subtracted (RS) mass, which removes from the pole mass the pure renormalon contribution [27]. The RS mass was determined in [27] after constructing its Borel transform; the expressions for the coefficients N_m and c_k entering its relation to the pole mass are given in [27], and b can be expressed in terms of the QCD β-function as b = β_1/(2β_0²). Potential-subtracted, renormalon-subtracted and 1S top-quark masses were related to the MS mass in Ref. [18] with four-loop accuracy in the conversion. The uncertainty in the conversion was gauged to be about 7, 11 and 23 MeV for the PS, RS and 1S masses, respectively.
Finally, the so-called kinetic mass was defined in [28] for the purpose of improving the convergence of the perturbative expansion of the semileptonic B-meson decay width. It was constructed by subtracting from the pole mass the HQET (Heavy Quark Effective Theory) matrix elements, denoted by Λ̄(µ) in [28], which express the shift between pole and meson masses. The kinetic bottom-quark mass is thus obtained up to terms suppressed as the inverse of the quark/meson mass; in [24], the kinetic mass was generalized to tt bound states, obtaining an expansion in terms of the pole mass. As underlined before, the 1S, PS and RS masses are threshold masses which, unlike the pole mass, do not exhibit the renormalon ambiguity. Recent calculations aimed at estimating the renormalon uncertainty in the pole mass will be the topic of the next section.

THE RENORMALON AMBIGUITY IN THE TOP MASS
Problems with the renormalized heavy-quark self energy, when expressed in terms of the pole mass, were first understood in [14,15]. In fact, after including higher-order contributions in the strong coupling constant, the renormalized heavy-quark self energy exhibits the following expansion in powers of α_S, schematically:

Σ_R ∼ m_pole Σ_n α_S^{n+1} (2b_0)^n n! ,   (15)

where b_0 is the first β-function coefficient entering the MS strong coupling constant. 5 From Eq. (15), one learns that the coefficients of the expansion grow like n! at order α_S^{n+1}.
After re-expressing α_S in terms of the β function and of the QCD scale Λ, and inserting Σ_R into the on-shell propagator (4), one gets a correction to the pole mass of order δm_pole ∼ Λ: this is the renowned renormalon ambiguity in m_pole, i.e. an uncertainty of the order of the QCD scale in the pole-mass definition. This result can be related to the fact that a quark is not a free parton, but has to be confined into a hadron: in fact, one can prove that the renormalon uncertainty is due to the gluon self-coupling, while it is not present when dealing with leptons. Therefore, the pole mass behaves like a physical mass for electrons or muons, whereas for heavy quarks it is not a short-distance mass, because of infrared renormalon effects, and one should choose on a case-by-case basis whether the pole mass or other definitions are adequate to describe a given physical process.
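The factorial behaviour of Eq. (15) can be made concrete with a toy series: its terms first decrease, reach a minimum around n ∼ 1/(2b_0 α_S) and then diverge. The sketch below uses assumed values of the coupling and of the number of flavours; the size of the minimal term, in units of the heavy-quark mass, is indeed of the order of Λ/m.

```python
import math

# Toy illustration of the renormalon series ~ alpha_S^(n+1) (2 b0)^n n!:
# terms shrink, reach a minimum near n ~ 1/(2 b0 alpha_S), then explode.
# Inputs (n_f = 5, alpha_S = 0.11) are illustrative assumptions.
NF = 5
B0 = (33 - 2 * NF) / (12 * math.pi)
ALPHA = 0.11

terms = [ALPHA * (2 * B0 * ALPHA) ** n * math.factorial(n) for n in range(26)]
n_min = terms.index(min(terms))   # order of the minimal (best-truncation) term

print(n_min)
print(min(terms))   # minimal term, in units of the heavy-quark mass
```

Truncating at the minimal term is the best one can do with such an asymptotic series; the leftover, of order e^{-1/(2b_0 α_S)} ∼ Λ/m, is the intrinsic ambiguity.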
In order to quantify the renormalon ambiguity in the pole mass, one can employ the relation between the pole and MS masses, relying on the fact that the MS mass is unaffected by renormalons. Equation (5) can be parametrized to all orders as in Ref. [16], i.e. as a series in powers of α_S(µ) with coefficients c_n, m̄(µ_m) being the MS mass at some scale µ_m and µ the renormalization scale at which the strong coupling is evaluated. The dominant renormalon divergence implies that the coefficients c_n in the asymptotic expansion have to satisfy a definite relation at large n: the expression for the asymptotic coefficients c_n^as can be found in [16] and is consistent with the fact that the renormalon factorial growth is due to the low-momentum region in the higher-order loop corrections to the heavy-quark self energy. The calculation of the normalization coefficient N is non-trivial: in [16], N was extracted by fitting the third- and fourth-order coefficients in the exact four-loop MS-pole mass conversion and amounts to N ≃ 0.976 for N_C = 3 colours.
Furthermore, an alternative and possibly better-suited method to deal with factorially divergent series consists in using the Borel transform which, for a series f(α_S) = Σ_{n≥0} f_n α_S^{n+1}, reads:

B[f](t) = Σ_{n≥0} f_n t^n/n! ,   (19)

which implies

f(α_S) = ∫_0^∞ dt e^{−t/α_S} B[f](t).   (20)

The evaluation of the Borel integral (20) depends on a prescription: one typically takes its principal value and, following the so-called 'Im/Pi' method, estimates the uncertainty as the modulus of the imaginary part, arising from the integration above and below the singular cuts in the complex plane, divided by π. In fact, in Ref. [16], the asymptotic expansion of the pole mass with respect to the MS one was computed as an inverse Borel transform, by using the Im/Pi method for the error, considering only three light flavours and accounting for the charm and bottom masses. The final result is that the leading renormalon ambiguity amounts to about 110 MeV for the top, as well as for the bottom and charm pole masses.
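The Borel prescription above can be tried out on the textbook divergent series f(a) = Σ_n n! a^{n+1}, whose Borel transform is 1/(1−t): the pole at t = 1 makes the integral ambiguous. The sketch below evaluates the principal value numerically and compares it with the optimally truncated sum and with the Im/Pi ambiguity e^{−1/a}; the coupling value a = 0.2 is an arbitrary assumption.

```python
import math

# Borel summation of the toy series f(a) = sum_n n! a^(n+1), with
# B[f](t) = 1/(1-t): principal value, optimal truncation and Im/Pi ambiguity.
A = 0.2   # assumed 'coupling'

def borel_pv(a, h=1e-3, tmax=30.0):
    """Principal value of int_0^inf dt exp(-t/a)/(1-t), pairing midpoints
    symmetrically around the pole at t = 1 so the singularity cancels."""
    s, tau = 0.0, h / 2
    while tau < 1.0:   # pairs (1 - tau, 1 + tau) cover [0, 2]
        s += (math.exp(-(1 - tau) / a) - math.exp(-(1 + tau) / a)) / tau * h
        tau += h
    t = 2.0 + h / 2    # smooth remainder on [2, tmax]
    while t < tmax:
        s += math.exp(-t / a) / (1 - t) * h
        t += h
    return s

pv = borel_pv(A)
ambiguity = math.exp(-1.0 / A)    # Im/Pi estimate: |Im|/pi = exp(-1/a)
n_opt = int(round(1.0 / A))       # optimal truncation order
truncated = sum(math.factorial(n) * A ** (n + 1) for n in range(n_opt))
print(round(pv, 4), round(truncated, 4), round(ambiguity, 4))
```

The principal value and the optimally truncated sum agree within the Im/Pi ambiguity, illustrating why different prescriptions for the same series differ by terms of that size.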
A different strategy to gauge the renormalon ambiguity was instead pursued in [17], where the MSR mass m^MSR(R) was used. In the relation between m_pole and m^MSR(R), the scale R is set to the MS top mass m_t(m_t) and the series (21) is truncated at some fixed order n. A value n_min is determined so as to minimize the difference ∆(n) = m_pole(n) − m_pole(n − 1), and a number f slightly above unity is defined. The set {n}_f is then constructed in such a way that ∆(n) ≤ f ∆(n_min): the midpoint of m_pole(n) within {n}_f is chosen as the central value and half of the variation range of m_pole(n) as an estimate of the ambiguity, accounting for the running of the renormalization scale as well. After observing that the results depend on f rather mildly, in [17] f = 5/4 was chosen, yielding an ambiguity of about 253 MeV in the pole mass. Both Ref. [16] and Ref. [17] devote a thorough discussion to the inclusion of the charm and bottom masses: the results of 110 and 253 MeV would go down to 70 [16] and 180 [17] MeV if charm and bottom quarks were treated as massless. Some attempts to relate the different methods adopted in [16] and [17] were made in [13]: in fact, the result in [16] can be obtained even following the method in [17], but taking as central value half the sum of all ∆(n) and setting f = 1 + 1/(4π) in the uncertainty evaluation.
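The truncation-window prescription just described can be sketched on a toy factorial series (not the actual MSR-pole relation of [17]): compute the partial sums m(n) and differences ∆(n), find n_min, build the window {n: ∆(n) ≤ f ∆(n_min)} with f = 5/4, and take the midpoint and half-range over the window. All numerical inputs below are illustrative assumptions.

```python
import math

# Minimal sketch of the prescription of Ref. [17], on a toy factorial series.
NF = 5
B0 = (33 - 2 * NF) / (12 * math.pi)
ALPHA = 0.11          # assumed coupling
F = 5.0 / 4.0         # window parameter f, as in [17]

delta = [ALPHA * (2 * B0 * ALPHA) ** n * math.factorial(n) for n in range(16)]
partial = [sum(delta[: n + 1]) for n in range(16)]   # partial sums m(n)

n_min = min(range(1, 16), key=lambda n: delta[n])    # order minimizing Delta(n)
window = [n for n in range(1, 16) if delta[n] <= F * delta[n_min]]
m_window = [partial[n] for n in window]
central = 0.5 * (max(m_window) + min(m_window))      # midpoint of m(n) in window
ambiguity = 0.5 * (max(m_window) - min(m_window))    # half of the variation range
print(n_min, window)
print(round(central, 5), round(ambiguity, 6))
```

With these inputs the window collects a few orders around the minimal term, and the half-range plays the role of the quoted renormalon ambiguity.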
In the following, no strong statement supporting the calculation in [16] or [17] will be made. I just wish to point out that, on the one hand, as long as the uncertainties on the top-mass measurements stay around 500 MeV, both renormalon determinations are smaller and should not play any role in supporting the use of a given mass definition. This may no longer be the case if, in a future perspective, one ideally aims at precisions of about 200-300 MeV. However, as will be underlined when dealing with Monte Carlo modelling and theoretical errors, recent implementations of top production and decay in shower codes include width effects [29], in such a way that the top width, about 1.4 GeV and well above both renormalon estimates, acts as a cutoff for radiation off top quarks. 6 Of course, if one considers observables relying on top decays (t → bW), the b quark is allowed to emit soft radiation down to the shower cutoff and, in principle, in quantities depending on b-jets one may have to deal with renormalons.
A careful exploration of renormalon effects in observables depending on the top mass was carried out in Ref. [30]. The authors found that the MS mass is a better definition for quantities like the total tt cross section, while using the pole mass would lead to a linear renormalon and an ambiguity of O(100 MeV) on the m_t extraction. Indications in favour of such a short-distance mass were also given whenever final-state jets are reconstructed using algorithms with a large jet radius R. As for the top mass reconstructed from, e.g., the b-jet+W invariant mass, in the pole-mass scheme a linear renormalon correction is present, whose coefficient is nevertheless rather small if one employs a large R in the b-jet definition. Finally, leptonic observables exhibit a linear renormalon with both mass definitions, as long as one works in the narrow-width approximation. On the contrary, there are no linear renormalons if one adopts a short-distance mass and includes the finite top width.

TOP-QUARK MASS EXTRACTION AT LHC
Top-quark mass determinations at hadron colliders are classified as standard or alternative measurements and, according to the decay modes of the two W bosons in top decays, as measurements in the dilepton, lepton+jets or all-hadronic channels. Standard top-mass analyses are based on the direct reconstruction of the top-decay final states and compare observables, such as the b-jet+lepton invariant-mass distribution, with the predictions yielded by Monte Carlo codes. So-called alternative measurements use instead other observables, such as total/differential cross sections or distribution peaks/endpoints. Since, as will be detailed in the following, Monte Carlo codes are of paramount importance for most top-mass analyses, I shall first sketch their main features, and then review the experimental methods to extract m_t.

Monte Carlo generators for top physics
The last two decades have seen tremendous progress in the development of Monte Carlo event generators, beyond the renowned general-purpose HERWIG [31,32] and PYTHIA [33,34], so that several reliable programs are currently available for top-mass analyses. On the one hand, strategies to match NLO calculations with parton showers were developed; on the other, a number of so-called matrix-element generators were released. Matrix-element generators simulate multi-leg amplitudes and are interfaced to HERWIG or PYTHIA for shower and hadronization: besides top-quark signals, they are very useful to simulate backgrounds with high jet multiplicities, such as W/Z + n jets, which would be poorly described by HERWIG or PYTHIA for n > 1.
Regarding top phenomenology, standard Monte Carlo programs like HERWIG and PYTHIA simulate both top production and decay using leading-order (LO) matrix elements and multi-parton emission in the soft or collinear limit, while the interference between the top-production and decay stages is neglected (narrow-width approximation). HERWIG parton showers satisfy angular ordering [35,36], with the latest version even allowing the option of dipole-like evolution [37]; PYTHIA cascades are instead ordered in transverse momentum. 7 Matrix-element corrections to parton showers are implemented for top decays [38,39], but not for production, and the total production cross section and top-decay width are still calculated at LO. Hadronization is included by adopting the cluster model [40], based on colour pre-confinement, in HERWIG, and the string model [41] in PYTHIA. The underlying event used to be described by assuming soft collisions between the proton spectators and tuning the model parameters to minimum-bias events at small transverse momentum.
Nevertheless, all modern codes implement it through multiple scatterings strongly ordered in transverse momentum: the underlying event is thus a secondary collision, with transverse momentum much lower than that of the primary hard scattering [42,43].
Among the new generation of Monte Carlo programs, SHERPA [44] can also be considered a multipurpose code, in the light of the wide spectrum of processes which it is capable of simulating. In detail, matrix elements are computed by means of the AMEGIC++ [45] and COMIX [46] codes, while the interface to one-loop generators, implemented along the lines of [47], allows one to include NLO QCD and possibly electroweak corrections. Parton showers are then accounted for according to the dipole formalism developed in [48], underlying event and hadronization follow the multiple-scattering and cluster models in PYTHIA and HERWIG, respectively.
As for the matching of NLO matrix elements with multi-parton cascades, NLO+shower programs, such as MadGraph5_aMC@NLO [49,50] and POWHEG [51], implement NLO hard-scattering amplitudes, but still depend on HERWIG and PYTHIA for parton cascades and non-perturbative phenomena. The earlier versions of such NLO+shower algorithms only included NLO corrections to tt production, while (LO) top decays and hadronization were still handled in the parton-shower approximation. The later POWHEG implementation [29], the bb4ℓ code, includes both top production and decay at NLO, accounting for the interference between the top-production and decay stages, as well as for non-resonant contributions leading to (W⁺b)(W⁻b̄) final states. 8 As for MadGraph5_aMC@NLO, strictly speaking, top decays are still at LO; however, spin correlations are included through the MadSpin package [53] and, as discussed in [54], they account for a significant part of the NLO corrections. HERWIG, in turn, has its own implementation of NLO+shower merging/matching [55,56], working for top-quark production and decay in the narrow-width approximation [57].
Regarding matrix-element generators, suitable codes to describe top-quark signals and backgrounds are, among others, ALPGEN [58], MCFM [59], CalcHEP [60], HELAC [61] and WHIZARD [62]. In particular, ALPGEN and CalcHEP simulate multi-parton final states at LO and can be interfaced to HERWIG or PYTHIA for shower and hadronization. HELAC and WHIZARD have lately been provided with NLO corrections [63,64] and with matching to shower and hadronization codes as well. MCFM is an NLO parton-level Monte Carlo code: top production and decay are handled at NLO, in the narrow-width approximation.
Before concluding this subsection, it is worthwhile saying a few words about the precision of the predictions yielded by Monte Carlo codes. As observed before, parton showers simulate multiple radiation in the soft or collinear approximation and, in general, the accuracy of a prediction depends on the specific observable under investigation. Although total cross sections and widths are (N)LO, for most quantities Monte Carlo predictions are equivalent to leading-logarithmic resummations, i.e. they resum double (soft and collinear) logarithms and include some classes of subleading, i.e. only soft- or collinear-enhanced, logarithms. 9 Reference [66] even proved that, in Deep Inelastic Scattering and Drell-Yan processes at large values of the Bjorken x, the HERWIG algorithm is capable of capturing all next-to-leading logarithms, i.e. all single logarithms, enhanced for soft or collinear emission, as long as one rescales the QCD scale Λ to a Monte Carlo value, labelled Λ_MC. 10
8 See also Ref. [52] for an independent investigation of NLO and top-width effects on the top-mass determination.
9 A notable exception is given by leading non-global logarithms, sensitive to a limited portion of the phase space, which, as discussed in [65], are partially accounted for by the angular-ordered showers of HERWIG, while they are mostly absent in virtuality- or transverse-momentum-ordered PYTHIA.
10 With respect to Λ in the MS scheme, it is Λ_MC = Λ exp[K/(4πb_0)], with K = N_C(67/18 − π²/6) − 5N_f/9, N_C and N_f being the numbers of colours and active flavours, respectively.
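The rescaling of footnote 10 can be evaluated directly; the short sketch below plugs in N_C = 3 and N_f = 5, the standard choices for top physics at LHC energies.

```python
import math

# Evaluating Lambda_MC / Lambda = exp[K/(4 pi b0)] from footnote 10, with
# K = N_C (67/18 - pi^2/6) - 5 N_f / 9, for N_C = 3 colours and N_f = 5 flavours.
NC, NF = 3, 5
K = NC * (67.0 / 18.0 - math.pi ** 2 / 6.0) - 5.0 * NF / 9.0
B0 = (11 * NC - 2 * NF) / (12 * math.pi)   # first beta-function coefficient
ratio = math.exp(K / (4 * math.pi * B0))
print(round(ratio, 3))   # Lambda_MC in units of the MS Lambda
```

The Monte Carlo scale comes out roughly 1.57 times the MS Λ for five active flavours.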

Standard and alternative top-mass measurements
In this subsection I shall briefly present the main strategies to measure the top mass at hadron colliders in tt events, with particular attention to the analyses carried out at the LHC.

Direct reconstruction methods
Strategies based on the direct reconstruction of the top-decay products, namely the template, matrix-element and ideogram methods, have been traditionally classified as standard top-mass determinations. As for ATLAS, the most up-to-date measurements are given at 8 TeV and 19.7 fb⁻¹ in Refs. [67,68,69] for the dilepton, lepton+jets and all-hadronic modes, respectively. Regarding CMS, at the moment even results at √s = 13 TeV and L = 35.9 fb⁻¹ are available and are reported in [70] (dileptons), [71] (lepton+jets) and [72] (all hadrons). In these analyses, summing systematic and statistical errors in quadrature, CMS quotes uncertainties of about 0.73 GeV for dileptons, 0.62 GeV for lepton+jets and 0.61 GeV for the all-hadron channel. As for ATLAS, the uncertainties are 0.84 GeV (dileptons), 0.91 GeV (lepton+jets) and 0.73 GeV (all jets). The standard top-mass measurements have been the basis to determine the world average [6], already presented in the introduction, which, after summing statistical and systematic errors in quadrature, yields an overall uncertainty of about 800 MeV. Work towards an updated world average is currently under way. The LHC collaborations have nevertheless released their own combined measurements using 7 and 8 TeV data together: details on such studies can be found in Refs. [68,73] for ATLAS and CMS, respectively. Both analyses yield a total error of about 0.5 GeV, hence an overall precision on the top mass of around 0.3%. Figure 1 summarizes the state of the art of the top-mass measurements carried out at the LHC, including the world average, as well as the ATLAS and CMS combinations.
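As a quick check of the figure quoted above, the statistical and systematic components of the world average from the introduction can be combined in quadrature:

```python
import math

# Quadrature combination of the world-average components quoted in the
# introduction: m_t = [173.34 +- 0.27 (stat) +- 0.71 (syst)] GeV.
stat, syst = 0.27, 0.71
total = math.hypot(stat, syst)   # sqrt(stat^2 + syst^2)
print(round(total, 2))           # overall uncertainty in GeV
```

The result, about 0.76 GeV, matches the ~800 MeV overall uncertainty mentioned in the text.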
As discussed in the introduction, since the standard m_t-reconstruction methods rely on the use of Monte Carlo generators, such measurements are usually quoted as 'Monte Carlo mass' determinations, and much debate has been taking place on whether the extracted mass can be related to any well-posed definition, with some calculable uncertainty, such as the pole mass. The ongoing discussion on the theoretical interpretation of the measured top mass will be the main topic of the next section. Before moving to this issue, it is worthwhile reviewing the so-called 'alternative' strategies, which make use of total/differential cross sections, endpoints or other kinematic properties of tt final states.

Total and differential tt cross section
The total tt cross section was calculated in QCD in the NNLO+NNLL approximation in Ref. [74] 11 and was used to determine m_t by ATLAS in Ref. [75] (7 and 8 TeV data) and by CMS in [76] (7 and 8 TeV) and [77] (13 TeV). Since the calculation in [74] employed the pole-mass definition, the results in Refs. [75,76,77] are quoted as pole-mass measurements. Although this is mostly correct, it should always be borne in mind that even those analyses are not completely independent of the shower generator, and therefore of its mass parameter, which is still used to evaluate the acceptance; nevertheless, it was proved that such a sensitivity is rather mild. Overall, the errors in [75,76,77] are larger than those in the standard methods, as they are about 2.5 GeV; however, they are expected to decrease thanks to the higher statistics foreseen in the future LHC runs. After the computation of the total cross section, even differential distributions were calculated at NNLO in [78], still using the top pole mass: this computation was used by the D0 Collaboration [79] to extract the top mass at the Tevatron, with an uncertainty competitive with those obtained at the LHC from the total production cross section. 11 At NNLO the tt cross section is O(α_S⁴), whereas the threshold logarithms which are resummed in [74] are ∼ α_S^n [ln^m(1 − z)/(1 − z)]_+, with z = m_t²/ŝ, ŝ being the squared partonic centre-of-mass energy and m ≤ 2n − 1.
Reference [80] explored the extraction of the top mass by using the NNLO total tt cross section and NLO differential distributions, such as transverse momentum, rapidity and tt invariant mass, expressed in terms of the pole and MS masses. Overall, Ref. [80] found that using the running mass yields a milder scale dependence of such observables; nevertheless, implementing the full NNLO differential cross section or the four-loop pole-MS mass conversion, along the lines of [78] and [18], respectively, will obviously be very useful to shed light on the scale dependence.
Still on the tt total cross section, it is worthwhile pointing out the recent work carried out in Ref. [81] to merge NNLO QCD and NLO electroweak corrections. Such a computation was then used to predict the top-quark charge asymmetry at the Tevatron and LHC, where the electroweak corrections exhibited a remarkable impact, say about 20%, on the forward-backward asymmetry. It will clearly be very interesting to determine the top pole mass from differential distributions, along the lines of [79], including electroweak contributions as well.

ttj cross section
The top mass was also extracted from the measurement of the tt+1 jet cross section, which has a stronger sensitivity to m t than the inclusive tt rate. In Ref. [82], the NLO ttj cross section was calculated using POWHEG, with its pole-mass implementation, matched to PYTHIA. Detector and shower/hadronization effects were unfolded in order to recover the pure NLO ttj cross section. From the experimental viewpoint, the approach proposed in [82] was followed in [83] by ATLAS (7 TeV and 5 fb −1 ) and by CMS in [84] (8 TeV and 19.7 fb −1 ). The error on m t extracted from the ttj cross section is slightly smaller than that from the inclusive tt one, but still well above that of the direct-reconstruction measurements. Such mass determinations are referred to as pole-mass measurements, since this is the mass definition employed by POWHEG, while the PYTHIA mass parameter used in the parton shower has only a mild effect in the determination of the acceptance. Reference [85] used the running MS top mass in the calculation of the NLO ttj rate and, after comparing with the cross-section measurements, obtained results which are, within the errors, in agreement with the pole mass yielded by the approach in [82].
Other so-called alternative methods to reconstruct m t rely on the kinematic properties of top-decay final states: since they are based on the comparison with Monte Carlo predictions, the measured m t has to be identified with the mass parameter in the shower code. Overall, such techniques yield uncertainties on the mass of the same order of magnitude as those based on the total cross section, i.e. about 1 GeV or above.

Peak of the b-jet energy spectrum
It was observed in [86] that, at LO, the peak of the b-jet energy spectrum in top decay is independent of the boost from the top to the laboratory frame, as well as of the production mechanism. The CMS Collaboration measured the top mass from the b-jet energy peak at 8 TeV and 19.7 fb −1 [87], using POWHEG and MadGraph to simulate top production and decay, and PYTHIA for parton shower, hadronization and underlying event. The resulting uncertainties are 1.17 GeV (statistics) and 2.66 GeV (systematics).
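At LO, in the narrow-width approximation, the peak position follows from two-body kinematics in the top rest frame, E_b = (m_t^2 + m_b^2 − m_W^2)/(2 m_t), which can be inverted analytically. A minimal sketch (the input masses are illustrative, and real analyses of course fold in shower, hadronization and detector effects):

```python
import math

M_W, M_B = 80.4, 4.8  # GeV; illustrative input masses

def eb_peak(mt, mw=M_W, mb=M_B):
    """LO b-quark energy in the top rest frame for t -> b W,
    which sets the peak position of the b-jet energy spectrum."""
    return (mt**2 + mb**2 - mw**2) / (2.0 * mt)

def mt_from_peak(eb, mw=M_W, mb=M_B):
    """Invert the LO relation, i.e. solve
    mt^2 - 2*eb*mt - (mw^2 - mb^2) = 0 for the positive root."""
    return eb + math.sqrt(eb**2 + mw**2 - mb**2)
```

For m t = 172.5 GeV this places the peak at about 67.6 GeV, and the boost invariance of the peak position is what makes the observable usable directly in the laboratory frame.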

m bℓ , m bℓν and stransverse mass m T 2
The b-jet+lepton invariant-mass (m bℓ ) spectrum was used by CMS to reconstruct m t in the dilepton channel in Ref. [88], at 8 TeV and 19.7 fb −1 . The data were compared with the MadGraph+PYTHIA simulation, yielding a measurement consistent with the world average and an uncertainty of about 1.3 GeV. In Ref. [88], for the sake of comparison, the NLO code MCFM was also used to predict the m bℓ distribution. More recently, in Ref. [89] CMS extracted m t also from the so-called stransverse mass m T 2 [90] and from m bℓν , which accounts for the neutrino missing transverse momentum as well. The sensitivity of these observables to m t yields an uncertainty of about 180 MeV (statistics) and 900 MeV (systematics).

Endpoint method
Another method to measure m t consists of using the endpoints of distributions sensitive to m t , namely the endpoints of m bℓ , µ bb and µ ℓℓ , where b is a b-flavoured jet, and µ bb and µ ℓℓ are generalizations of the bb and ℓ + ℓ − invariant masses in the dilepton channel, as described in Ref. [91] (CMS, 7 TeV and 5 fb −1 ). Since b-flavoured jets can be calibrated directly from data, the endpoint strategy is claimed to minimize the Monte Carlo error on m t , which is mostly due to colour reconnection, namely the formation of a B hadron by combining a b quark in t decay with an antiquark from the t̄ decay or from initial-state radiation. Constraining the neutrino and W masses to their world-average values, this method leads to uncertainties of about 900 MeV (statistics) and 2 GeV (systematics).
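For illustration, the LO endpoint of the m bℓ distribution, for massless b and lepton in the narrow-width approximation, is (m bℓ max)^2 = m_t^2 − m_W^2; the sketch below shows how such an endpoint maps onto the top mass (actual analyses fit the shape of the distribution near the endpoint rather than a single point):

```python
import math

def mbl_endpoint(mt, mw=80.4):
    """LO kinematic endpoint of the b-lepton invariant mass in
    t -> b W (-> l nu), with massless b and lepton:
    (m_bl^max)^2 = m_t^2 - m_W^2."""
    return math.sqrt(mt**2 - mw**2)

def mt_from_mbl_endpoint(mbl_max, mw=80.4):
    """Invert the endpoint relation to read off the top mass."""
    return math.sqrt(mbl_max**2 + mw**2)
```

For m t = 172.5 GeV the endpoint sits near 153 GeV; its sensitivity dm bℓ /dm t close to unity is what makes the endpoint a good mass analyser.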

Leptonic observables
Purely leptonic observables in the dilepton channel, such as the Mellin moments of lepton energies or transverse momenta, were proposed to measure m t , since in this way one can avoid the actual reconstruction of the top quarks [92]. However, this method still yields uncertainties due to hadronization, production mechanism, the Lorentz boost from the top to the laboratory frame, as well as missing higher-order corrections. Preliminary analyses have been carried out in [93] (CMS, based on LO MadGraph) and [94] (ATLAS, based on the MCFM NLO parton-level code [95]) using data at 8 TeV and 19.7 fb −1 and are expected to be improved by matching NLO amplitudes with shower/hadronization generators. For the time being, the uncertainties quoted in Ref. [93] are 1.1 GeV (statistics), 0.5 GeV (experimental systematics) and 2.5-3.1 GeV (theoretical systematics), whereas in Ref. [94] they read 0.9 GeV, 0.8 GeV and 1.2 GeV, respectively. Reference [93] also quotes an uncertainty of +0.8 −0.0 GeV due to the description of the top-quark transverse momentum. In fact, previous CMS analyses had displayed a mismodelling of the top p T simulated by MadGraph+PYTHIA, and therefore Ref. [93] reweighted the transverse momentum to match the measured one.
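The template logic behind such moment-based extractions can be sketched as follows: the chosen moment is evaluated at a few input masses, a calibration curve is fitted and then inverted on the measured value. Everything numerical below (the toy moments and their linear behaviour) is an illustrative assumption, not the result of any of the quoted analyses.

```python
def calibrate_linear(masses, moments):
    """Least-squares fit of moment = a + b * m_t over template masses."""
    n = len(masses)
    mx, my = sum(masses) / n, sum(moments) / n
    b = (sum((m - mx) * (o - my) for m, o in zip(masses, moments))
         / sum((m - mx) ** 2 for m in masses))
    return my - b * mx, b

def invert_calibration(a, b, measured):
    """Read off m_t from the measured moment via the calibration line."""
    return (measured - a) / b

# Toy templates: mean lepton pT (GeV) generated at three input masses.
a, b = calibrate_linear([169.5, 172.5, 175.5], [59.5, 60.4, 61.3])
```

The small slope b of such leptonic moments with respect to m t is the reason why, despite being theoretically clean, these observables currently yield larger uncertainties than direct reconstruction.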

J/ψ method
Final states with J/ψ mesons were exploited by the CMS Collaboration in Ref. [96] to measure m t , using data collected at 8 TeV with a luminosity of about 19.7 fb −1 . In this work, one explores t → bW processes where the b-flavoured hadron decays into states containing a J/ψ, the J/ψ decays into a µ + µ − pair and the W boson undergoes the leptonic transition W → ℓν. The top mass is then extracted by fitting the invariant-mass distributions m µµ or m J/ψℓ , as well as the transverse momentum of the J/ψ. The analysis was carried out by using the MadGraph code, interfaced with PYTHIA, while, for the sake of estimating the theoretical error, POWHEG and SHERPA were employed as well. Overall, the statistical uncertainty in the investigation [96] amounts to 3 GeV and the systematic error to 0.9 GeV. The conclusion of [96] is that, since the systematic uncertainties are of different origin from those entering the measurements based on direct reconstruction, and given the higher statistics which are foreseen, the J/ψ method should ultimately be worth including in the combination with the extractions from matrix-element or template strategies.

Final-state charged particles
A novel technique was presented by the CMS Collaboration in Ref. [97], where m t is measured by exploiting the kinematic properties of final-state charged particles. The observable used in this analysis is the mass m svℓ of the secondary vertex-lepton system, namely the invariant mass of the system made of the charged lepton from W decay and the charged hadrons in a jet originating from a common secondary vertex. Using only charged particles, in fact, reduces the overall acceptance uncertainty, whereas this method is obviously dependent on the modelling of top decays and bottom hadronization. The investigation was undertaken using MadGraph+PYTHIA to simulate the signal, and POWHEG and SHERPA to estimate the uncertainties due to the matrix-element generation and hadronization, respectively. The final error on the measurement of m t from charged particles is 200 MeV (statistics) and +1.58 −0.97 GeV (systematics), using data sets of 8 TeV collisions and a luminosity of 19.7 fb −1 .

Perspectives at high luminosity
The perspectives for the top-mass determination at the High Luminosity (HL) LHC, which will collide protons at 14 TeV and accumulate an integrated luminosity of 3000 fb −1 , were debated in Ref. [98]. In the report [98] the ATLAS Collaboration presented a projection for the accuracy on m t using samples of events in the lepton+jets mode and J/ψ → µ + µ − decays in the final state, along the lines of Ref. [99]. The expected statistical and systematic uncertainties amount to 0.14 and 0.48 GeV, respectively. As for CMS, the potential for the top-mass extraction at HL-LHC is detailed in [100] and summarized in Fig. 2: one can learn that all uncertainties will decrease dramatically at HL-LHC. In particular, one expects an error which ranges from about 0.2 GeV (0.1%) for direct reconstruction in the lepton+jets channel to 1.2 GeV (0.7%) from the total tt NNLO cross section. It is remarkable that the uncertainty from J/ψ final states will go down to about 0.6 GeV (0.35%).

INTERPRETATION OF THE TOP-MASS MEASUREMENTS AND THEORETICAL UNCERTAINTIES
The nature of the reconstructed top-quark mass and its possible relations with field-theory mass definitions has lately become the topic of a very lively debate (see, e.g., the reviews in [13,101,102]). I shall first overview the main issues concerning the m t interpretation and then discuss the dominant sources of theoretical uncertainty.

Measured mass and theoretical definitions
The discussion on the identification of the measured quantity is mostly based on the claim that Monte Carlo codes are LO, while well-posed field-theory mass definitions need at least a NLO computation. Although it is certainly true that, in standard codes, total cross sections are LO, event shapes and differential distributions go well beyond LO and account for a resummation of enhanced logarithms. NLO+shower codes like POWHEG and MC@NLO yield NLO total cross sections, adopting the top pole mass in the computation, while the differential spectra rely on the shower approximation and on the modelling of hadronization and underlying event. Nevertheless, it is indeed cumbersome to interpret the reconstructed top mass in terms of theoretical definitions or, in other words, to scrutinize all possible sources of uncertainty which may prevent such an identification. As far as this controversy is concerned, one can basically follow two mainstream viewpoints.
On the one hand, there are authors [7,8,9,10,11] who claim that the measured quantity cannot be directly associated with any field-theory mass definition and that one must therefore stick to the notion of Monte Carlo mass. Along this line, much work has been undertaken in order to relate the Monte Carlo mass to definitions like the pole mass: the quoted discrepancies between Monte Carlo and pole masses have through the years ranged from a few hundred MeV to, in the most extreme case, almost 1 GeV. If this were indeed the case, it would be an uncertainty comparable to or even larger than the current errors on the directly reconstructed top mass. On the other hand, there are authors [12,13] who instead argue against the use of the Monte Carlo mass and claim that, under given circumstances, the reconstructed mass should actually mimic the pole mass. According to this viewpoint, instead of constructing other mass definitions to properly interpret the measurements, the effort should rather be devoted to carefully estimating the theoretical uncertainties, of both perturbative and non-perturbative nature, in the identification of the measured quantity with the pole mass. In the following, I will briefly review the work carried out in this respect.
As far as I know, the pioneering work on relating the measured mass to the pole mass was carried out in Refs. [7,8]. First, Ref. [7] defined, for the case study of e + e − → tt collisions, the SCET (MSR-like) short-distance jet mass m J (µ), associated with the collinear jet function and corresponding to the MSR mass at a scale of the order of the top width, i.e. R = Γ t . The jet mass m J (µ) was then perturbatively related to the pole mass: setting, e.g., µ ≃ 1 GeV, the jet mass differs from the pole mass by about 200 MeV at O(α S ).
It is also remarkable that the correction is of order O(α S Γ t ), which confirms the intuition that the top width plays a role in the uncertainty on the measured mass. More recently, Ref. [9] compared PYTHIA with a SCET computation in the NLO approximation, resumming soft- and collinear-enhanced contributions to NLL or even NNLL accuracy. As in [8], the SCET resummed calculation employed the MSR mass m MSR t (R), with R ∼ Γ t and m MSR t (R) → m t,pole for R → 0. The PYTHIA mass parameter was then calibrated to reproduce the SCET prediction for the 2-jettiness τ 2 , after running the code for several centre-of-mass energies and a few values of the top mass. The result of Ref. [9] is that the PYTHIA mass is consistent, within the errors, with the MSR mass evaluated at a scale of 1 GeV. Using instead the pole mass in the computation yields a shift with respect to the PYTHIA m t of about 600-900 MeV, according to whether the Monte Carlo results are compared with a NLL or NNLL resummation. The work in [9] was extended to pp collisions in Ref. [10], where the extraction of m t from boosted top jets with light soft-drop grooming was proposed. 12 By comparing the NLL resummation for the groomed top-jet mass with PYTHIA, the pole mass was found to lie about 400-700 MeV below the calibrated Monte Carlo mass, depending on the energy of the pp collision and on non-perturbative parameters contained in the resummation. Still on this subject, Ref. [11] explores the dependence of m t on the parton-shower cutoff, referring to the HERWIG 7 angular-ordered cascade. Working in the quasi-collinear limit, with boosted massive quarks in the NLL approximation, the authors of [11] stated that the mass parameter in a Monte Carlo code should be identified with a cutoff-dependent, coherent-branching (CB) mass, labelled as m CB t (Q 0 ).
Such a coherent-branching mass is a low-scale short-distance mass, free from renormalon corrections, related to the pole mass by a relation of the form m t,pole = m CB t (Q 0 ) + c α S (Q 0 ) Q 0 + . . . , with c a computable coefficient (Eq. (23)). Expressing α S in Eq. (23) in terms of the Monte Carlo QCD scale Λ MC defined in [66] and setting Q 0 = 1.25 GeV, like the shower cutoff of HERWIG 7, the shift between pole and CB masses amounts to about 500 MeV. Using instead the standard MS scheme for α S yields a discrepancy of the order of 300 MeV. Concerning the calibration of the Monte Carlo mass parameter, another approach was suggested in [104]: one measures an observable, e.g. a total or differential cross section, ignoring any detail of the event generation, and, by comparing the data with the simulation, calibrates both the observable and m t . The finding of Ref. [104] is that, given the current precision on the inclusive tt rate, the uncertainty on this calibration is roughly 2 GeV.
As anticipated above, other authors, such as [12,13], claim that it is not really necessary to introduce the Monte Carlo mass concept to interpret measurements relying on final-state direct reconstruction. The starting point is the observation that, in the narrow-width approximation and assuming that one is able to catch all final-state radiation, the invariant mass of top-decay products in t → bW X, X being some extra radiation off top and bottom quarks, should mimic the on-shell top mass, i.e. the pole mass. Effects due to the top final width, parton emission which is not included in the reconstruction, contamination from initial-state radiation and non-perturbative phenomena, such as colour reconnection or underlying event, clearly spoil the direct identification of the invariant mass of top-decay final states with the pole mass. However, in the perspective of Refs. [12,13] rather than a genuine shift of the measured mass with respect to the pole mass, such effects are seen as uncertainties, of either perturbative or non-perturbative nature, in the identification of the extracted mass as pole mass.
Although such approaches may sound quite different, work towards a possible compromise was carried out in [98], so as to guide the top-quark community and avoid confusion, or statements claiming a sort of ignorance about the nature of the measured top-quark mass. Though starting from different perspectives, all those papers agree that the measured m t can be connected to the pole mass by means of a relation like m t = m t,pole + δm t ± ∆m t (Eq. (24)), where δm t is a possible shift between measured and pole masses and ∆m t is an uncertainty. According to Refs. [12,13], which basically discourage the use of the concept of Monte Carlo mass, the mass extracted through top-decay final-state reconstruction mimics the pole mass, up to some computable uncertainty. In this approach δm t ≃ 0, while ∆m t is a theoretical (Monte Carlo based) error that, in measurements employing event generators, should be estimated, e.g., by varying shower/hadronization parameters, confronting different models (cluster and string models for hadronization are a typical example) or changing the analysis details (for final-state jets, increasing/decreasing the jet radius leads to accounting for more or less gluon radiation). In the view of Refs. [12,13], the uncertainty ∆m t in the identification of the measurements with the pole mass should be of the order of the hadronization scale, i.e. O(Λ). On the contrary, in the work carried out in Refs. [8,9,10,11] m t is labelled as Monte Carlo mass and δm t is an actual discrepancy with respect to the pole mass, typically about O[Q 0 α S (Q 0 )] as in Eq. (23), while ∆m t is still an uncertainty, which one can estimate by varying the parameters or options in the codes and computations employed in the comparison.
Therefore, the disagreement among most authors of the relevant literature on the interpretation of the top-mass measurement is conceptually relevant, but in practice concerns whether one should calculate an actual discrepancy δm t or not, as well as the meaning of ∆m t and its numerical magnitude. In Refs. [8,9,10,11] different values for δm t and ∆m t have been quoted, which is reasonable, since, as also advocated in [12] for the purpose of the uncertainty, any possible relation between the pole mass and the measured quantity has to depend on the observable which is used to extract m t , on the details of the analysis, such as the imposed cuts, on the energy of the collider and on whether it runs in e + e − or pp mode. Moreover, since such determinations are based on a comparison of Monte Carlo results with resummed calculations, with m t being a tunable parameter, δm t and ∆m t also depend on the accuracy of the resummations, e.g., NLL or NNLL. As discussed above, δm t is about 200 MeV in Ref. [8], in the range 600-900 MeV in Ref. [9], 400-700 MeV in [10] and 300-500 MeV in [11]. The uncertainty ∆m t in the relation (24) was estimated to be roughly 250 MeV in [11] and 280-380 MeV in [9]. Refs. [12,13] do not contain an explicit calculation of ∆m t , but rather propose a method to compute it, e.g., by varying Monte Carlo perturbative and non-perturbative parameters or, in a POWHEG-like implementation, by switching NLO and width effects on or off. Of course, it will be very interesting to follow such an approach and compare the results with the numbers obtained in Refs. [8,9,10,11]. One may already guess that, since Refs. [12,13] do not account for any explicit discrepancy δm t , this approach will likely yield a larger uncertainty ∆m t .
Furthermore, it will be crucial to understand how much, for a given observable, any shift/uncertainty of the measured mass with respect to the pole mass depends on the specific shower code, and whether, e.g., the recent implementation of NLO corrections and width effects along the lines of [29] has a visible impact.

Theoretical uncertainties in the top mass determination
For the sake of a precise determination of the top-quark mass, a reliable estimate of the theoretical error is of paramount importance. In the top-mass world-average extraction, i.e. Ref. [6], based on the so-called standard measurements, the overall theory uncertainty accounts for about 540 MeV of the total 710 MeV systematics. In particular, Ref. [6] distinguishes the contributions due to Monte Carlo generators, radiation effects, colour reconnection and parton distribution functions (PDFs).
The Monte Carlo systematics is due to the differences in the implementation of parton showers, matrix-element matching, width effects, hadronization and underlying event in the various programs available to describe top-quark production and decay. There is no unique way to estimate this uncertainty, though, and each collaboration follows different prescriptions according to the analysis. One can either compare two different generators, which are considered appropriate for a given analysis and have been properly tuned to some data sets, or choose one single code and explore how its predictions fare with respect to variations of its parameters. For example, in [6] CDF compares HERWIG and PYTHIA, while D0 uses ALPGEN+PYTHIA and ALPGEN+HERWIG; both Tevatron experiments use MC@NLO to gauge the overall impact of NLO corrections. At the LHC, ATLAS compares MC@NLO with POWHEG for the NLO contributions and PYTHIA with HERWIG for shower and hadronization; CMS instead confronts LO MadGraph with NLO POWHEG.
The radiation uncertainty gauges the effect of initial- and final-state radiation on the top mass and is typically obtained by varying, within suitable ranges, the relevant parameters of the parton-shower generators. Concerning PDFs, the different experiments follow distinct strategies to evaluate the induced error on m t , although using two different sets, or a given set with different parametrizations, are common trends. More generally, the choice of the PDF set in analyses based on event generators has also been the topic of several discussions: as pointed out before, although Monte Carlo codes yield LO or NLO total cross sections, differential spectra go beyond such approximations and include the resummation of classes of enhanced logarithmic terms. An attempt to propose improved sets of parton distribution functions for standard parton-shower generators was presented in [105].
Among the sources of theoretical uncertainty and possible shifts between measured and pole masses, colour reconnection deserves special attention. In fact, it accounts for about 310 MeV in the world average presented in [6]. Also, the very fact that, for example, a bottom quark in top decay (t → bW ) can be colour-connected to an initial-state antiquark has no counterpart in e + e − annihilation, and therefore its modelling in Monte Carlo event generators may need retuning at hadron colliders. Investigations of the impact of colour reconnection on m t were undertaken in [106,107], in the frameworks of PYTHIA and HERWIG, respectively. In particular, Ref. [107] addresses this issue by simulating fictitious top-flavoured hadrons T in HERWIG and comparing final-state distributions, such as the BW invariant mass, with standard tt events. In fact, in the top-hadron case, assuming T decays according to the spectator model, the b quark is forced to connect its colour with the spectator or with antiquarks in its own shower, namely b → bg, followed by g → qq, and colour reconnection is suppressed. The analysis in [107] is still ongoing and, in future perspectives, it may also serve to address the error on the identification of the measured mass with the pole mass. In fact, in the event samples simulated in [107] the Monte Carlo (HERWIG) mass is the mass of a heavy hadron, which can be related to any definition of the heavy-quark (top for T mesons) mass by means of lattice QCD, potential models or Non-Relativistic QCD. In Ref. [106], colour reconnection is instead investigated within the Lund string model, tuned to charged-particle multiplicity or transverse-momentum data. Several possible models for colour reconnection were investigated and the resulting uncertainty on the top mass varied between 200 and 500 MeV, depending on the chosen framework.
Another non-perturbative phenomenon which plays a role in the theoretical error is bottom-quark fragmentation, i.e. the hadronization of bottom quarks in top decays into b-flavoured mesons or baryons. The usual way to deal with it consists in tuning the Monte Carlo fragmentation parameters to precise e + e − → bb data and then using the best parametrizations to describe bottom-quark hadronization in top decays. This approach was followed, e.g., in Refs. [108,109], where data from DELPHI [110], SLD [111], OPAL [112] and ALEPH [113] were employed to tune the parameters of HERWIG [31] and PYTHIA [33]. In particular, Ref. [109] used such a tuning to predict the B-hadron+lepton invariant mass m Bℓ in tt events at the LHC. A possible extraction of m t using this observable exhibited a large discrepancy between the two event generators, which was explained as being due to the different quality of the e + e − fits, with HERWIG being only marginally consistent with the data. More recent modelling and fits, such as the so-called Monash [114] or A14 [115] tunes, or the dipole-like shower implementation in [57], are expected to give a better description of bottom fragmentation in top decays. Investigations of the uncertainties using these implementations are currently in progress; it will be very interesting, in particular, to explore bottom-quark fragmentation by using NLO+shower codes, such as POWHEG and aMC@NLO, interfaced to HERWIG or PYTHIA. In fact, it is mandatory to understand whether the Monte Carlo default parametrizations, or tunings like those in [114,115], work well at the LHC even when the hard scattering is at NLO, or whether one would rather need to refit the Monte Carlo parameters.
In general, although the approach followed in [109] relies on the universality of the hadronization transition, it is not guaranteed that models which reproduce e + e − data work equally well in a coloured environment like tt events at the LHC, where initial-state radiation, colour reconnection and underlying event play a role. Therefore, tuning shower and hadronization parameters to LHC data should become an ultimate goal.
From this viewpoint, more recently, Ref. [116] reconsidered the issue of the dependence of m t on Monte Carlo parameters, suggesting a possible in-situ calibration of the shower codes using top events in the dilepton channel, and taking particular care about observables sensitive to b-fragmentation in top decays. In particular, Ref. [116] extended the work in [109] by exploring top-decay observables in terms of B hadrons, instead of b-jets, so that one has to deal with fragmentation uncertainties, rather than with the jet-energy scale. For instance, if ⟨O⟩ is the average value of a given observable O and θ a generic generator parameter, one can write relations of the form d⟨O⟩/⟨O⟩ = ∆^O_m (dm t /m t ) + ∆^O_θ (dθ/θ) and dm t /m t = ∆^m_O (d⟨O⟩/⟨O⟩), where we define ∆^m_θ = ∆^m_O ∆^O_θ . Therefore, if one aims at, e.g., an error of 500 MeV on m t , namely dm t /m t < 0.003, one should also have ∆^m_θ (dθ/θ) < 0.003. Reference [116] then identifies some so-called calibration observables, which depend on the shower/hadronization parameters but are rather insensitive to the top mass. Examples of such quantities are the ratios of B-hadron to b-jet transverse momenta p T,B /p T,b , the ratios of invariant masses m BB /m bb (b being a jet containing a B hadron), and the azimuthal separations and invariant opening angles ∆φ(bb), ∆φ(BB), ∆R(bb), ∆R(BB). 13 Then, imagining that one could ideally tune the parameters to measurements of the calibration observables, other quantities can be explored to extract m t , such as the B-hadron energy and transverse momentum E B and p T,B , or the invariant masses m Bℓ , m ℓℓ and m BBℓℓ . The conclusion of this exploration is that, in order to achieve a 0.3% precision on the top mass, one needs to determine the strong coupling constant at 1% accuracy and other parameters, such as the shower cutoff, the gluon and quark effective masses or the hadronization parameters, at 10%. Overall, Ref. [116] proposes a method to tune Monte Carlo generators directly to data from top events at the LHC which, whenever top-production data become precise enough, should be preferable to the use of fits to e + e − data, in such a way as to avoid all uncertainties and ambiguities in the application of e + e − -based fits to hadron collisions.
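The precision budget quoted above is simple error propagation through the sensitivity coefficients; a minimal sketch, with purely illustrative values for the ∆ factors:

```python
def mass_shift_from_parameter(delta_m_O, delta_O_theta, rel_theta_var):
    """Relative m_t shift induced by a relative variation of a generator
    parameter theta, propagated through an observable O:
    dm_t/m_t = Delta^m_O * Delta^O_theta * dtheta/theta,
    i.e. Delta^m_theta = Delta^m_O * Delta^O_theta."""
    return delta_m_O * delta_O_theta * rel_theta_var

# Illustrative: with Delta^m_theta = 1.5 * 0.2 = 0.3, a 1% parameter
# determination keeps the induced relative mass error at the 0.003
# (roughly 500 MeV) level quoted in the text.
shift = mass_shift_from_parameter(1.5, 0.2, 0.01)
```

The same arithmetic, run parameter by parameter, is what turns a target mass precision into the 1% and 10% requirements on the strong coupling and on the shower/hadronization parameters, respectively.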

CONCLUSIONS
I discussed some challenging issues regarding the determination and interpretation of the top-quark mass at hadron colliders. I reviewed the main top-mass definitions, pointing out their most notable features and taking particular care about the pole and MS masses. I described recent calculations addressing the renormalon ambiguity of the pole mass in the infrared regime, yielding uncertainties of about 100-250 MeV, which, for the time being, are below the current error on the top mass. Such estimates are also well below the top-width energy scale, about 1.4 GeV.
The most relevant features of Monte Carlo codes for top-quark phenomenology were then presented, stressing the recent implementation of NLO corrections and of interference effects between top-production and decay phases. Even the standard shower codes are nevertheless beyond LO for differential distributions, as they account for classes of enhanced soft/collinear logarithms to all orders.
The main experimental methods to measure the top mass were discussed, pointing out the differences between the so-called standard and alternative measurements and the magnitude of the quoted uncertainties. For the time being, although the alternative measurements provide an excellent ground to reconstruct m t using the kinematic properties of the final states and, in some cases, are even capable of minimizing the impact of the chosen Monte Carlo generator, the standard methods are still those which yield the lowest uncertainty. This will also be the case in the future LHC runs, although the higher statistics are expected to decrease the errors of the alternative strategies too.
Much space was then devoted to the present debate on the interpretation of the measurements and whether one should relate the extracted m t to some alternative mass definition or rather express it in terms of the pole mass, up to some uncertainty. A common feature of both attitudes is nonetheless that there is no universal relation between the measured mass and any field-theory definition: it depends on the considered observable and on the type of Monte Carlo shower code or QCD calculation which is employed in the comparison. There have been many investigations aimed at relating the measured m t to short-distance masses by comparing Monte Carlo predictions with SCET resummed computations: the obtained shift with respect to the pole mass is of the order of a few hundred MeV, depending on the specific analysis and on the accuracy of the calculation. On the other hand, work is in progress to explore the sources of error which, on top of the theoretical systematics, affect the straightforward identification of the top mass in direct-reconstruction analyses as a pole mass, such as colour reconnection. Although the starting points of such approaches are conceptually different, a compromise can be reached, and it will be very appealing to apply the ongoing work on colour-reconnection and bottom-fragmentation uncertainties to the interpretation of the top-mass measurements in terms of well-defined field-theory quantities.
Finally, referring in particular to the world-average analysis, the contributions to the quoted theoretical error were debated, along with the current work aimed at obtaining even more reliable estimates of such uncertainties. Furthermore, the possibility of using top-quark events and suitable calibration observables to fit Monte Carlo parameters was discussed; this will probably be the way forward, once the data become precise enough to compete with e + e − experiments for the purpose of tuning event generators.
In summary, top-quark phenomenology at the LHC, especially in the high-luminosity perspective, has become precision physics, and the smallness of the current and foreseen uncertainties in the top-mass measurement is a clear example of such a level of accuracy. However, for the sake of a robust and reliable top-mass determination, much work is still necessary, in order to better understand and possibly reduce the sources of uncertainty. In particular, progress in Monte Carlo studies and QCD calculations for top production and decay, as well as in theoretical work concerning top-mass definitions, should definitely be encouraged. As pointed out many times in this review, investigations along these lines are already in progress, so that one can feel confident that the theoretical and experimental efforts will eventually converge to match the precision expected in the future LHC runs and ultimately at HL-LHC.