Longitudinal brain MRI monitoring in neurodegeneration can provide substantial insight into the temporal dynamics of the underlying biological process, but it is time- and cost-intensive and may burden patients with disabling neurological diseases. The choice of follow-up time-intervals in longitudinal MRI studies is therefore a key design decision that directly affects the results. The objective of this work is to examine how the choice of time-intervals influences estimated longitudinal trends in the frequently used design of one baseline and two follow-up scans.
Different analytical approaches for estimating the linear trend of longitudinal parameters, including their robustness to outliers, were studied in simulations. These simulations were based on longitudinal striatal atrophy detected by atlas-based volumetry (ABV) in MRI data of Huntington's disease patients.
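To illustrate the kind of simulation described here, the following minimal Python sketch compares an ordinary least-squares (OLS) fit with the robust Theil-Sen estimator for a three-timepoint design under occasional outlier measurements. This is an illustrative assumption, not the authors' code: the atrophy rate, noise level, outlier magnitude, and the choice of Theil-Sen as the robust method are hypothetical placeholders rather than values or methods taken from the Huntington's disease data.

```python
# Hypothetical simulation sketch: compare OLS vs. Theil-Sen slope recovery
# for one baseline and two follow-up scans, with occasional outlier scans.
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(0)

true_slope = -0.5        # assumed volume loss per year (mL/yr), illustrative
baseline_volume = 10.0   # assumed baseline striatal volume (mL), illustrative
noise_sd = 0.1           # measurement noise SD (mL), illustrative
outlier_shift = 1.0      # magnitude of an artifactual outlier (mL), illustrative
n_sim = 10_000

def simulate(times, outlier_prob=0.1):
    """Return mean absolute slope error for OLS and Theil-Sen fits."""
    times = np.asarray(times, dtype=float)
    err_ols, err_ts = [], []
    for _ in range(n_sim):
        y = baseline_volume + true_slope * times \
            + rng.normal(0.0, noise_sd, times.size)
        # With some probability, one scan is corrupted by an artifact.
        if rng.random() < outlier_prob:
            y[rng.integers(times.size)] += outlier_shift
        err_ols.append(abs(np.polyfit(times, y, 1)[0] - true_slope))
        err_ts.append(abs(theilslopes(y, times)[0] - true_slope))
    return np.mean(err_ols), np.mean(err_ts)

# Identical vs. unequal intervals between baseline and the two follow-ups.
for times in ([0.0, 1.0, 2.0], [0.0, 0.5, 2.0]):
    ols, ts = simulate(times)
    print(f"intervals {times}: mean |slope error| OLS={ols:.3f}, Theil-Sen={ts:.3f}")
```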
For the design of one baseline and two follow-up visits, the simulations with outliers revealed optimal results for identical time-intervals between the baseline and follow-up scans. However, identical time-intervals between the three acquisitions lead to the paradox that, depending on the fit method, the result of the first follow-up scan does not influence the estimated linear trend at all.
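The paradox can be made concrete for ordinary least squares, assumed here as one fit method for which the effect holds (the text itself does not name the affected methods). For three equally spaced acquisitions at t1, t2 = t1 + Δt, and t3 = t1 + 2Δt, the OLS slope reduces to (y3 − y1) / (2Δt): the middle measurement y2 shifts only the intercept, never the slope. The short check below demonstrates this numerically.

```python
# Demonstration of the equal-interval paradox for an OLS line fit:
# with three equally spaced scans, the fitted slope equals
# (y3 - y1) / (t3 - t1) and is independent of the middle value y2.
import numpy as np

times = np.array([0.0, 1.0, 2.0])   # equally spaced acquisitions (years)
y1, y3 = 10.0, 9.0                  # baseline and second follow-up (mL), illustrative

for y2 in (9.5, 9.0, 12.0):         # vary the first follow-up freely
    slope, intercept = np.polyfit(times, np.array([y1, y2, y3]), 1)
    print(f"y2 = {y2:4.1f}  ->  OLS slope = {slope:.3f}")
# All three runs print the same slope, (y3 - y1) / 2 = -0.5.
```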
This theoretical study analyses how the design of longitudinal imaging studies with one baseline and two follow-up visits influences the estimated trends. Suggestions for the analysis of longitudinal trends are provided.