AUTHOR=Thome Janine, Pinger Mathieu, Durstewitz Daniel, Sommer Wolfgang H., Kirsch Peter, Koppe Georgia
TITLE=Model-based experimental manipulation of probabilistic behavior in interpretable behavioral latent variable models
JOURNAL=Frontiers in Neuroscience
VOLUME=16 (2022)
YEAR=2023
URL=https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2022.1077735
DOI=10.3389/fnins.2022.1077735
ISSN=1662-453X
ABSTRACT=In studying mental processes, we often rely on quantifying latent processes that are not directly observable. Interpretable latent variable models, which probabilistically link observations to the underlying process, have increasingly been used to draw inferences from observed behavior. However, these models are far more powerful than that. By formally embedding experimentally manipulable variables within the latent process, they can be used to make precise and falsifiable hypotheses or predictions. In doing so, they pinpoint how experimental conditions must be designed to test these hypotheses and thereby generate adaptive experiments. By comparing predictions to observed behavior, we may then assess and evaluate the predictive validity of an adaptive experiment and model directly and objectively. These ideas are exemplified here on delay discounting, a process that is not directly observable experimentally. We propose a generic approach to systematically generating and validating experimental conditions based on such models. The conditions are explicitly generated so as to predict nine graded behavioral discounting probabilities across participants. Meeting this prediction, the framework induces discounting probabilities at nine levels. In contrast to several alternative models, the applied model exhibits high validity, as indicated by a comparably low out-of-sample prediction error. We also report evidence for inter-individual differences with respect to the models that best describe behavior. Finally, we outline how to adapt the proposed method to the investigation of other cognitive processes, including reinforcement learning.
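
The central idea in the abstract, inverting an interpretable choice model to generate conditions that produce prespecified choice probabilities, can be illustrated in a few lines. The Python sketch below is a minimal illustration under assumed functional forms: a hyperbolic discounting model with a logistic (softmax) choice rule. The abstract does not specify the paper's actual model, so the equations, parameter values (k, beta), and function names here are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def discounted_value(amount, delay, k):
        """Assumed hyperbolic discounting: subjective value of a delayed reward."""
        return amount / (1.0 + k * delay)

    def p_choose_delayed(a_imm, a_del, delay, k, beta):
        """Assumed logistic (softmax) choice rule over the value difference."""
        v_del = discounted_value(a_del, delay, k)
        return 1.0 / (1.0 + np.exp(-beta * (v_del - a_imm)))

    def immediate_amount_for_target(p_target, a_del, delay, k, beta):
        """Invert the choice rule: find the immediate amount at which the
        model predicts P(choose delayed) == p_target."""
        v_del = discounted_value(a_del, delay, k)
        logit = np.log(p_target / (1.0 - p_target))
        return v_del - logit / beta

    if __name__ == "__main__":
        k, beta = 0.02, 1.5          # illustrative participant parameters
        a_del, delay = 100.0, 30.0   # delayed option: 100 units after 30 days
        # nine graded target probabilities, mirroring the nine levels in the abstract
        for p in np.linspace(0.1, 0.9, 9):
            a_imm = immediate_amount_for_target(p, a_del, delay, k, beta)
            check = p_choose_delayed(a_imm, a_del, delay, k, beta)
            print(f"P(delayed)={p:.1f} -> immediate amount {a_imm:.2f} (check: {check:.3f})")

Inverting the logistic choice rule analytically, as above, is possible because the value difference enters it monotonically; for richer latent variable models the same condition-generation step would typically be done numerically (e.g., by root finding over the manipulable stimulus variables).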