Not seeing the forest for the trees: combination of path integration and landmark cues in human virtual navigation

Introduction: In order to successfully move from place to place, our brain often combines sensory inputs from various sources by dynamically weighting spatial cues according to their reliability and relevance for a given task. Two of the most important cues in navigation are the spatial arrangement of landmarks in the environment, and the continuous path integration of travelled distances and changes in direction. Several studies have shown that Bayesian integration of cues provides a good explanation for navigation in environments dominated by small numbers of easily identifiable landmarks. However, it remains largely unclear how cues are combined in more complex environments.
Methods: To investigate how humans process and combine landmarks and path integration in complex environments, we conducted a series of triangle completion experiments in virtual reality, in which we varied the number of landmarks from an open steppe to a dense forest, thus going beyond the spatially simple environments that have been studied in the past. We analysed spatial behaviour at both the population and individual level with linear regression models, and developed a computational model based on maximum likelihood estimation (MLE) to infer the underlying combination of cues.
Results: Overall homing performance was optimal in an environment containing three landmarks arranged around the goal location. With more than three landmarks, individual differences between participants in the use of cues are striking. For some, the addition of landmarks does not worsen their performance, whereas for others it seems to impair their use of landmark information.
Discussion: Navigation success in complex environments appears to depend on the ability to identify the correct clearing around the goal location, suggesting that some participants may not be able to see the forest for the trees.


S2 Session order effect.
In our analysis, we observe a trend of reduced position error over sessions (Fig. S2). This is unsurprising, as the placement of objects and the shape of the outbound path do not vary across trials and conditions; solely the number of objects drawn from a given set of objects with fixed positions differs between conditions. However, including session number as an additional fixed effect in a regression model results in only a slight improvement of fit for accuracy (without session: AIC ≈ 1163.4, BIC ≈ 1193.6; with session: AIC ≈ 1150.9, BIC ≈ 1194; likelihood ratio test: χ² ≈ 18.58, logLik2 = −565.4, p ≈ 0.00033) and for precision (without session: AIC ≈ 1385.8, BIC ≈ 1416; with session: AIC ≈ 1346.6, BIC ≈ 1389.7; likelihood ratio test: χ² ≈ 45.18, logLik1 = −685.9, logLik2 = −663.3, p < 0.0001). As the order effect is small and not the primary focus of this study, we do not consider it in further analyses. In our linear regression analysis, we average across trials and sessions using median position errors and position error standard deviations (scatter plot in Fig. 4). We also provide a linear mixed effects model on single-trial data (supplement Tab. S1). When comparing the differences between the zero and three object conditions and between the three and 99 object conditions for session 1 and session 4, we do not find a significant change between the sessions (Fig. S3) in accuracy (Wilcoxon signed-rank test: 0 vs 3 objects: Z = 107, p ≈ 0.36, d ≈ 0.26; 3 vs 99 objects: Z = 95, p ≈ 0.2, d ≈ 0.2). Only in precision does session 4 show significantly lower values than session 1 (Wilcoxon signed-rank test: 0 vs 3 objects: Z = 129, p ≈ 0.8, d ≈ 0.05; 3 vs 99 objects: Z = 60, p ≈ 0.016, d ≈ 0.76).
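The likelihood ratio tests above can be reproduced directly from the reported log-likelihoods. The sketch below, a minimal illustration rather than our analysis code, assumes the two models differ by three parameters (session entering as a categorical predictor with four levels), which reproduces the reported p-values; the χ² survival function for df = 3 has a closed form, so no statistics library is needed.

```python
import math

def chi2_sf_df3(x):
    """Survival function of the chi-squared distribution with df = 3,
    via the closed form of the upper incomplete gamma function."""
    a = x / 2.0
    return math.erfc(math.sqrt(a)) + math.sqrt(2.0 * x / math.pi) * math.exp(-a)

# Precision model: log-likelihoods as reported in the text.
loglik_without = -685.9
loglik_with = -663.3
lr = 2.0 * (loglik_with - loglik_without)  # ≈ 45.2, close to the reported 45.18
p = chi2_sf_df3(lr)                         # p < 0.0001, as reported

# Accuracy model: plugging in the reported test statistic.
p_accuracy = chi2_sf_df3(18.58)             # ≈ 0.00033, as reported
```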

S3 Separating effects of low and high clutter.
We present two approaches, one based on a quadratic regression and one based on forward difference contrast coding, to justify describing the data in terms of two separate effects: an effect of low clutter and an effect of high clutter.

1) Linear vs quadratic fit
We observe a significant linear and a significant quadratic component in our linear mixed effects regression models for both accuracy and precision (see Results section "Disentangling the effect of different degrees of clutter"). A simple linear fit does not reflect well the visually observed pattern of a position error decrease from zero to three objects and an increase from three to 99 objects (see Fig. 4 and 5). The significant quadratic component matches this observation better, which is confirmed by a comparison of the linear and the quadratic versions of our linear mixed models. The comparison shows the quadratic function yielding a better fit to our data (for accuracy: linear: AIC ≈ 293, BIC ≈ 301.8; quadratic: AIC ≈ 255.4, BIC ≈ 267.1; likelihood ratio test: χ² ≈ 39.63, logLik1 = −143.5, logLik2 = −123.7, p < 0.0001; and for precision: linear: AIC ≈ 297.7, BIC ≈ 306.5; quadratic: AIC ≈ 241.5, BIC ≈ 253.2; likelihood ratio test: χ² ≈ 58.23, logLik1 = −145.8, logLik2 = −116.7, p < 0.0001) (see Tab. 1). An evaluation of the quadratic fits reveals theoretical minima for the position error (at 3.19 objects) and its standard deviation (at 2.66 objects) close to the three object condition, identifying this condition as the one with the smallest homing errors (see Fig. 5). This speaks in favour of two separate effects, with an error decrease for low clutter conditions (zero to three objects) and an error increase for high clutter conditions (three to 99 objects).
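The model comparison above follows from the usual relation AIC = 2k − 2 logLik. As a sanity check, the reported accuracy AICs are reproduced exactly if the linear and quadratic models have k = 3 and k = 4 estimated parameters respectively (an assumption on our part, not stated in the text), and the test statistic is twice the log-likelihood difference:

```python
def aic(k, loglik):
    # Akaike information criterion: 2k - 2 * log-likelihood
    return 2 * k - 2 * loglik

# Accuracy models; k = 3 vs k = 4 is an assumption that matches the reported AICs.
aic_linear = aic(3, -143.5)      # ≈ 293.0
aic_quadratic = aic(4, -123.7)   # ≈ 255.4
lr = 2 * (-123.7 - (-143.5))     # ≈ 39.6, close to the reported 39.63
```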

2) Forward difference contrast coding
Alternatively to analysing the linear and quadratic components of our linear mixed model, we can re-state the model using forward difference contrast coding. In forward difference contrast coding, "the mean of the dependent variable for one level of the categorical variable is compared to the mean of the dependent variable for the next level" (UCLA Statistical Consulting Group, 2021). The underlying coding matrix compares the central tendency of the dependent variable at each level with that at the subsequent level along the axis of the independent variable. In our case, this means that the zero object condition is compared to the one object condition, which in turn is compared to the two object condition, and so on. For our data, this approach results in significant negative slopes for the conditions with one, two and three objects, and in significant positive slopes for ten and 99 objects, for both accuracy (Tab. 1 model 3) and precision (Tab. 1 model 6). Note that for precision the estimate from the two to three object condition shows only a trend, indicating no significant improvement in precision. Again, this analysis speaks in favour of two separate effects for low clutter and high clutter conditions.

When a participant's PI repeatedly guides them in a direction that is not the goal direction, we consider their PI biased (see also population trends in MLE models, Fig. 9). Such a biased PI can also influence a participant's goal estimation in other conditions in which objects are present. Below, we show the bootstrapped parameters of our path integration model for one such participant (Fig. S4) as an example. Data of the same participant are also plotted in Fig. 2a. The model parameters show that the biased PI observed in the zero object condition also guides the participant in the same wrong direction in the 99 object condition. The 95% confidence intervals of the direction and distance estimates of these two conditions overlap, indicating a return of the bias present in the zero object condition.
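The forward difference coding matrix used in section 2) above can be constructed explicitly. The sketch below is a minimal illustration (not our analysis code, which used a standard statistics package): with this coding, regression coefficient j estimates mean(level j) − mean(level j + 1), and the hypothetical condition means used for the sanity check are invented for demonstration.

```python
def forward_diff_contrasts(k):
    """Forward difference coding matrix (k levels -> k-1 contrast columns).
    Coefficient j then estimates mean(level j) - mean(level j + 1)."""
    return [[(k - 1 - j) / k if i <= j else -(j + 1) / k
             for j in range(k - 1)]
            for i in range(k)]

# Sanity check with hypothetical condition means: the grand mean (intercept)
# plus the coded successive differences reconstructs each level mean exactly.
means = [6.0, 3.0, 2.5, 4.0]               # invented position errors per condition
k = len(means)
C = forward_diff_contrasts(k)
b0 = sum(means) / k                         # intercept = grand mean
betas = [means[j] - means[j + 1] for j in range(k - 1)]
recon = [b0 + sum(C[i][j] * betas[j] for j in range(k - 1)) for i in range(k)]
```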

Figure S2. Order effect within the different conditions on position error accuracy (top) and precision (bottom). N = 23 participants completed four sessions with six repetitions each (n = 24 trials per condition).

Figure S3. Performance improvement in accuracy and precision of position error from zero to three objects, and performance impairment from three to 99 objects, separated for session 1 and session 4. Green and purple curves indicate kernel-density estimations fitted to the difference in median position error (left) or position error standard deviation (SD) (right) between zero and three objects (green), or three and 99 objects (purple), respectively, of N = 23 single participants with n = 24 repetitions within each condition (see dots). Boxplot notches indicate 95% confidence intervals around the median. Colours marking individual participants are matched across all figures, based on a given participant's median position error in the 99 object condition (refer to Fig. 4 top right).
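The kernel-density estimates in Figure S3 can be illustrated with a minimal Gaussian KDE. This is a generic sketch, not the plotting code used for the figure; the per-participant differences and the bandwidth below are invented for demonstration.

```python
import math

def gaussian_kde(samples, grid, bandwidth):
    """Evaluate a Gaussian kernel density estimate at each grid point."""
    n = len(samples)
    norm = n * bandwidth * math.sqrt(2.0 * math.pi)
    return [sum(math.exp(-0.5 * ((g - x) / bandwidth) ** 2) for x in samples) / norm
            for g in grid]

# Hypothetical per-participant differences in median position error:
diffs = [-1.2, -0.8, -0.5, -0.4, 0.1, 0.3]
grid = [i * 0.01 - 3.0 for i in range(601)]   # evaluation points from -3.0 to 3.0
density = gaussian_kde(diffs, grid, bandwidth=0.4)
# The density integrates to ~1 over a sufficiently wide grid.
```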

Figure S4. Maximum likelihood parameter estimates for the path integration model of one single participant, also plotted in Fig. 2a. Top row shows maximum likelihood estimates with bootstrapped 95% confidence intervals for the parameters of the distance models (µ gives the average expected distance, σ the standard deviation of expected distances); bottom row shows parameters for the direction models (µ gives the average expected direction, κ is the concentration parameter of the underlying von Mises distribution and has been transformed to circular standard deviation for easier readability). See Eq. 1 and Eq. 2 for a full description of the model functions. Fitted parameters are shown for each experimental condition separately, with overlapping 95% confidence intervals indicating that parameter estimates are similar between conditions. Dashed red lines indicate geometric ground truth where applicable.
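The κ-to-circular-standard-deviation transformation mentioned in the caption can be sketched as follows, assuming the standard definition (circular SD = sqrt(−2 ln R), with mean resultant length R = I1(κ)/I0(κ)); whether the paper uses exactly this convention is our assumption. The modified Bessel functions are computed from their power series so the sketch needs no external library.

```python
import math

def bessel_i(n, x, terms=60):
    """Modified Bessel function of the first kind, I_n(x), via its power series."""
    return sum((x / 2.0) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def kappa_to_circ_sd(kappa):
    """Von Mises concentration -> circular standard deviation (radians),
    assuming the standard definition sd = sqrt(-2 ln R), R = I1(kappa)/I0(kappa)."""
    r = bessel_i(1, kappa) / bessel_i(0, kappa)
    return math.sqrt(-2.0 * math.log(r))

# Higher concentration corresponds to a smaller circular spread:
sd_broad = kappa_to_circ_sd(1.0)   # weakly concentrated direction estimates
sd_tight = kappa_to_circ_sd(10.0)  # strongly concentrated direction estimates
```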

Table S1. Performance of linear regression models on a single-trial basis. The model predicts the position error from the condition (number of objects). In all models, condition is ordinal and polynomials from 1st to 5th degree are fitted. est.: estimate; s.e.: standard error; t: t value; d.f.: degrees of freedom; p: p-value.

Table S2. Summary table of MLE model fits for the zero object condition. For an overview of model components see Table 2. Model fits within each subdivision can be compared directly, since they belong to the same model family.

Table S3. Summary table of MLE model fits for the one object condition. For an overview of model components see Table 2. Model fits within each subdivision can be compared directly, since they belong to the same model family.

Table S4. Summary table of MLE model fits for the two object condition. For an overview of model components see Table 2. Model fits within each subdivision can be compared directly, since they belong to the same model family.

Table S5. Summary table of MLE model fits for the three object condition. For an overview of model components see Table 2. Model fits within each subdivision can be compared directly, since they belong to the same model family.

Table S6. Summary table of MLE model fits for the ten object condition. For an overview of model components see Table 2. Model fits within each subdivision can be compared directly, since they belong to the same model family.

Table S7. Summary table of MLE model fits for the 99 object condition. For an overview of model components see Table 2. Model fits within each subdivision can be compared directly, since they belong to the same model family.