ORIGINAL RESEARCH article
Front. Bioinform.
Sec. Data Visualization
This article is part of the Research Topic: AI in Data Visualization
Visualizing Stability: A Sensitivity Analysis Framework for t-SNE Embeddings
Provisionally accepted
University of Tübingen, Tübingen, Germany
t-distributed Stochastic Neighbour Embedding (t-SNE) is a cornerstone for visualizing high-dimensional biological data, where each high-dimensional data point is represented as a point in a two-dimensional map. However, this static map provides no information about the stability of the visual layout, the features that influence it, or the impact of uncertainty in the input data. This work introduces a computational framework that extends the standard t-SNE plot with visual cues about the stability of the embedding. First, we perform a sensitivity analysis to determine feature influence: by combining the Implicit Function Theorem with automatic differentiation, our method computes the sensitivity of the embedding with respect to the input data, provided as a Jacobian of first-order derivatives. Heatmap visualizations of this Jacobian, or summaries thereof, reveal which input features are most influential in shaping the embedding and identify regions of structural instability. Second, when input data uncertainty is available, our framework uses this Jacobian to propagate error, probabilistically quantifying the positional uncertainty of each embedded point. This uncertainty is visualized by augmenting the plot with hypothetical outcomes, which display the positional confidence of each point. We apply our framework to three diverse biological datasets (bulk RNA-seq, proteomics, and single-cell transcriptomics), demonstrating its ability to directly link visual patterns to their underlying biological drivers and to reveal ambiguities invisible in a standard plot. By providing this principled means to assess the robustness and interpretability of t-SNE visualizations, our work enables more rigorous and informed scientific conclusions in bioinformatics.
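The error-propagation step described in the abstract can be sketched with a standard first-order (delta-method) calculation: given the Jacobian J of one embedded point's 2D position with respect to its d input features, and a covariance matrix for the input features, the positional covariance is J Σ Jᵀ, and "hypothetical outcomes" are samples drawn around the embedded position. The values below are illustrative stand-ins, not the authors' implementation (which obtains J via the Implicit Function Theorem and automatic differentiation):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5  # number of input features (hypothetical)
# Stand-in Jacobian of one embedded point's (x, y) position w.r.t. its d features.
# In the paper's framework this would come from the IFT-based sensitivity analysis.
J = rng.normal(size=(2, d)) * 0.1
# Assumed input uncertainty: independent per-feature variances (diagonal covariance).
sigma_in = np.diag(rng.uniform(0.01, 0.05, size=d))

# First-order error propagation: Cov_out = J @ Sigma_in @ J.T  (2x2 positional covariance)
cov_out = J @ sigma_in @ J.T

# "Hypothetical outcomes": sample plausible positions around the embedded point y.
y = np.array([1.0, -2.0])  # stand-in embedded position
samples = rng.multivariate_normal(y, cov_out, size=100)
```

Plotting `samples` as a faint point cloud (or a confidence ellipse derived from `cov_out`) around each embedded point yields the uncertainty-augmented t-SNE plot the abstract describes.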
Keywords: t-SNE, uncertainty, Explainable Machine Learning, error propagation, visualization, data insights
Received: 06 Oct 2025; Accepted: 05 Dec 2025.
Copyright: © 2025 Zabel, Hennig and Nieselt. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence:
Susanne Zabel
Kay Katja Nieselt
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
