
BRIEF RESEARCH REPORT article

Front. Big Data

Sec. Data Science

From ties to coordinates: recovering latent social positions with graph neural networks

Provisionally accepted
Zixuan Wang 1* and Can Peng 1,2
  • 1 ZHT Lab Co., Ltd., Beijing, China
  • 2 Beijing Information Science and Technology University, Beijing, China

The final, formatted version of the article will be published soon.

In social networks, the pattern of ties governs how information, influence, and opportunities circulate. Explaining these flows often requires uncovering the hidden geometric space in which network distances acquire meaning, yet inferring that space from the adjacency matrix alone remains a long-standing obstacle in complex-systems research. To overcome it we present the Scale-Aware Graph Neural Network (SA-GNN), a self-supervised framework that marries the statistical consistency of spectral embeddings with the scalability of message-passing neural networks. By leveraging small-ball return probabilities from random walks, our method estimates vertex-specific diffusion scales and reweights the graph accordingly to approximate an intrinsic Laplacian operator. The learned coordinates arise as minimisers of a Dirichlet energy functional and converge, under mild conditions, to the true latent geometry up to isometry. SA-GNN avoids negative sampling and scales linearly in memory and runtime, enabling applications to multi-million-node networks. Empirical evaluations show that our approach achieves a substantial improvement in alignment error compared with classical methods such as Laplacian Eigenmaps, Diffusion Maps, and node2vec. These results position SA-GNN as a theoretically grounded and computationally efficient tool for analysing large-scale relational systems through the lens of diffusion geometry.
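The abstract does not include an implementation, but the pipeline it outlines (random-walk return probabilities → vertex-specific diffusion scales → reweighted graph Laplacian → coordinates that minimise the graph Dirichlet energy E(X) = ½ Σ_{ij} w_ij ‖x_i − x_j‖²) can be prototyped for intuition. The sketch below is an illustrative assumption, not the authors' SA-GNN: the walk length t, the use of Diffusion-Maps-style density normalisation as the reweighting rule, the toy graph, and the function names return_probabilities and scale_aware_embedding are all choices made only for exposition, and a dense eigensolver stands in for the message-passing network the paper uses at scale.

```python
# Illustrative sketch only -- not the published SA-GNN algorithm.
import numpy as np
from scipy.sparse import csr_matrix, diags


def return_probabilities(A, t=4):
    """Diagonal of the t-step random-walk operator P^t, with P = D^{-1} A."""
    deg = np.asarray(A.sum(axis=1)).ravel()
    P = diags(1.0 / np.maximum(deg, 1e-12)) @ A
    Pt = P
    for _ in range(t - 1):
        Pt = Pt @ P
    return Pt.diagonal()


def scale_aware_embedding(A, dim=2, t=4):
    """Spectral coordinates of a density-normalised Laplacian (illustrative).

    Short-walk return probabilities serve as a proxy for local density /
    diffusion scale; dividing edge weights by this proxy is the
    alpha-normalisation trick from Diffusion Maps, used here as a stand-in
    for whatever reweighting SA-GNN actually performs.
    """
    ret = return_probabilities(A, t=t)                    # local-density proxy
    inv = diags(1.0 / np.sqrt(np.maximum(ret, 1e-12)))
    W = inv @ A @ inv                                     # reweighted adjacency
    deg = np.asarray(W.sum(axis=1)).ravel()
    L = (diags(deg) - W).toarray()                        # reweighted Laplacian
    # The smallest nontrivial eigenvectors of L minimise the Dirichlet energy
    # x^T L x under an orthogonality constraint.  A dense solver is fine for a
    # toy graph; at the scale the paper targets one would train a GNN instead.
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]


if __name__ == "__main__":
    # Toy graph: two 4-cliques joined by a single bridge edge (3, 4).
    edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
    edges += [(i, j) for i in range(4, 8) for j in range(i + 1, 8)]
    edges += [(3, 4)]
    rows, cols = zip(*(edges + [(j, i) for (i, j) in edges]))
    A = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(8, 8))
    print(scale_aware_embedding(A, dim=2).round(3))
```

On this toy graph the first learned coordinate takes opposite signs on the two cliques, i.e. the embedding separates the two communities along one axis, which is the qualitative behaviour one would expect from any method that recovers latent positions from ties alone.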

Keywords: Latent geometry, Network geometry, Graph neural networks, Large-scale graph representation, Unlabeled graphs, Self-supervised learning

Received: 19 Jun 2025; Accepted: 04 Nov 2025.

Copyright: © 2025 Wang and Peng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Zixuan Wang, zixuanwang@zhtlab.com

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.