AUTHOR=Martínez-Pastor Esther, Blanco-Ruiz Marian, Feijóo Sandra TITLE=Digital mental health and hidden support: a qualitative analysis of non-suicidal self-injury communities on TikTok JOURNAL=Frontiers in Digital Health VOLUME=7 YEAR=2025 URL=https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1645276 DOI=10.3389/fdgth.2025.1645276 ISSN=2673-253X ABSTRACT=This study examines the digital representation of Non-Suicidal Self-Injury (NSSI) on TikTok, with particular attention to the emergence of online communities and the communicative strategies users employ to share content while evading platform moderation. As TikTok becomes increasingly influential among adolescents and young adults, understanding how sensitive mental health topics like NSSI circulate on the platform is critical for developing effective digital health interventions. We conducted a qualitative content analysis of 400 posts referencing NSSI, collected using a mixed-method approach: 25.5% via TikTok's official API and 74.5% via the “For You” feed of a simulated account designed to mirror an organic user experience. Posts were selected based on visual indicators (e.g., scars, tools), textual cues (e.g., hashtags, metaphors), and thematic references to emotional distress, recovery, or relapse. The analysis focused on user profile characteristics, linguistic strategies, and audiovisual aesthetics. Findings reveal a loosely structured yet emotionally resonant digital community characterized by subcultural codes, such as euphemisms, ambiguous hashtags, and stylized imagery. Despite content moderation policies, most accounts remained active and visible, with minimal enforcement of warnings or restrictions. While some posts portray NSSI as a coping strategy or seek to normalize the behavior, others subtly encourage recovery or offer indirect support. However, explicit messaging that discourages self-harm is notably rare.
These dynamics suggest that TikTok unintentionally enables both the concealment and dissemination of self-harm-related content, functioning as a space for affective connection but also as a vector for potential normalization of harmful behaviors. The study underscores the need for targeted, ethically grounded prevention strategies that address not only the psychological functions of these communities but also the algorithmic infrastructures that sustain their visibility. These findings contribute to ongoing debates about digital mental health, platform responsibility, and the design of safer online environments.