TECHNOLOGY AND CODE article
Front. Virtual Real.
Sec. Technologies for VR
Volume 6 - 2025 | doi: 10.3389/frvir.2025.1555173
This article is part of the Research Topic: Generative AI in the Metaverse: New Frontiers in Virtual Design and Interaction
Milo: An LLM-Based Virtual Human Open-Source Platform for Extended Reality
Provisionally accepted
- 1 Interdisciplinary Center Herzliya, Herzliya, Tel Aviv, Israel
- 2 Advanced Reality Lab (ARL), Herzliya, Tel Aviv District, Israel
- 3 Sammy Ofer School of Communications, Reichman University, Herzliya, Tel Aviv District, Israel
Large language models (LLMs) have advanced dramatically in recent years, enabling a new generation of dialogue agents. These agents make possible new types of social experiences with virtual humans, in both virtual and augmented reality. In this paper, we introduce an open-source system specifically designed for implementing LLM-based virtual humans within extended reality (XR) environments. Our system integrates into XR platforms, providing a robust framework for the creation and management of interactive virtual agents. We detail the design and architecture of the system and showcase its versatility through several scenarios. In addition to a straightforward single-agent setup, we demonstrate how an LLM-based virtual human can attend a multi-user virtual reality (VR) meeting, enhance a VR self-talk session, and take part in an augmented reality (AR) live event. We report lessons learned, with a focus on the possibilities for human intervention during live events. We release the system as open source, inviting collaboration and innovation within the community and paving the way for new types of social experiences.
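To make the core idea concrete, the following is a minimal, hypothetical sketch (not the Milo implementation) of how an LLM-backed virtual human can maintain a persona and a running conversation history across a session; the `VirtualHuman` class and the `query_llm` stub are illustrative assumptions standing in for whatever chat-completion backend a deployment actually uses.

```python
# Illustrative sketch only: NOT the Milo implementation, but a minimal,
# hypothetical example of a persona-driven LLM dialogue agent for XR.
# The query_llm stub stands in for any chat-completion backend.

from dataclasses import dataclass, field


@dataclass
class VirtualHuman:
    """A persona-driven dialogue agent with a running conversation history."""
    name: str
    persona: str                       # system-style description of the character
    history: list = field(default_factory=list)

    def respond(self, user_utterance: str) -> str:
        # Assemble the prompt: persona first, then the dialogue so far.
        messages = [{"role": "system", "content": self.persona}]
        messages += self.history
        messages.append({"role": "user", "content": user_utterance})

        reply = query_llm(messages)    # hypothetical backend call

        # Persist both turns so the agent stays consistent across the session.
        self.history.append({"role": "user", "content": user_utterance})
        self.history.append({"role": "assistant", "content": reply})
        return reply


def query_llm(messages: list) -> str:
    """Placeholder for a real chat-completion call; echoes the last user turn."""
    return f"(LLM reply to: {messages[-1]['content']})"


if __name__ == "__main__":
    agent = VirtualHuman(
        name="Milo",
        persona="You are Milo, a friendly virtual human attending a VR meeting.",
    )
    print(agent.respond("Hi Milo, can you summarize today's agenda?"))
```

In an XR deployment, the agent's text reply would additionally drive speech synthesis and character animation, and the same loop can be shared by multiple users or supervised by a human operator during a live event.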
Keywords: virtual reality, LLM, live show production tools, open-source, XR, persona agents
Received: 03 Jan 2025; Accepted: 25 Apr 2025.
Copyright: © 2025 Shoa and Friedman. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Doron A Friedman, Advanced Reality Lab (ARL), Herzliya, Tel Aviv District, Israel
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.