- Ingeniería Ambiental, Facultad de Ciencias Agrarias, Universidad Agraria del Ecuador, Guayaquil, Guayas, Ecuador
The rapid expansion of artificial intelligence (AI) poses an increasing dilemma: its enormous energy and water consumption threatens environmental sustainability in a global context of climate crisis and resource scarcity. Although the use of more efficient hardware, renewable energy, and optimization techniques has improved data center efficiency, these solutions primarily focus on infrastructure, leaving unexplored the potential of communication among AI agents themselves. Recent developments in emergent languages show that agents can create autonomous protocols to coordinate more efficiently, reducing computational redundancies and data transmission, with direct implications for lowering the energy and water required for cooling and processing. In this perspective review we argue that optimized communication between agents represents a complementary pathway to align technological advancement with sustainability, while maintaining or even improving system performance. Advancing this field requires designing standardized metrics that integrate performance and environmental footprint, testing these protocols in real-world resource-limited scenarios, and establishing regulatory frameworks that make their impact transparent. This approach could transform the relationship between AI and sustainability, guiding it toward a more resilient and responsible future.
1 Introduction
Artificial intelligence (AI) has transformed numerous sectors, from healthcare to the entertainment industry. However, this progress has brought a significant increase in resource consumption, particularly energy and water. For instance, training large-scale language models can generate CO₂ emissions equivalent to those of several cars over their entire lifetimes (Strubell et al., 2019). Additionally, the data centers hosting these models require substantial amounts of water for cooling, raising concerns about the sustainability of these resources (Mytton, 2021).
In response to these challenges, a central question arises: could optimized communication among AI agents reduce energy and water consumption without compromising system efficiency? The scientific community has proposed various strategies to mitigate AI’s environmental impact, such as algorithm optimization, more efficient hardware, and the implementation of renewable energy in data centers (Patterson et al., 2022). However, these solutions typically focus on improving infrastructure and hardware, without comprehensively addressing communication among the AI agents themselves.
Communication among AI agents is an emerging research area that has shown potential to enhance operational efficiency and reduce resource consumption. Recent studies demonstrate that AI agents can develop their own languages to communicate more efficiently, eliminating redundancies and accelerating information transmission (Peters, 2025). This emergent communication ability could not only improve AI system performance but also contribute to sustainability by reducing the energy and water required for system operation.
Despite advances in this field, significant knowledge gaps remain. Most studies focus on computational efficiency and model accuracy, overlooking the environmental impact of agent-to-agent communication (Yu et al., 2024). Additionally, the lack of common standards and protocols hinders the practical implementation of emergent communication-based solutions.
We argue that fostering and studying AI agents’ own communication languages represents a promising path to reduce AI’s ecological footprint while enhancing speed and efficacy.
This article proposes that autonomous communication among AI agents, using optimized languages, can be an effective pathway to improve energy efficiency and reduce water consumption in AI systems; economic and public policy aspects lie outside its scope. The existing literature on emergent communication will be reviewed, potential sustainability benefits analyzed, and challenges and opportunities for implementing these technologies discussed.
2 Energy consumption and environmental footprint of AI
The expansion of AI has increased the energy consumption and environmental footprint of the data centers that support it. In 2024, the electricity consumption of data centers was estimated at approximately 415 terawatt-hours (TWh), representing around 1.5% of global electricity consumption. This figure is projected to double by 2030, reaching nearly 945 TWh, driven primarily by AI adoption (IEA, 2023). In parallel, water consumption for cooling can reach around 25.5 million liters annually (Mytton, 2021; Farfan and Lohrmann, 2023). In the United States, data centers consumed roughly 4.4% of the country’s electricity in 2023, a share expected to triple by 2028 due to rapid AI growth (Kandemir, 2025). Moreover, data centers’ water footprint is substantial; a medium-sized data center, for example, can consume up to 110 million gallons of water annually for cooling. Globally, data centers are estimated to consume around 560 billion liters of water annually, potentially rising to 1.2 trillion liters by 2030 (Farfan and Lohrmann, 2023). These figures highlight that AI, relying on high-performance data centers, contributes significantly to natural resource consumption. The growing demand for energy and water poses challenges for environmental sustainability, especially in regions where these resources are limited. Considering the environmental impact of AI-supporting infrastructure is essential to mitigate long-term negative effects.
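To make these projections concrete, the annual growth rate implied by the cited 415 TWh (2024) and 945 TWh (2030) estimates can be computed directly. The sketch below is illustrative only and assumes smooth compound growth between the two data points.

```python
# Implied compound annual growth rate (CAGR) of data center electricity
# demand, using the 415 TWh (2024) and 945 TWh (2030) estimates cited above.
start_twh, end_twh = 415.0, 945.0
years = 2030 - 2024

cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"implied growth: {cagr:.1%} per year")  # about 15% per year
```

A sustained growth rate near 15% per year underlines why efficiency measures that merely shave a few percent off consumption are quickly outpaced by demand.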
Developing and adopting AI technologies that are more efficient in energy and water use is a priority, as is promoting policies that encourage sustainability in digital infrastructure. Implementing more efficient cooling systems and utilizing renewable energy in data centers are fundamental steps toward more sustainable AI.
3 Emergent communication among AI agents
Emergent communication refers to the development of agents’ own interaction protocols, allowing them to coordinate in collaborative tasks without relying on human natural language. This can reduce message redundancy and, potentially, the energy consumption associated with computation and data transmission. Recent studies demonstrate that mixed human-agent teams can benefit from interpretable emergent communication: when these teams employ discrete vocabularies and sparse communication, they achieve performance comparable to non-interpretable agents while reducing human cognitive load (Karten et al., 2023).
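As a minimal illustration of how a discrete vocabulary combined with sparse messaging cuts the number of transmitted symbols, consider the following sketch. The vocabulary, state-to-symbol mapping, and change-triggered sending rule are hypothetical simplifications of ours, not the learned protocols of Karten et al. (2023).

```python
VOCAB = ["a", "b", "c", "d"]  # hypothetical small discrete vocabulary

def encode(state: int) -> str:
    """Speaker: deterministic map from an observed state to one symbol."""
    return VOCAB[state % len(VOCAB)]

def transmit(states, sparse=True):
    """Return the symbols actually sent over the channel.

    With sparse=True, a symbol is emitted only when it differs from the
    previous one -- a crude stand-in for learned sparse messaging."""
    sent, last = [], None
    for s in states:
        sym = encode(s)
        if not sparse or sym != last:
            sent.append(sym)
            last = sym
    return sent

trace = [0, 0, 0, 1, 1, 2, 2, 2, 3]
print(len(transmit(trace, sparse=False)), len(transmit(trace, sparse=True)))  # 9 4
```

Fewer symbols per episode means fewer bytes transmitted and fewer cycles spent encoding and decoding, which is the mechanism linking protocol design to energy use.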
Peters et al. (2025) conducted a comprehensive review of emergent language in multi-agent reinforcement learning, identifying discrepancies between computational performance and interpretability of emergent protocols, as well as the lack of standardized metrics for evaluating linguistic properties such as composition, semantics, and pragmatics.
Zhou et al. (2024) demonstrated that agents trained in referential games can generalize to novel numerical concepts through semantically stable and consistent communication, showing the potential of these emergent languages for complex generalization tasks.
While studies agree that emergent communication improves coordination and performance, discrepancies remain regarding interpretability, robustness against untrained agents, and out-of-domain generalization. Karten et al. (2023) show that imposing a discrete vocabulary and sparse messaging enhances interpretability without significant task performance loss, whereas Peters et al. (2025) caution that current metrics do not fully capture fundamental linguistic properties. Zhou et al. (2024) report high generalization, though their experiments occurred in relatively closed, controlled environments, limiting real-world extrapolation.
From an energy efficiency perspective, studies in UAV and edge computing environments indicate that emergent communication can optimize resource consumption. Xu et al. (2025) implemented a multi-agent deep reinforcement learning algorithm that allows UAVs to collaboratively decide which tasks to delegate and where, minimizing latency and computational energy use. This demonstrates that adaptive communication protocols among agents have direct implications for sustainability and resource efficiency.
We argue that while emergent communication offers clear opportunities to improve coordination and efficiency, its practical applicability will depend on designing protocols robust to noisy environments, agent heterogeneity, and bandwidth limitations, as well as developing metrics integrating performance, interpretability, and energy efficiency. Future research should focus on empirically evaluating the full energy consumption of these protocols, designing standards for interoperability among heterogeneous agents, and establishing benchmarks including sustainability metrics as integral to emergent language assessment (see Figure 1).
Figure 1. Methodological framework for evaluating sustainable emergent communication. Created by the authors.
Our proposed integrative framework combines three evaluation layers to quantify the environmental impact of emergent communication protocols, yielding a composite Sustainability Index.
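Since the article does not fix a formula for the composite Sustainability Index, the following sketch shows one plausible aggregation: a weighted sum of task performance and normalized energy and water savings. The function name, weights, and budget parameters are hypothetical illustrations, not a specification of our framework.

```python
def sustainability_index(task_score, energy_kwh, water_liters,
                         energy_budget_kwh, water_budget_liters,
                         weights=(0.5, 0.3, 0.2)):
    """Hypothetical composite index in [0, 1]; higher is better.

    task_score is assumed normalized to [0, 1]; energy and water use are
    normalized against fixed budgets and converted to savings fractions."""
    energy_saving = max(0.0, 1.0 - energy_kwh / energy_budget_kwh)
    water_saving = max(0.0, 1.0 - water_liters / water_budget_liters)
    w_perf, w_energy, w_water = weights
    return w_perf * task_score + w_energy * energy_saving + w_water * water_saving

# A protocol that halves resource use can outrank one that exhausts the
# budget, even at slightly lower task performance:
lean = sustainability_index(0.90, 5.0, 5.0, 10.0, 10.0)
greedy = sustainability_index(0.95, 10.0, 10.0, 10.0, 10.0)
print(lean > greedy)  # True
```

The design choice worth noting is that resource use enters as savings against an explicit budget, so the index rewards protocols that stay under a declared envelope rather than merely those that perform well.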
4 Energy and water implications of large-scale AI expansion
Large-scale AI expansion has critical implications for energy sustainability and water management, as data centers require enormous amounts of electricity and water to operate and maintain efficient cooling systems. Recent studies estimate that the digital infrastructure’s water footprint reaches levels comparable to medium-sized cities, risking further stress in water-scarce regions (Farfan and Lohrmann, 2023; Jegham et al., 2025).
Parallel studies document that electricity consumption in data centers supporting large-scale model training can exceed several hundred megawatts, directly impacting carbon emissions and water use intensity (Masanet et al., 2020).
Although solutions based on renewable energy and more efficient cooling systems have been proposed, a significant gap remains regarding reduced consumption in distributed computing processes and inter-agent communication. Emergent communication among AI systems could indirectly contribute to sustainability by reducing transmission and computation redundancy, thus decreasing energy demand and water use in supporting infrastructures. Jegham et al. (2025) highlight that innovation in energy-efficient hardware and algorithms is essential to mitigate the sector’s environmental footprint, but designing lighter, adaptive communication protocols could serve as a complementary pathway.
We argue that the most significant impact lies in the potential to reduce GPU usage through more efficient emergent protocols, translating into lower heat generation and consequently reduced water cooling requirements. This approach opens a research agenda combining AI, sustainability, and critical resource management, aiming to align technological advancement with global climate mitigation and water security goals (see Table 1).
Table 1. Key features, differences from state-of-the-art, and similarities in emergent communication research.
5 Challenges, opportunities, and future perspectives
The rapid development of AI presents a dual scenario of challenges and opportunities. On one hand, the growth of large-scale models amplifies energy costs and associated emissions, straining international climate mitigation commitments (Masanet et al., 2020). Additionally, the lack of transparency in sustainability metrics and heterogeneity in calculation methodologies makes rigorous comparison of environmental impact across models and providers difficult (Strubell et al., 2019).
However, relevant opportunities emerge. Adoption of renewable energy, redesign of specialized hardware, and implementation of energy-efficient algorithms are promising pathways to reduce AI’s environmental footprint (Jegham et al., 2025). Likewise, approaches such as emergent communication among agents offer possibilities to optimize coordination with lower computational consumption, favoring efficiency and sustainability at large scale (Karten et al., 2023).
Future research is expected to focus on three main axes: (a) development of standardized metrics integrating performance, interpretability, and environmental footprint; (b) experimentation in real-world contexts linking digital sustainability with critical resource management, such as water and energy; and (c) design of international regulatory frameworks guiding transparency in environmental reporting by AI providers. These advances will allow technological progress to align with Sustainable Development Goals and the transition toward resilient, sustainable digital infrastructures. Katal et al. (2022) state that a data center in Mumbai has taken the initiative in building green data centers, demonstrating a reduction in cooling energy consumption of approximately 1.5 million kWh per year, as well as the implementation of water-saving devices that reduced total water requirements by approximately 24%.
Similarly, Panwar et al. (2022), in their systematic review, indicate that predicting host CPU utilization, migrating virtual machines, and optimizing their placement enable efficient resource management when various consolidation techniques are applied. As a result, significant reductions in energy consumption have been achieved in cloud data centers, with reported decreases of 5.4–90% for heuristics, 7.68–97% for metaheuristics, 1.6–88.5% for machine learning methods, and 5.4–84% for statistical methods.
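The consolidation idea behind these savings can be illustrated with a first-fit-decreasing placement heuristic, one of the simplest members of the heuristic family surveyed by Panwar et al. (2022). The loads, capacity, and function name below are hypothetical.

```python
def consolidate(vm_loads, host_capacity=1.0):
    """First-fit-decreasing placement: pack VM CPU loads onto as few
    hosts as possible, so the remaining idle hosts can be powered down."""
    hosts = []  # residual capacity of each active host
    for load in sorted(vm_loads, reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] -= load  # place VM on an existing host
                break
        else:
            hosts.append(host_capacity - load)  # open a new host
    return len(hosts)

loads = [0.5, 0.5, 0.3, 0.3, 0.2]  # hypothetical normalized CPU demands
print(len(loads), "hosts without consolidation,", consolidate(loads), "after")
```

Every host removed from the active pool saves both its direct electricity draw and the cooling overhead attached to it, which is how placement optimization translates into the reported percentage reductions.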
6 Discussion
The analysis primarily focuses on recent studies exploring the intersection of emergent communication and sustainability, limiting the scope to a still-nascent subset of research. Findings are largely theoretical and projected, with no large-scale empirical data validating the environmental benefits of emergent communication. Additionally, the lack of access to standardized water and energy footprint metrics prevents certainty about the degree of impact these innovations could have on real infrastructures. Nonetheless, this study suggests that emergent communication opens a promising pathway deserving attention in research and regulatory agendas.
The growing intersection of AI, sustainability, and emergent agent communication positions the discipline at a turning point. While recent research consolidates evidence on the high energy and water costs associated with large-scale model training and deployment (Masanet et al., 2020; Farfan and Lohrmann, 2023), promising innovations aimed at mitigating these impacts also emerge.
This perspective argues that developing emergent languages among AI agents, by optimizing coordination and reducing computational redundancies, could become a complementary pathway to align technological progress with environmental sustainability. This perspective recognizes that AI systems not only generate efficiency benefits in applied domains but also necessitate rethinking digital infrastructure through an environmentally responsible lens.
Three key points emerge from the literature. First, emergent communication among agents, when oriented toward interpretable and efficient protocols, reduces cognitive and processing load, showing significant potential in multi-agent scenarios (Karten et al., 2023; Zhou et al., 2024). Second, data centers and AI training processes remain responsible for disproportionate energy and water consumption, with global impacts rivaling intensive industrial sectors (Masanet et al., 2020; Farfan and Lohrmann, 2023). Third, while mitigation strategies such as renewable energy use or hardware optimization have been proposed (Jegham et al., 2025), standardized metrics to comprehensively evaluate AI’s environmental footprint are still lacking (Strubell et al., 2019). Integrating these research lines reveals a structural tension: AI’s exponential growth contrasts with the absence of robust mechanisms to ensure sustainability and equity in its global adoption.
These dynamics can be understood through two main mechanisms. First, the computational architecture of large-scale models involves inherently GPU-intensive processes, generating residual heat and continuous demand for water cooling. Second, international regulatory frameworks lag behind technological progress, leaving tech companies without uniform incentives to prioritize sustainability (Strubell et al., 2019; Jegham et al., 2025).
The “rebound effect” in AI, where efficiency gains are offset by increases in scale or workload, poses a critical challenge for sustainable development. While emergent communication cannot fully eliminate this systemic tendency, it provides mechanisms to mitigate its magnitude by structurally reducing redundancy in agent interactions and improving coordination efficiency. By constraining vocabularies, enforcing sparse messaging, and aligning reward functions with energy-aware objectives (Karten et al., 2023), emergent languages can reduce unnecessary computational cycles, thereby lowering heat generation and subsequent water-cooling demands (Peters, 2025). Emergent communication should therefore be considered a complementary mitigation strategy within a broader set of responsible AI practices.

To guide the design of initial experimental setups on emergent communication among AI agents, we propose a methodological framework grounded in three complementary principles. First, adopt standardized metrics that integrate computational performance, interpretability, and environmental footprint, including measures such as energy per episode and performance-per-energy ratios, to enable meaningful comparisons under homogeneous conditions. Second, implement experiments in representative and scalable scenarios, ranging from controlled simulations to distributed deployments with heterogeneous agents and resource constraints, to assess the practical feasibility of emergent communication protocols. Third, ensure environmental transparency through the disclosure of computational footprints and the incorporation of indicators such as PUE and WUE, justifying the integration of energy and water measurement instrumentation from the outset.
Together, these guidelines inform task selection, experimental instrumentation (such as power meters and communication traces), statistical analysis, and interpretation of results, providing a rigorous foundation for evaluating whether emergent languages can reduce environmental impact without compromising system performance.
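The indicators named above have standard definitions that are straightforward to compute: PUE is total facility energy divided by IT equipment energy, and WUE is liters of site water per kWh of IT energy. A minimal sketch follows; the function names and example figures are ours, not part of any standard API.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: facility energy / IT energy (>= 1.0)."""
    return total_facility_kwh / it_equipment_kwh

def wue(site_water_liters: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water per kWh of IT energy."""
    return site_water_liters / it_equipment_kwh

def performance_per_energy(task_reward: float, energy_kwh: float) -> float:
    """Performance-per-energy ratio for one experiment or episode."""
    return task_reward / energy_kwh

# Hypothetical annual figures for a mid-sized facility:
print(pue(1.5e6, 1.0e6))                   # 1.5
print(wue(1.8e6, 1.0e6))                   # 1.8 L/kWh
print(performance_per_energy(90.0, 45.0))  # 2.0
```

Reporting these three ratios alongside task accuracy would allow two emergent-communication experiments run in different facilities to be compared on a common footing.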
Ultimately, the relevance of this debate transcends the technical domain, situating itself at the center of discussions on energy transition, climate justice, and digital equity. Globally, AI cannot be assessed solely through accuracy or speed metrics but through its capacity to harmoniously integrate with planetary sustainability goals. Achieving sustainable AI requires cultural and political shifts, with academia, tech companies, and regulators working in synergy. Only then can the development of emergent languages and large-scale architectures transform, rather than reproduce, the socio-environmental tensions of our time.
7 Conclusion
Emergent communication among AI agents represents a promising pathway to improve system operational efficiency and reduce environmental impact. By enabling agents to develop their own languages, redundancy in information transmission is minimized, optimizing computational and energy resource usage. This dynamic not only increases the speed and accuracy of multi-agent interactions but also contributes to more sustainable water and energy management in high-performance infrastructures. Potential benefits extend across sectors where AI is deployed at scale, from data centers to autonomous control systems, highlighting the strategic relevance of considering emergent communication in designing efficient and environmentally responsible architectures.
Optimized communication among AI agents could reduce energy and water consumption without compromising efficiency, but current evidence is largely theoretical. Studies confirm redundancy reduction with maintained accuracy, and work on UAV edge computing demonstrates energy savings. However, this review concludes that large-scale empirical validation is the critical bottleneck. Future work must adopt standardized metrics (performance-per-energy, WUE) and deploy protocols in real data centers to confirm these benefits.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding authors.
Author contributions
JF: Formal analysis, Investigation, Methodology, Supervision, Validation, Writing – original draft, Writing – review & editing. DA-J: Conceptualization, Investigation, Writing – review & editing.
Funding
The author(s) declared that financial support was received for this work and/or its publication. Funding for this publication was provided by the Universidad Agraria del Ecuador, which covered the Article Processing Charges (APC).
Acknowledgments
We would like to express our gratitude to the academic community from both universities for their valuable support and collaboration throughout this work.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that Generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Farfan, J., and Lohrmann, A. (2023). Gone with the clouds: estimating the electricity and water footprint of digital data services in Europe. Energy Convers. Manag. 290:117225. doi: 10.1016/j.enconman.2023.117225
IEA (2023). Energy demand from AI – Energy and AI – Analysis. Available online at: https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai? (Accessed September 20, 2025).
Jegham, N., Abdelatti, M., Elmoubarki, L., and Hendawi, A. (2025). How hungry is AI? Benchmarking energy, water, and carbon footprint of LLM inference. arXiv, Cornell University. doi: 10.48550/arXiv.2505.09598
Kandemir, M. (2025). AI's energy demand: challenges and solutions for a sustainable future. Available online at: https://iee.psu.edu/news/blog/why-ai-uses-so-much-energy-and-what-we-can-do-about-it? (Accessed September 20, 2025).
Karten, S., Tucker, M., Li, H., Kailas, S., Lewis, M., and Sycara, K. (2023). Interpretable learned emergent communication for human-agent teams. IEEE Trans. Cogn. Dev. Syst. 15, 1801–1811. doi: 10.1109/TCDS.2023.3236599
Katal, A., Dahiya, S., and Choudhury, T. (2022). Energy efficiency in cloud computing data centers: a survey on software technologies. Clust. Comput. 26, 1845–1875. doi: 10.1007/s10586-022-03713-0
Masanet, E., Shehabi, A., Lei, N., Smith, S., and Koomey, J. (2020). Recalibrating global data center energy-use estimates. Science 367, 984–986. doi: 10.1126/science.aba3758
Mytton, D. (2021). Data Centre water consumption. NPJ Clean Water 4:11. doi: 10.1038/s41545-021-00101-w
Panwar, S. S., Rauthan, M. M. S., and Barthwal, V. (2022). A systematic review on effective energy utilization management strategies in cloud data centers. J. Cloud Comput. 11:95. doi: 10.1186/S13677-022-00368-5
Patterson, D., Gonzalez, J., Holzle, U., Le, Q., Liang, C., Munguia, L.-M., et al. (2022). The carbon footprint of machine learning training will plateau, then shrink. Computer 55, 18–28. doi: 10.1109/MC.2022.3148714
Peters, J. (2025). “Humanlike Emergent Language in Multi-Agent Systems” in Proceedings of the 24th international conference on autonomous agents and multiagent systems (Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems), 2971–2973.
Peters, J., de Waubert Puiseau, C., Tercan, H., Gopikrishnan, A., de Lucas Carvalho, G. A., Bitter, C., et al. (2025). Emergent language: a survey and taxonomy. Auton. Agent. Multi. Agent. Syst. 39, 1–73. doi: 10.1007/s10458-025-09691-y
Strubell, E., Ganesh, A., and Mccallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th annual meeting of the Association for Computational Linguistics, 3645–3650.
Xu, S., Liu, Q., Gong, C., and Wen, X. (2025). Energy-efficient multi-agent deep reinforcement learning task offloading and resource allocation for UAV edge computing. Sensors 25:3403. doi: 10.3390/s25113403
Yu, Y., Wang, J., Liu, Y., Yu, P., Wang, D., Zheng, P., et al. (2024). Revisit the environmental impact of artificial intelligence: the overlooked carbon emission source? Front. Environ. Sci. Eng. 18:158. doi: 10.1007/s11783-024-1918-y
Keywords: artificial intelligence, emergent communication, energy efficiency, multi-agent systems, water sustainability
Citation: Facuy Delgado J and Arcos-Jacome D (2026) Emergent language among AI agents: a path toward energy efficiency and water conservation. Front. Sustain. 6:1717425. doi: 10.3389/frsus.2025.1717425
Edited by:
Bruno Fabiano, University of Genoa, Italy
Reviewed by:
Borui Cui, Oak Ridge National Laboratory (DOE), United States
Copyright © 2026 Facuy Delgado and Arcos-Jacome. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Jussen Facuy Delgado, jfacuy@uagraria.edu.ec; Diego Arcos-Jacome, darcos@uagraria.edu.ec
†ORCID: Jussen Facuy Delgado, orcid.org/0000-0003-1138-4823
Diego Arcos-Jacome, orcid.org/0000-0002-7741-0978