
REVIEW article

Front. Digit. Health, 12 December 2025

Sec. Connected Health

Volume 7 - 2025 | https://doi.org/10.3389/fdgth.2025.1690489

Cognitive frontiers: neurotechnology and global internet governance

  • Blavatnik School of Government, University of Oxford, Oxford, United Kingdom

This article explores the largely uncharted intersection of neurotechnology and Internet governance on the international policy agenda. Neurotechnologies encompass a broad spectrum of functions and applications, from the direct recording or alteration of brain activity to the analysis of emotions and mental states through data collected from wearable devices, applications, and AI-based tools. Innovations such as cochlear implants, sleep optimisation technologies, and immersive educational tools are already available, and significant investments are being made in the next generation of devices that blur the lines between mind, machine, and action, posing unprecedented challenges. While some international organisations have begun addressing the ethical and human rights implications of neurotechnology, there remains significant fragmentation and a lack of clarity regarding its integration into Internet governance. Critical issues related to neural infrastructure, standards, access to technologies, and protections for neural data have been overlooked in the 2024 Global Digital Compact and might remain off the agenda for the upcoming 20th review of the World Summit on the Information Society. This contribution underscores the urgent need to analyse the profound implications of neurotechnology, advocating for proactive measures that align with progress made across Internet governance fora, with respect to legal safeguards, multistakeholder consultations and institutional pillars.

1 The internet to come: a full-fledged physical-virtual integration

What was once purely speculative—the ability to monitor brain waves, transfer emotions and memories, whether true or false (1), and influence actions in real time—is now closer than ever. By 2030, a seamlessly interconnected, intelligent, and immersive world built on the next-generation Internet (sometimes referred to as Web 4.0) will enable a fusion of digital and physical objects and environments. It will facilitate more advanced human-machine interactions and open new avenues for innovation, growth and sustainable development. From limb rehabilitation to sleep optimisation, these cutting-edge technologies are set to become integral to virtual worlds, spanning domains such as health, education, and entertainment, but also the judiciary (2) and the military (3, 4).

Understanding a person's thoughts, preferences, and emotions enables a deeply personalised and responsive experience in virtual worlds, but can also cause unprecedented harm to individuals, communities and societies more broadly. Due to the high sensitivity of neural processes, neurotechnologies strike at the core of mental self-determination (5) and the very origin of the self. For these reasons, the journey towards immersive world(s) must consider how existing Internet governance frameworks can address neurotechnologies as a key element of the post-2030 agenda. The neurotechnologies available today perform three main functions: 1) brain imaging (reading brain activation patterns); 2) neurostimulation (covering short-term interventions in neural processes); and 3) neuromodulation (comprising processes that induce longer-term change in brain activity, in particular for medical uses). Advances in the field of neuroscience are also making possible increasingly sophisticated ways to interface directly with neural tissue, opening avenues for both therapeutic and augmentative applications (6).

Experimentation with implantable technologies—such as brain-computer interfaces (BCIs) which link the human brain with computers or external devices (7)—shows promise in supporting bidirectional communication and control, with applications in domains as diverse as telemedicine and gaming (8). Other technical tools designed to perform specific functions (such as cochlear implants for restoring hearing) are increasingly complemented by biosensor technologies that infer sensory, motor, and mental states in a non-invasive way. Commercial devices available today include eye-tracking, video oculography, voice recognition and analysis, sleep movement monitoring, and facial emotion recognition systems. Some wearable devices—such as smart earbuds or XR glasses—can already monitor neural signals or track physiological responses in real time (9).

Research on the long-term impacts of commercial neurotech applications is currently lacking (10), as are scientific studies on brain-focused wellness products (11). Neuroenhancement and neuromarketing concerns have only just started to be addressed (12–14). Whether aimed at improving cognitive functions or influencing consumer behaviour, interventions that affect neural processes could cause unpredictable harm to the nervous system and exacerbate individual and societal inequalities, potentially undermining the essence of what it means to be human. Particularly concerning is the fact that these developments take place in a largely unregulated environment, without comprehensive safety and security standards for neurotechnology hardware, software or data in commercial use. Dual-use considerations aside, brain technologies pose new risks to individuals, communities and societies more broadly.

2 Current trends in neurotechnology

The field of neurotechnology is advancing quickly, driven by increased specialisation and significant investments from both the public and private sectors, indicating a medium-term market shift. While governments have mainly stimulated neuroscience, with an estimated $6 billion allocated to national brain-research initiatives in the past decade, direct private investments in neurotechnology companies have exceeded $33.2 billion between 2010 and 2020 (15). Consequently, the neurotechnology devices market is expected to grow to $24.2 billion by 2027 (16).

According to a 2023 UNESCO report, the surge in funding has resulted in a 35-fold increase in neuroscience publications and a 20-fold rise in innovations, as measured by patents (15). However, 80% of high-impact publications come from just 10 countries, and six countries are responsible for 87% of neurotechnology patents. The United States alone accounts for nearly half of all global patent applications (47%). Other major contributors are the Republic of Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). Together, these six countries hold 87% of neurotechnology patents filed between 2000 and 2020 in the world's five largest Intellectual Property offices (15).

In China and the Republic of Korea, public sector universities and institutions hold a significant number of patents. However, the commercialisation of neurotech is mostly driven by “Big tech” companies in the U.S. Significant investments in the field come from Meta, Neuralink, Alphabet and Kernel (17). The foremost neurotech innovations are primarily patented by multinational tech conglomerates, with IBM (US), Ping An Technology (China), Fujitsu (Japan), Microsoft (US), and Samsung (Republic of Korea) leading the rankings. The analysis of patent clusters shows that key areas of focus include multimodal neuromodulation, seizure prediction, neuromorphic computing, and brain-computer interfaces.

These emerging patent trends show that neurotechnology is spilling beyond clinical settings, paving the way for everyday, non-medical applications. An illustrative example is the consumer-grade electroencephalogram (EEG) headset (18), widely available as meditation, stress measuring or gaming aid. Likewise, major corporations are investing heavily in BCIs that let users operate handheld devices directly with their neural activity (19). On the near horizon, external interfaces for gaming, equipment control and memory and concentration enhancement will continue to be rolled out. While thought-sharing via implanted interfaces may take decades to develop, speech decoders (20) and “typing by the brain” (21) appear much nearer.

The convergence with AI has accelerated neurotechnology advances. The interest in emotion AI (22), in particular, has been on the rise, with applications extending across healthcare, education, marketing, entertainment, and smart home systems. Neuromarketing, as a subfield of advertising that leverages neural data to predict consumer behaviour, has also gained considerable traction. Immersive environments are geared towards a more granular collection of personal data, including brain wave patterns (23). Together, these trends suggest that the groundwork for a post-2030 future is being firmly laid around brain data, promising both groundbreaking innovations and significant societal challenges, as discussed below.

3 Internet governance and neurotechnology

Leaping from the lab into the limelight, neurotechnologies raise new questions about ethical concerns, risks and fundamental rights protection. No binding international framework on neurotechnology exists yet: despite the progress made in identifying the lacunae, the mind-technology intersection remains unprotected (24). Until now, the discussion has primarily centered on ethical and legal aspects, with more emphasis on the “neuro” component (25) and less on the technological aspects. Convergence with AI and wearable devices further confounds the governance discussion, straddling different sectors, applications and regulatory mandates.

Less-invasive tools have made neurotechnology safer and more accessible. For medical uses, existing regulatory structures address many core aspects intersecting with health research and clinical practice. Institutional review boards, FDA/EMA-type approval pathways, and emerging voluntary commitments impose safety, efficacy, and informed-consent requirements for devices that record, stimulate, or decode brain activity. Good Clinical Practice standards and post-market surveillance mandates also play an important role in ensuring neurotech can meet the same bar as traditional medical devices.

Despite these foundations, significant gaps remain. Current medical-device regulations were drafted before the proliferation of closed-loop neuromodulators, consumer-grade EEG wearables, and AI-driven neurodiagnostics, leaving unclear how to assess long-term neural plasticity effects, algorithmic bias, or cross-border data-sharing for brain signals (26). A 2025 scoping review of clinical studies involving closed-loop neurotechnologies found that ethical considerations—such as autonomy, data privacy, and equitable access—are rarely addressed in trial protocols, highlighting a disconnect between regulatory compliance and deeper ethical reflection (26). Moreover, there is no universally accepted taxonomy for “neuro-enhancement” vs. therapeutic use, which hampers consistent risk classification (27). Finally, existing oversight lacks coordinated mechanisms for real-time monitoring of off-label applications such as neuromarketing or cognitive-performance boosters, and the legal status of brain-derived data remains ambiguous (28). These gaps are especially problematic when brain data is commercialised, amplifying all of the risks mentioned earlier. Closing these gaps therefore demands more than medical-sector regulation; a comprehensive response will need to combine dedicated neurotechnology frameworks, harmonised international standards, and rigorous scrutiny within broader Internet-governance discussions—an effort that can be supported by existing institutions.

Until now, international discussions on governing neurotechnologies have taken place in silos, disconnected from broader Internet governance debates. In the emerging neurotech field, prominent actors have included intergovernmental organisations like the OECD and UNESCO, professional bodies such as the IEEE, international nonprofits like the Neurorights Foundation and Neuroethics Society, and private companies like Neuralink and Kernel. Wider alliances of policymakers, academics, civil society and citizens are increasingly joining the conversation, as the focus is gradually shifting from responsible innovation towards public governance and oversight of neurotechnologies. This shift offers a timely opportunity to bridge the silos and integrate neurotech into the future Internet and sustainable development agenda.

Significant progress on ethical and legal dimensions has been made since the early 2000s, when ambitious government-funded digital brain research programs first started. The EU Human Brain Project and the US Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative were launched in 2013, followed by Japan's Brain/MINDS project in 2014, the Korean Brain Initiative in 2016 and the Chinese Brain Science and Brain-inspired Intelligence Project in 2017. Other nation-wide initiatives focused on brain research include the Australian Brain Alliance (since 2016) and the Canadian Brain Research Strategy (2017). More recently, in 2020, the Iniciativa Cerebro Latinoamericana (LATBrain) was established, comprising ten countries from Latin America and the Caribbean.

These projects created momentum for articulating key concerns in relation to developing neurotechnologies. These concerns can be understood through three separate lenses: ethical considerations, risk-focused analysis, and rights-oriented perspectives. First, the ethical concerns dimension examines the moral implications of emerging neurotechnologies, questioning how they align with values such as autonomy, beneficence, and justice (5, 29). Second, the risk-based dimension focuses on identifying, quantifying, and managing the concrete hazards—ranging from physical safety and long-term neural plasticity effects to algorithmic bias and data-security vulnerabilities—that these devices may introduce. Finally, the rights-based dimension explores the legal and normative claims individuals can assert over their neural data, including emerging concepts like a “right to mental privacy” and the broader implications for cross-border data sharing and regulatory oversight (30, 31). Together, these three lenses provide a comprehensive framework for analysing the challenges posed by modern brain-interface technologies.

Because it grants unprecedented access to the inner workings of the human mind, neurotech challenges conventional notions of privacy, autonomy, and dignity: brain-derived data can reveal thoughts, emotions, and health conditions that are often beyond a person's conscious control. The prospect of influencing or augmenting cognition intensifies debates about mental integrity, consent, equitable distribution and social effects. Moreover, the deployment of neural devices in everyday contexts—workplaces, education, and consumer markets—poses multifaceted risks, from surveillance to coercion, exploitation, and discrimination. These risks could jeopardise existing fundamental protections such as the right to privacy (ICCPR Art 17), the right to freedom of thought (ICCPR Art 18), the right to non-discrimination (UDHR Arts 2 and 25), disability rights under the CRPD, children's rights under the Convention on the Rights of the Child, or the right to personal security. As the Australian Human Rights Commission notes, the absence of clear legal regimes for brain-derived data amplifies these threats, leaving individuals exposed to surveillance, health or developmental harms, and a loss of agency over their own cognition (23).

As the fields of neuroethics and neurolaw have expanded, advocacy and policy responses have grown at various levels (25). Internationally and regionally, a corpus of soft-law instruments has begun to take shape, while individual states have started enacting formal legislative measures. Since 2020, two specific efforts have gained traction: (1) the development of ethical principles and (2) alignment with human rights.

The first international soft law instrument was adopted in 2019. The OECD Recommendation on Responsible Innovation in Neurotechnology—to which 39 countries have adhered—outlines nine principles (see Table 1 below). More recently, in 2023, the Organization of American States published the Inter-American Declaration of Principles regarding neuroscience, neurotechnologies, and human rights (32), which lists ten principles and builds in transparent governance, oversight and access to effective protection and remedies. In the Council of the European Union, telecommunications and digital ministers signed the first European declaration on neurotechnology in October 2023, committing to a human-centric and rights-oriented approach, while strengthening the EU's competitiveness in the field and its open strategic autonomy in a digital world (33).


Table 1. Three sets of commitments issued between 2019 and 2023.

In November 2025, UNESCO adopted the first global standard-setting instrument on the ethics of neurotechnology, based on the work of an expert group tasked to create a framework addressing neurotechnology's challenges and benefits. The Recommendation aims to “bring a globally accepted normative instrument that focuses not only on the articulation of values and principles, but also on their practical realisation, through concrete policy recommendations and implementation plans that will be impactful for the global community” (10).

Closely intertwined with ethical approaches, human rights discussions have been divided between updating interpretations of existing rights and creating new ones. Proposals for novel rights aimed at safeguarding the brain, referred to as “neuro-rights”, have surged since 2017 (34–36). By now, there is agreement in the philosophical-legal scholarship that “mental privacy, mental integrity, and cognitive liberty […] need to be considered in legal response to advances in neurotechnology” (37). Other proposed neuro-rights include: (1) the right to identity, encompassing control over both physical and mental integrity; (2) the right to agency, ensuring freedom of thought and the ability to choose one's actions; (3) the right to fair access to mental augmentation, ensuring equitable distribution of neurotechnology's benefits for enhancing sensory and cognitive abilities; and (4) the right to protection from algorithmic bias, ensuring technologies remain free from embedded prejudices (38).

This approach has influenced national-level legislation and initiatives, where regulation has been emerging. Chile offers a paradigmatic example, having pioneered a constitutional amendment in 2021 to enshrine neurorights by protecting “cerebral activity and its data” (39), with the Senate also approving a neuroprotection bill at the same time, establishing the rights to personal identity, free will and mental privacy. In August 2023, Chile's Supreme Court upheld these neurorights in a landmark decision by ordering the neurotech company Emotiv to delete all brain data collected from former Senator Guido Girardi. In the United States, the state of Colorado amended its Privacy Act in 2024 to cover “data generated by the technological processing, measurement, or analysis of an individual's biological, genetic, biochemical, physiological, or neural properties, compositions, or activities or of an individual's body or bodily functions” (40). Despite this growing attention to legal enshrinement, Magee et al. (31) note that most mental-privacy laws define “neural data” very narrowly, covering only information taken straight from the nervous system. They often leave out other kinds of cognitive biometrics—like heart-rate variability, eye-tracking results, or behaviour-based patterns—that come from non-neural sources.

Alternative proposals suggest that existing national and international legal systems already safeguard freedoms such as consent, equality, and privacy—concepts that neuro-rights claim to enhance. Internationally, the Council of Europe (41) and UNESCO's International Bioethics Committee (42) have independently determined that rather than creating new human rights, existing human rights laws and regulations should be adapted to address the specific issues posed by neurotechnology. Different civil society groups and human rights experts have issued similar opinions (43, 44).

Around the world, governments have started considering national approaches to neurotechnology governance. Apart from the constitutional reform in Chile, France issued a Charter for the Responsible Development of Neurotechnologies in 2022 (45). Elsewhere, countries like the UK are exploring under what circumstances neural data might be classified as a special category of data under existing data protection frameworks, such as the UK's GDPR (46). Other jurisdictions, such as Spain, have specifically targeted technological convergence. Neurotechnology research is one of the priorities of the Spanish National Strategy for Artificial Intelligence, and a new National Center on Neurotechnologies has recently been launched. The 2021 Charter of Digital Rights (47) included neurorights as part of citizens' rights for the digital era.

The diverse range of international, national, and regional initiatives on neurotechnology presents a complex picture. While the specific opportunities and challenges of neurotechnology are well-documented in international discussions, there is less clarity on protections and path(s) forward.

4 The intersection of neurotech and internet governance

The ethical, risk-based, and rights-based dimensions of neurotechnology reflect contemporary Internet governance discussions that have evolved through multilateral and multistakeholder discussions since the 1990s and for which institutional structures are already in place. Despite this established institutional base, engagement with the current Internet-governance agenda remains sparse, limiting cross-sector dialogue and coordinated policy development. The most striking example comes from the UN Secretary-General's 2021 “Our Common Agenda” report, which identified neurotechnology as a critical frontier issue needing further clarification (48). Subsequently, the UN Human Rights Council has requested a report on neurotechnologies and their potential effects on human rights, whose final version was presented to the Council in March 2025 (49). Ahead of the UN Summit of the Future, however, the work stream on the Global Digital Compact and its agreed version (50) have not included a single reference to neurotechnology, despite wider engagement with emerging technologies in the text (51).

The incorporation of neurotechnologies into the digital ecosystem introduces novel variables for Internet-governance deliberations—most notably the need to accommodate ultra-low-latency communication protocols, secure end-to-end encryption for brain-derived data streams, and interoperable standards that allow heterogeneous devices to communicate without compromising safety or privacy. Yet the discipline of Internet governance brings a mature analytical toolkit that is directly applicable to these challenges. Decades of scrutiny concerning the distribution of power among state actors, private corporations, civil society, and technical communities have yielded robust multistakeholder processes, normative frameworks, and institutional mechanisms, from technical bodies to the UN-led Internet Governance Forum (52–55). These structures provide a strong basis to mediate the complex policy questions now arising around neurotechnology, such as who should define technical specifications for neural-data transmission, how equitable access to neuro-digital infrastructure can be ensured across socioeconomic and geographic boundaries, and what accountability regimes are appropriate for entities that collect, aggregate, or repurpose neural data for commercial or governmental purposes. Moreover, the convergence of neurotechnology with immersive virtual environments—augmented reality, virtual reality, and mixed-reality platforms—magnifies the urgency of synchronising governance efforts.

At the nexus of Internet governance and emerging neurotechnologies, two primary axes of convergence emerge: (1) infrastructure, standards, and access—the technical backbone that supports the routing, storage, and computation of neural-signal traffic, with its access and control points; and (2) neural data—the raw and brain-derived information captured by brain-computer interfaces, closed-loop neuromodulators, and consumer-grade wearables. Both are discussed in more detail below.

4.1 Infrastructure, standards and access

In Web 4.0, many neurotechnologies are expected to operate as part of complex wireless ecosystems, which can combine unimpeded movement with higher levels of network connectivity to allow for faster processing of neural data. The underlying infrastructure might be part of the global Internet (operating on existing networks) or distinct and specific to virtual worlds. Significant investments from private companies have targeted the creation of “enabling” platforms (56) for neurodevices, equivalent to operating systems for computers or mobile phones. The dual-use nature of neurotechnologies could also incentivise the separation of infrastructure or the unilateral establishment of rules, which is why it is essential to involve technologists, clinicians, ethicists, regulators, and end-users in nascent policy discussions.

In closed-loop neuromodulation and BCI control loops, ultra-low-delay communication pathways are needed. In more generic uses, the heterogeneity of devices—from implants to consumer-grade wearables—necessitates interoperable protocol stacks and standardised data schemas that can be universally interpreted across manufacturers and jurisdictions. In the absence of interoperability standards and cybersecurity protocols, neurotechnologies pose important challenges to the Internet governance ecosystem. The evolution of the Internet has already been hindered by unequal participation in technology design and security issues (54). Therefore, international discussions on neurotechnology need to consider the value of non-proprietary solutions and standardisation from the outset. These technologies have global consequences, making it essential for international standardisation efforts to be participatory and inclusive of perspectives from various groups of stakeholders around the globe.

Access issues are equally important. In an already uneven landscape with R&D concentrated in a few geographic regions and driven by technologically-advanced countries, the distribution of benefits comes into question. Adding to this, the significant concentration of AI computing power and manufacturing resources in the hands of a few tech giants already structures the market for neurotech. Geopolitical interests and trade wars can also affect the development of the infrastructure, limiting the choices available to middle-income and lower-income countries and regions. Access to technological and medical infrastructure is especially critical in the least developed countries and small island developing states, where communities experience significant digital divides. It is crucial to ensure equal access to the infrastructure and to the latest scientific and technological knowledge.

The standardisation of neurotechnologies is uneven and incomplete, and their integration within broader Internet systems is not adequately covered by current efforts. While safety, security and privacy are recognised as top priorities, there are significant limitations to addressing these issues using standardisation. These include a lack of agreed-upon terminology, limited community engagement, insufficient standards for data sharing, limited reporting on neurotech developments, and the need to specify complementary standards that scale from consumer to clinical applications (57). Alignment with existing frameworks on emerging technologies is also much needed in order to address wider societal and environmental implications. This would ensure safe implementation techniques, adequate security layers and trust in the technology (58). This approach should also integrate product lifecycle and disposal considerations from the outset. Although not-for-profit and professional organisations have started to advocate for a “Technocratic Oath” (59) or an “ethical-by-design” approach (60), standard-setting organisations—including Internet-related ones—are not consistently involved in testing and securing the networks reaching the brain.

Cutting across infrastructure, standards and access issues, sustainability considerations need to be integrated into governance frameworks from the start. Neurotech uses critical materials and device components resulting from processes that are detrimental to the environment and create toxic waste (42). Standards for environmental sustainability introduced early can help reduce their harmful impact, as various international commitments point to (10, 61). As the science around technological convergence and biodegradability progresses, there is an opportunity to leverage technological leapfrogging alongside sustainable growth, particularly in resource-constrained settings. In the near future, business models also need to change, shifting start-ups' focus from quick gains and short-term objectives towards the longer-term vision and plans required for neural interface work (17).

4.2 Neural data and adequate protections

UNESCO's 2024 draft Recommendation on the Ethics of Neurotechnology distinguishes between neural and cognitive biometric data, but treats them in tandem as “uniquely sensitive because they provide deep insights into the pre-behavioural processes that underpin our mental states and cognitive functions” (10). While the former refers to brain activity patterns, thoughts and memories, cognitive biometric data comprises all other inferences about mental states (often AI-enabled) derived from devices and biosensors. Both types of data can influence neural processes in ways that are unanticipated or undesirable, affecting individual mental states and potentially transforming their societal interactions. Discussions around “special category of data” protections have been interlinked with the privacy risks inherent in capturing, storing, and processing neural signals, regardless of whether they originate from medical devices or consumer-grade wearables. Similarly, equitable access to (longitudinal) neurodata could be complicated by data sharing practices (61), particularly when it comes to cultural differences and the diversity of governance systems around the world.

Privacy has been at the core of Internet governance debates for at least two decades (62–64), evolving from voluntary commitments to a legal acquis in the form of the EU General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), in force since 2018 and 2020, respectively. These regulations provide baseline privacy and data security protections, mandating consent, data minimisation, and purpose limitation, and codifying principles such as privacy-by-design and encryption. Yet, such legal frameworks are typically broad and flexible and do not account for unintended uses of neural and cognitive biometric data. Recent reports (65) and academic studies (31) show that all the major firms in the BCI, extended reality, and fitness-wearable sectors routinely gather cognitive biometric data and claim broad rights to retain the neural data they collect, often under vague or permissive policies.

The concerns about lax industry practices around the collection, storage, use, and sale of cognitive biometric data are amplified by the fact that many privacy regimes treat de-identified or aggregated data as non-personal, allowing companies to sidestep consent requirements. While some firms claim to share only “aggregate” data for vaguely described (research) purposes, others publicly assert that they do not sell users' personal data, but reserve the right to leverage the same data (including de-identified or aggregated sets) for targeted advertising. However, neural and cognitive biometric data are more difficult to anonymise, increasing the potential for identification (66) beyond the initial context of data collection. From a cybersecurity perspective, misuse might include “hacking the brain” (67) to cause disruptions, as well as new forms of data tampering and user impersonation in virtual worlds.

Related security and surveillance concerns include questions about who collects the data, for how long, where they are stored, and the local regulations governing companies' obligations to share data with the host country. Vulnerable populations, as well as those in vulnerable (mental) states, require additional safeguards. For example, the UN Committee on the Rights of the Child argued that practices relying on “neuromarketing, emotional analytics, immersive advertising and advertising in virtual and augmented reality environments to promote products, applications and services should [also] be prohibited from engagement directly or indirectly with children” (68). In this regard, the brain's development and its plasticity require special protections, especially considering the potential to influence life chances and future opportunities. A life-cycle approach to neural data and related technologies is thus all the more important.

Consequently, there have been two main proposals for addressing neural data: carving out a separate legal right to mental privacy (as discussed above) or integrating adequate protections within existing data-protection regimes. The digital rights agenda in Internet governance serves as a cautionary tale: despite being on the international agenda for over a decade, its interpretation continues to evolve with emerging technologies, leading to very different levels of protection across the globe. For Susser and Cabrera (69), the privacy stakes raised by neurotechnologies are comparable to those posed by other advanced data-collection tools, such as genomic sequencing or online surveillance, and are best addressed as part of existing legislation by adding context-appropriate safeguards, transparency, and accountability. Lessons from Internet governance show that robust oversight takes decades to build and cannot be achieved by self-regulation alone (70, 71). Some of the existing mechanisms for governing the Internet, such as standard-setting bodies that codify protections, transparency watchdogs, and multi-stakeholder forums that feed into policymaking, could support the development of a more coherent, globally interoperable approach to neurotechnology safeguards.

5 The opportunity to govern neurotechnologies

The nascent national and regional frameworks addressing neurotechnology, whether directly or indirectly, remain fragmented and incomplete, leaving many regulatory and protection gaps. As such, they are likely to fall short in addressing the critical challenges outlined in this review, specifically those related to (1) infrastructure, standards, and access, and (2) neural data. Crafting new international treaties to address these concerns is notoriously challenging, which makes convergence with the Internet governance agenda essential. Efforts by individual countries to regulate emerging technologies are often insufficient, raising concerns that neurotech companies might operate from or relocate to more favourable jurisdictions. On the other hand, soft law instruments lack formal enforcement mechanisms, rely on voluntary compliance and often result in inconsistent application. For a domain as critical as neurotechnology, effective governance needs to go beyond such instruments and reflect societal concerns about the broader digital transformation.

Previously siloed efforts are beginning to come together as a result of technological convergence, in particular around AI and wearable technology. At the regional level, the EU AI Act outlines different levels of risk arising from the development and deployment of AI systems “with the objective to or the effect of materially distorting human behaviour” (72). While regulatory discussions around emerging technologies might incorporate neurotechnology, they are not sufficiently advanced to capture the unique challenges it poses. These new challenges, likely to affect the very essence of being human, require thorough discussions across all Internet governance and policy venues, as well as in specialised virtual worlds and Web 4.0 forums. Before definitions of key issues are agreed and governance trajectories are locked in, a more diverse group of actors needs to be included in this process, among them the Internet governance communities that have been active for decades. As a first step, the roles and responsibilities of the state and of regional and international organisations need to be clarified in relation to a domain that is private-sector led.

Broadly formulated principles for the development of neurotechnology help in building consensus across borders, but remain insufficient without governance instruments with binding power. Existing Internet governance frameworks and processes for a trusted Internet still treat neurotechnology as a blind spot. The Global Digital Compact is a case in point, despite referencing emerging technologies generically. Similarly, the preparatory work for the World Summit on the Information Society 20th review (known as WSIS+20) has not specifically addressed cutting-edge technologies that target the brain. Increasing awareness, transparency and safety standards for neurotechnologies used for non-medical purposes is a must for a stable and equitable Internet future. Tracking neurotechnology developments should become a routine part of the Web 4.0 policy conversation, with coordinated efforts across international venues to broaden current institutional mandates. This would allow future Internet governance work to place a dedicated emphasis on protecting neural data, pairing technological progress with safeguards for the brain, the most sensitive part of the human body.

6 Conclusion and way forward

Neurotechnologies are still in their early stages of development, but their potential is immense, especially when paired with AI. By leveraging neural data, these innovations promise to transform both medical and non-medical applications, fundamentally altering our daily lives in ways we have yet to fully grasp. As they develop, they raise urgent questions about access to the underlying infrastructure and technical standards, as well as neural data safeguards for both individual users and communities more broadly.

Nearly all of today's transformative neurotechnologies are connected to the Internet. Yet international and national discussions often occur in isolation from broader Internet governance conversations. This disconnect obscures the need for a unified approach that integrates neurotechnology into existing frameworks. This review examined the importance of addressing these concerns alongside Internet policy and emphasised the urgency of acting sooner rather than later. To address these pressing challenges, a collective effort is essential, one that invites diverse stakeholder input into the development and regulation of these technologies as part of ongoing Internet governance initiatives, using the post-2030 agenda as a stepping stone.

In the short term, the deployment of neurotechnology for non-medical purposes should be at the top of the agenda for Internet policy-makers. This technology adds complexity to existing digital infrastructure and regulatory dynamics, accelerating the need to implement clear safety and security standards across the stack. As the post-2030 sustainable development agenda is redefined, it is time to consider how neurotechnology can fit into future-oriented frameworks under negotiation, including the WSIS+20 review due in December 2025.

In the long term, neurotechnologies could reshape human experience and societal structures in ways we are only beginning to imagine. Failing to address these concerns promptly could deepen existing disparities and vulnerabilities and create new ones, impacting both users and the wider Internet ecosystem. Equitable access to digital and neural infrastructure, coupled with robust governance mechanisms, will be crucial for a future where neurotechnology is woven into the fabric of our lives.

Author contribution

RR: Writing – original draft, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research and/or publication of this article. This research was part of the Global Initiative for the Future of the Internet.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Ramirez S, Liu X, Lin P-A, Suh J, Pignatelli M, Redondo RL, et al. Creating a false memory in the hippocampus. Science. (2013) 341:387–91. doi: 10.1126/science.1239073

2. Morse SJ. Neuroscience, free will, and criminal responsibility. In: Glannon W, editor. Free Will and the Brain: Neuroscientific, Philosophical, and Legal Perspectives. Cambridge: Cambridge University Press (2015). p. 251–86.

3. Binnendijk A, Marler T, Bartels EM. Brain-computer Interfaces: US Military Applications and Implications, an Initial Assessment. Santa Monica, CA: RAND Corporation (2020).

4. Aicardi C, Bitsch L, Badum NB, Datta S, Evers K, Farisco M, et al. Opinion on ‘Responsible Dual Use’: political, security, intelligence and military research of concern in neuroscience and neurotechnology. Human Brain Project (2018). Available online at: https://discovery.ucl.ac.uk/id/eprint/10124586/ (Accessed November 19, 2025).

5. Farahany N. The Battle for Your Brain: Defending the Right to Think Freely in the age of Neurotechnology. New York: St. Martin’s Press (2023).

6. Valeriani D, Santoro F, Ienca M. The present and future of neural interfaces. Front Neurorobot. (2022) 16:953968. doi: 10.3389/fnbot.2022.953968

7. Botes M. Brain-computer interfaces and human rights: brave new rights for a brave new world. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ‘22). ACM (2022).

8. López-Bernal S, Quiles-Pérez M, Martínez-Beltrán ET, Martínez-Pérez G, Huertas-Celdrán A. When brain–computer interfaces meet the metaverse: landscape, demonstrator, trends, challenges, and concerns. Neurocomputing. (2025) 625:129537. doi: 10.1016/j.neucom.2025.129537

9. Kim K, Jeong H, Park J, Lee S, Kang D. Metaverse wearables for immersive digital healthcare: a review. Adv Sci. (2023) 10(32):2303234. doi: 10.1002/advs.202303234

10. UNESCO. Outcome document of the First Meeting of the AHEG: First Draft of a Recommendation on the Ethics of Neurotechnology (2024).

11. Salles A, Mahieu V, Rommelfanger K, Porcello D, Tournas L, Swieboda P. Towards Inclusive EU Governance of Neurotechnologies. Brussels: Institute of Neuroethics and International Center for Future Generations (2024).

12. Gordon EC, Seth AK. Ethical considerations for the use of brain-computer interfaces for cognitive enhancement. PLoS Biol. (2024) 22(10):e3002899. doi: 10.1371/journal.pbio.3002899

13. Alsharif AH, Salleh NZM, Abdullah M, Khraiwish A, Ashaari A. Neuromarketing tools used in the marketing mix: a systematic literature and future research agenda. Sage Open. (2023) 13(1):21582440231156563. doi: 10.1177/21582440231156563

14. Lim WM. Demystifying neuromarketing. J Bus Res. (2018) 91:205–20. doi: 10.1016/j.jbusres.2018.05.036

15. UNESCO. Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends. Paris: UNESCO (2023).

16. BCC Research. Neurotech Devices: Global Market Outlook (2023).

17. Royal Society. iHuman: Blurring Lines Between Mind and Machine (2019).

18. Rashid M, Sulaiman N, Perves M, Musa RM, Ab. Nasir AF, Bari BS. Current status, challenges, and possible solutions of EEG-based brain-computer interface: a comprehensive review. Front Neurorobot. (2020) 14:515104. doi: 10.3389/fnbot.2020.00025

19. Meta. Imagining a new interface: hands-free communication without saying a word (2020). Available online at: https://tech.facebook.com/reality-labs/2020/3/imagining-a-new-interface-hands-free-communication-without-saying-a-word/ (Accessed November 19, 2025).

20. Naddaf M. Brain-Reading device is best yet at decoding “internal speech”. Nature. (2024). Available online at: https://www.nature.com/articles/d41586-024-01424-7 (Accessed November 19, 2025).

21. Meta. BCI milestone: New research from UCSF with support from Facebook shows the potential of brain-computer interfaces for restoring speech communication (2021). Available online at: https://tech.facebook.com/reality-labs/2021/7/bci-milestone-new-research-from-ucsf-with-support-from-facebook-shows-the-potential-of-brain-computer-interfaces-for-restoring-speech-communication/ (Accessed November 19, 2025).

22. Karizat N, Guarino N, Kenna D, Petralia A, Doorn N. Patent applications as glimpses into the sociotechnical imaginary: ethical speculation on the imagined futures of emotion AI for mental health monitoring and detection. Proceedings of the ACM Conference on Human Factors in Computing Systems (2024).

23. Australian Human Rights Commission. Peace of mind: Navigating the ethical frontiers of neurotechnology and human rights (2025). Available online at: https://humanrights.gov.au/our-work/rights-and-freedoms/publications/peace-mind-new-report-explores-neurotechnology-and-human (Accessed October 22, 2025).

24. Bublitz JC. What an international declaration on neurotechnologies and human rights could look like: ideas, suggestions, desiderata. AJOB Neurosci. (2024) 15(2):96–112. doi: 10.1080/21507740.2023.2270512

25. Yuste R. Advocating for neurodata privacy and neurotechnology regulation. Nat Protoc. (2023) 18:2869–75. doi: 10.1038/s41596-023-00873-0

26. Haag L, Starke G, Ploner M, Ienca M. Ethical gaps in closed-loop neurotechnology: a scoping review. NPJ Digital Medicine. (2025) 8(1):510. doi: 10.1038/s41746-025-01908-4

27. Haston S, Gill S, Twentyman K, Green E, Agbeleye O, Eastaugh C, et al. A horizon scan of neurotechnology innovations. Int J Environ Res Public Health. (2025) 22(5):811. doi: 10.3390/ijerph22050811

28. Colorado Medical Society. Colorado physicians successfully pass national resolution protecting patients’ neural data (2025). Available online at: https://www.cms.org/colorado-physicians-successfully-pass-national-resolution-protecting-patients-neural-data/ (Accessed November 19, 2025).

29. Walker MJ, Mackenzie C. Neurotechnologies, relational autonomy, and authenticity. Int J Fem Approaches Bioeth. (2020) 13(1):98–119. doi: 10.3138/ijfab.13.1.06

30. Szoszkiewicz Ł, Yuste R. Mental privacy: navigating risks, rights and regulation: advances in neuroscience challenge contemporary legal frameworks to protect mental privacy. EMBO Rep. (2025) 26(14):3469–73. doi: 10.1038/s44319-025-00505-6

31. Magee P, Ienca M, Farahany N. Beyond neural data: cognitive biometrics and mental privacy. Neuron. (2024) 112(18):3017–28. doi: 10.1016/j.neuron.2024.09.004

32. OAS. Inter-American Declaration of Principles regarding neuroscience, neurotechnologies, and human rights (2023). Available online at: https://www.oas.org/en/sla/iajc/docs/CJI-RES_281_CII-O-23_corr1_ENG.pdf (Accessed November 19, 2025).

33. Council of the European Union. European declaration on neurotechnology (2023). Available online at: https://www.lamoncloa.gob.es/lang/en/gobierno/news/Paginas/2023/20231024_leon-declaration-neurotechnology.aspx (Accessed November 19, 2025).

34. Yuste R, Goering S, Arcas BA, Bi G, Carmena JM, Carter A, et al. Four ethical priorities for neurotechnologies and AI. Nature. (2017) 551:159–63. doi: 10.1038/551159a

35. Ienca M, Andorno R. Towards new human rights in the age of neuroscience and neurotechnology. Life Sci Soc Policy. (2017) 13:5. doi: 10.1186/s40504-017-0050-1

36. Sommaggio P, Mazzocca M, Gerola A, Ferro F. Cognitive liberty: a first step towards a human neuro-rights declaration. BioLaw J. (2017) 3:27–45.

37. Ligthart S, Ienca M, Meynen G. Minding rights: mapping ethical and legal foundations of ‘neurorights’. Camb Q Healthc Ethics. (2023) 32(4):461–81. doi: 10.1017/S0963180123000245

38. Yuste R, Genser J, Herrmann S. It’s time for neuro-rights. Horizons. (2021) 18:154–64.

39. Zúñiga-Fajuri A, Villavicencio Miranda L, Zaror Miralles D, Salas Venegas R. Neurorights in Chile: between neuroscience and legal science. In: Hevia M, editor. Developments in Neuroethics and Bioethics, Vol. 4. London: Academic Press (2021). p. 165–79. doi: 10.1016/bs.dnb.2021.06.001

40. H.B. 24 1058 (2024).

41. Council of Europe. Strategic action plan on human rights and technologies in biomedicine (2020–2025) (2020).

42. International Bioethics Committee. Ethical issues of neurotechnology (2021).

43. Global Partners Digital. Response to consultation on the first draft of UNESCO recommendation on the ethics of neurotechnology (2024).

44. Alegre S. We don’t need new “neurorights”—we need to apply the existing law. Financial Times. (2023).

45. Ministère de l'Enseignement supérieur. Charte de développement responsable des neurotechnologies (2022).

46. Information Commissioner’s Office. ICO Tech Futures: Neurotechnology (2023).

47. Sánchez presents the Digital Rights Charter with which “Spain is at the international forefront in protecting citizens’ rights” [press release] (2021).

48. United Nations. Our Common Agenda (2020).

49. Special Rapporteur on Privacy. Foundations and principles for the regulation of neurotechnologies and the processing of neurodata from the perspective of the right to privacy (2025).

50. United Nations. Global Digital Compact (2024).

51. Radu R. Internet governance and global digital constitutionalism: framing the Global Digital Compact. In: De Gregorio G, Policino O, Valcke P, editors. The Oxford Handbook of Digital Constitutionalism. Oxford: Oxford University Press (2025). Available online at: https://doi.org/10.1093/oxfordhb/9780198877820.013.19 (Accessed November 19, 2025).

52. Mueller M. Networks and States: The Global Politics of Internet Governance. Cambridge, MA: MIT Press (2010).

53. DeNardis L. The Global war for Internet Governance. Cambridge: Cambridge University Press (2014).

54. Radu R. Negotiating Internet Governance. Oxford: Oxford University Press (2019).

55. Hofmann J. Multi-stakeholderism in Internet governance: putting a fiction into practice. J Cyber Policy. (2016) 1(1):29–49. doi: 10.1080/23738871.2016.1158303

56. Kernel. Vision (n.d.) Available online at: https://www.kernel.com/about (Accessed November 19, 2025).

57. IEEE. Standards roadmap: neurotechnologies for brain-machine interfacing (2020). Available online at: https://standards.ieee.org/wp-content/uploads/import/documents/presentations/ieee-neurotech-for-bmi-standards-roadmap.pdf (Accessed November 19, 2025).

58. IEEE. Neurotechnologies—the next technology frontier. (2020) Available online at: https://brain.ieee.org/topics/neurotechnologies-the-next-technology-frontier/ (Accessed November 19, 2025).

59. Álamos MF, Kausel L, Baselga-Garriga C, Ramos P, Aboitiz F, Uribe-Etxebarria X, et al. A technocratic oath. In: López-Silva P, Valera L, editors. Protecting the Mind Ethics of Science and Technology Assessment, Vol. 49. Cham: Springer (2022). p. 163–74. doi: 10.1007/978-3-030-94032-4_14

60. IEEE Brain. IEEE Neuroethics Framework. IEEE (2021). Available online at: https://brain.ieee.org/publications/ieee-neuroethics-framework/ (Accessed November 19, 2025).

61. Garden H, Winickoff D, Arnaldi S, Revuelta G. Responsible innovation in neurotechnology enterprises. OECD Science, Technology and Industry Working Papers; 2019/05 (2019).

62. Parker J. From privacy to permission: the evolving landscape of internet governance. Institute of Internet Economics (2023). Available online at: https://instituteofinterneteconomics.org/from-privacy-to-permission-the-evolving-landscape-of-internet-governance/ (Accessed November 19, 2025).

63. Bocquet N. Caught between privacy and surveillance: explaining the long-term stagnation of data protection regulation in liberal democracies. Regul Governance. (2025). doi: 10.1111/rego.12656

64. De Gregorio G, Radu R. Digital constitutionalism in the new era of internet governance. Int J Law Inf Technol. (2022) 30(1):68–87. doi: 10.1093/ijlit/eaac004

65. Genser J, Damianos S, Yuste R. Safeguarding brain data: assessing the privacy practices of consumer neurotechnology companies. Neurorights Foundation (2024). Available online at: https://perseus-strategies.com/wp-content/uploads/2024/04/FINAL_Consumer_Neurotechnology_Report_Neurorights_Foundation_April-1.pdf (Accessed November 19, 2025).

66. Schwarz CG, Kremers WK, Therneau TM, Sharp RR, Gunter JL, Vemuri P, et al. Identification of anonymous MRI research participants with face-recognition software. N Engl J Med. (2019) 381(17):1684–6. doi: 10.1056/NEJMc1908881

67. Ienca M, Haselager P. Hacking the brain: brain–computer interfacing technology and the ethics of neurosecurity. Ethics Inf Technol. (2016) 18(2):117–29. doi: 10.1007/s10676-016-9398-9

68. Committee on the Rights of the Child. General comment No. 25 on children’s rights in relation to the digital environment (2021).

69. Susser D, Cabrera LY. Brain data in context: are new rights the way to mental and brain privacy? AJOB Neurosci. (2024) 15(2):122–33. doi: 10.1080/21507740.2023.2188275

70. Bennett CJ, Raab CD. Revisiting the governance of privacy: contemporary policy instruments in global perspective. Regul Governance. (2020) 14:447–64. doi: 10.1111/rego.12222

71. Rossi J. What rules the internet? A study of the troubled relation between web standards and legal instruments in the field of privacy. Telecomm Policy. (2021) 45(6):102143. doi: 10.1016/j.telpol.2021.102143

72. European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (2024).

Keywords: neurotechnology, internet governance, virtual reality, brain data, privacy, data protection

Citation: Radu R (2025) Cognitive frontiers: neurotechnology and global internet governance. Front. Digit. Health 7:1690489. doi: 10.3389/fdgth.2025.1690489

Received: 21 August 2025; Revised: 25 October 2025;
Accepted: 4 November 2025;
Published: 12 December 2025.

Edited by:

Carlo Massaroni, Campus Bio-Medico University, Italy

Reviewed by:

Andrea Lavazza, Pegaso University, Italy
Efstratios Livanis, University of Macedonia, Greece

Copyright: © 2025 Radu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Roxana Radu, roxana.radu@bsg.ox.ac.uk
