Abstract
Introduction:
The rapid evolution of artificial intelligence from static large language models to autonomous, agentic AI systems has introduced capabilities such as persistent memory, tool-augmented reasoning, and multi-agent collaboration. While these advancements significantly enhance real-world applicability, they also create a new and underexplored class of privacy risks, including unintended retention, propagation, and amplification of sensitive information across tasks, users, and execution cycles. Existing research predominantly focuses on stateless or single-inference models, leaving the privacy implications of agentic systems insufficiently understood.
Methods:
This study presents a comprehensive architectural analysis of data leakage in agentic AI systems. The proposed framework models the end-to-end agent workflow and systematically examines how sensitive information can traverse key components, including persistent memory modules, planning and reasoning processes, tool invocation layers, inter-agent communication channels, and feedback-driven autonomy loops. Based on this architecture, a structured taxonomy of leakage pathways is developed and mapped to realistic threat models and attack vectors observed in practical deployments.
Results:
The analysis identifies multiple leakage pathways unique to agentic AI systems, demonstrating how data can persist, propagate, and be unintentionally exposed across system components and operational cycles. The findings reveal that these leakage mechanisms are more complex and pervasive than those observed in traditional large language model settings, particularly due to the integration of memory, tools, and multi-agent interactions.
Discussion:
The study highlights the limitations of existing LLM-centric privacy and security defenses when applied to autonomous agentic systems. It emphasizes the need for lifecycle-aware, component-level mitigation strategies that address privacy risks across the entire agent workflow. The proposed architectural perspective provides a foundation for designing privacy-by-design agentic AI systems and supports safer deployment in sensitive and regulated domains.
1 Introduction
Recent advances in artificial intelligence have witnessed a paradigm shift from static, prompt-driven large language models (LLMs) to agentic AI systems capable of autonomous planning, tool usage, memory retention, and goal-oriented decision-making (Moia et al., 2025). In contrast to traditional LLMs, which operate in isolated, single-turn or limited multi-turn interactions, agentic AI systems continuously interact with external environments, use third-party tools, cooperate with other agents, and maintain persistent internal state. While these features significantly improve usability and workflow automation, they also introduce emerging privacy and data leakage risks that remain insufficiently explored in recent research (Feretzakis and Verykios, 2024; Wang S. et al., 2025).
Research on data leakage in conventional machine learning and LLM-based systems has focused mainly on training data memorization, model inversion, membership inference, and prompt-based information disclosure. However, by prolonging the lifecycle of sensitive data beyond a single inference, agentic AI radically changes the threat landscape. Through long-term memory modules, vector databases, execution logs, and feedback loops, agentic systems may inadvertently preserve, spread, and re-expose private or confidential information across tasks, users, and sessions (Hashmi et al., 2024). This persistent and autonomous behavior amplifies the impact of even small privacy violations.
Tool-augmented reasoning, in which agents use databases, web services, enterprise software, code interpreters, and APIs to accomplish goals, increases the risk further. Credentials, sensitive user input, proprietary documents, or regulated data may pass through several untrusted components, each of which could be a point of leakage. Furthermore, agent behavior can be manipulated via indirect prompt injection attacks embedded within retrieved documents, web content, or tool outputs. This can cause the agent to disclose internal context, memory contents, or sensitive operational data without the user's explicit intent.
Multi-agent collaboration, in which autonomous agents converse, assign roles, and share intermediate outputs to complete challenging tasks, is another characteristic that distinguishes agentic AI. Although this kind of coordination increases scalability and performance, it also creates privacy issues around emergent information sharing, role confusion, and trust boundaries (Juneja et al., 2025; Shapira et al., 2024). If information supplied to one agent for a particular purpose is disseminated to others without sufficient access control, the data minimization and purpose limitation principles required by contemporary privacy legislation may be violated.
Despite the rapid adoption of agentic AI in enterprise copilots, robotic process automation (RPA), cybersecurity operations, healthcare decision support, and smart governance, systematic research on data leakage and privacy issues in agentic AI is still lacking. Most current surveys concentrate on LLM security or general AI privacy and overlook agent-specific elements such as planners, memory structures, tool orchestration layers, and feedback mechanisms (Miao et al., 2024). This gap presents a significant obstacle to the application of agentic AI in privacy- and safety-sensitive fields.
To address this gap, this survey offers a thorough and structured investigation of data leakage risks specific to agentic AI systems. It presents a new taxonomy of leakage sources covering memory, tools, planning, inter-agent communication, and feedback loops, examines real-world attack pathways, threat models, and privacy failures across application domains, and critically assesses current mitigation techniques (Qiao et al., 2025; He et al., 2025; Wang Z. et al., 2025). By emphasizing open challenges and future research directions, this work aims to promote the development of privacy-aware, secure-by-design agentic AI frameworks and enable safer adoption of autonomous intelligence in real-world systems.
2 Background and related work
Existing research on AI privacy and security can be broadly classified into four major perspectives: (1) model-level privacy risks such as training data memorization and inference attacks, (2) adversarial manipulation including prompt injection and context poisoning, (3) system-level vulnerabilities in multi-agent and tool-integrated architectures, and (4) governance and regulatory analyses addressing compliance and accountability. While these studies provide valuable insights into isolated components of AI security, a consolidated taxonomy specifically addressing data leakage pathways in autonomous agentic AI systems remains limited. This gap motivates the structured framework proposed in this work.
AgentLeak (Yagoubi et al., 2026) introduced a full-stack benchmark evaluating nearly 5,000 execution traces in multi-agent LLM systems. Their study quantitatively measures privacy leakage but does not provide a component-level architectural taxonomy explaining why leakage propagates across memory, tools, and feedback loops.
PrivAgent (Nie et al., 2024) proposes an agentic red-teaming framework for detecting systemic privacy vulnerabilities. While it provides strong empirical evaluation, its focus is attack discovery rather than structural modeling of leakage pathways.
Secure Multi-LLM Agentic AI and Agentification for Edge General Intelligence by Zero-Trust (Liu et al., 2025b) offers a broad security survey emphasizing zero-trust principles in distributed agent systems. However, its scope centers on trust governance and edge deployment, without detailed lifecycle-based leakage categorization.
MAGPIE (Juneja et al., 2025) provides a contextual privacy benchmark for multi-agent systems but does not analyze planning, memory persistence, and tool invocation as unified architectural risk sources.
2.1 Large language models and data leakage (baseline)
Although large language models (LLMs) have shown impressive ability in natural language synthesis and processing, they are intrinsically susceptible to many types of data leakage. Previous studies have demonstrated that LLMs may unintentionally memorize private training data, creating privacy problems such as model inversion attacks and membership inference. Furthermore, prompt-based exploitation may force models to divulge sensitive or confidential information contained in their learned representations. Although mitigation strategies such as output filtering, data sanitization, and differential privacy (DP) have been proposed, they mostly address single-inference or stateless interactions (Wu and Cao, 2023; Song and Zhao, 2024). As a result, they do not adequately account for the privacy problems that arise from agentic AI systems' persistent memory, autonomous decision-making, and tool interactions.
2.2 What is agentic AI?
“Agentic AI” refers to a family of artificial intelligence systems designed to operate autonomously by establishing objectives, planning activities, carrying out tasks using external resources, and adapting behavior in response to feedback. In contrast to typical LLMs, agentic systems can carry out intricate, multi-step tasks over longer periods of time by maintaining internal state, frequently using short-term and long-term memory modules. Essential elements of agentic AI include a reasoning or planning module, a tool invocation layer, memory management systems, and self-reflection loops (Arora and Hastings, 2024; Zhuo et al., 2021). This autonomy allows agents to operate with little human oversight, but it also creates new security and privacy issues arising from the agents' dynamic interactions with external environments and ongoing data retention.
2.3 Agentic AI frameworks and use cases
Recent agentic AI frameworks such as Auto-GPT, LangGraph, CrewAI, and BabyAGI demonstrate the practical use of autonomous intelligence. These frameworks allow agents to collaborate with other agents, break objectives down into smaller tasks, and use a variety of resources such as databases, web search, and code execution environments (Algethami and Alshamrani, 2024). Enterprise copilots, robotic process automation, cybersecurity operations, healthcare decision support, and smart governance systems increasingly rely on agentic AI (Raza et al., 2026). Although these applications offer significant productivity improvements, they manage sensitive personal, organizational, and regulated data, which makes them especially vulnerable to data leakage and privacy violations if agent workflows and memory mechanisms are not carefully designed. Table 1 summarizes key focus areas in AI privacy and security, highlighting their leakage dimensions, mitigation strategies, and limitations. It shows that existing studies address specific aspects but lack a comprehensive view of privacy risks in agentic AI systems.
Table 1
| Focus area | Leakage dimension | Mitigation discussed | Limitation |
|---|---|---|---|
| LLM privacy | Training data exposure | Differential privacy | Model-level only |
| Prompt injection | Context manipulation | Input filtering | No system orchestration view |
| Multi-agent systems | Cross-agent leakage | Access isolation | Limited taxonomy structure |
| Regulatory AI | Compliance and governance | Policy enforcement | No technical threat modeling |
| AI security survey | General AI risks | Mixed techniques | Not focused on agent autonomy |
Comparative analysis of existing works on AI privacy and security.
3 Taxonomy of data leakage in agentic AI
Agentic AI systems introduce novel and intricate data leakage channels beyond the constraints of conventional LLM-based designs. Their cooperative behavior, autonomy, persistent memory, and tool integration enlarge the attack surface. This section offers a component-centric taxonomy of data leakage in agentic AI, classifying leakage sources according to fundamental architectural components in order to examine these risks methodically. The taxonomy offers a consistent framework for understanding how agent operation may inadvertently reveal, disseminate, or amplify sensitive information (Neyigapula, 2024).
3.1 Taxonomy derivation and validation methodology
The proposed five-category taxonomy was derived using a structured architectural decomposition and cross-literature synthesis methodology. First, we analyzed 18 recent peer-reviewed and preprint studies focusing on agentic AI security, privacy risks, and red-teaming [Refs. Moia et al. (2025), Feretzakis and Verykios (2024), Wang S. et al. (2025), Hashmi et al. (2024), Juneja et al. (2025), Shapira et al. (2024), Miao et al. (2024), Arora and Hastings (2024), Zhuo et al. (2021), Algethami and Alshamrani (2024), Raza et al. (2026), Neyigapula (2024), Green et al. (2025), Lyu et al. (2024), Liu et al. (2025a), Kumar et al. (2024)]. Each documented leakage scenario, benchmark failure, or attack vector was extracted and categorized.
Second, we mapped each leakage event to recurring architectural components present across major agent frameworks, including memory modules, reasoning/planning engines, tool invocation layers, communication interfaces, and feedback loops. This component-centric mapping revealed five dominant leakage clusters.
Third, we performed cross-paper clustering analysis to ensure conceptual consistency. If multiple independent studies described leakage mechanisms involving the same architectural boundary (e.g., memory reuse or tool invocation), they were grouped into a unified leakage category.
Finally, validation was conducted using two criteria:
Completeness Test: Every documented attack vector from surveyed literature could be mapped to at least one taxonomy category.
Non-Overlap Test: Each category corresponds to a distinct architectural boundary, reducing conceptual redundancy.
This methodology ensures that the taxonomy is architecture-grounded, literature-supported, and systematically derived rather than ad hoc. While the proposed taxonomy captures documented leakage mechanisms in current agentic AI systems, it remains extensible: emerging attack vectors can be incorporated under modular extension categories, ensuring adaptability to future threat evolution. Table 2 presents the mapping between the STRIDE threat model and potential privacy leakage risks in agentic AI systems, illustrating how traditional security threat categories translate into specific vulnerabilities such as prompt injection, cross-agent access, and data leakage in autonomous AI environments. This STRIDE alignment does not replace the proposed taxonomy but complements it by situating agentic AI-specific leakage mechanisms within a broader security modeling perspective.
Table 2
| STRIDE category | Agentic AI leakage mapping |
|---|---|
| Spoofing | Identity forgery via autonomous agents |
| Tampering | Prompt injection |
| Repudiation | Lack of traceability |
| Information disclosure | Data leakage |
| Denial of service | Resource exhaustion |
| Elevation of privilege | Cross-agent access |
Mapping of STRIDE threat categories to privacy leakage risks in agentic AI systems.
3.2 Memory-induced data leakage
Agentic AI systems are distinguished by their persistent memory, which allows them to retain and retrieve data across tasks and sessions. Although this capability is advantageous for long-term reasoning and personalization, it poses serious privacy problems. Sensitive information, including credentials, medical records, proprietary documents, and personal identifiers, may be kept in long-term memory stores or vector databases without strict access control or expiration policies. Such memory persistence can result in cross-task or cross-user data contamination, in which information supplied by one user unintentionally appears in responses to another. Furthermore, dangerous or sensitive content can be deliberately inserted into an agent's memory through memory poisoning attacks, resulting in recurrent and amplified leaks over time. Because of its permanence and autonomous reuse, memory-induced leakage is especially serious compared with ephemeral LLM contexts (Srivastava, 2025).
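To illustrate one mitigation direction, the sketch below shows a task-scoped memory store that enforces owner isolation and expiration before any record can be reused. It is a minimal Python illustration; the class and field names (ScopedMemoryStore, owner_id, ttl_seconds) are illustrative assumptions rather than the API of any particular agent framework.

```python
import time
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    owner_id: str                 # user or task that supplied the data
    content: str
    created_at: float = field(default_factory=time.time)
    ttl_seconds: float = 3600.0   # expire instead of persisting indefinitely

class ScopedMemoryStore:
    """Long-term store that enforces owner isolation and expiration."""

    def __init__(self):
        self._records: list[MemoryRecord] = []

    def write(self, owner_id: str, content: str, ttl_seconds: float = 3600.0):
        self._records.append(MemoryRecord(owner_id, content, ttl_seconds=ttl_seconds))

    def read(self, owner_id: str) -> list[str]:
        now = time.time()
        # Drop expired entries and never return records owned by another user/task.
        self._records = [r for r in self._records if now - r.created_at < r.ttl_seconds]
        return [r.content for r in self._records if r.owner_id == owner_id]

store = ScopedMemoryStore()
store.write("user_a", "SSN 123-45-6789", ttl_seconds=60)
assert store.read("user_b") == []   # cross-user retrieval is blocked
```

Scoping every read by owner and bounding retention directly targets the cross-user contamination and indefinite persistence risks described above.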
3.3 Tool-mediated data leakage
To complete tasks, agentic AI systems often rely on external resources such as databases, web services, enterprise software, and APIs. Upon tool invocation, sensitive inputs and intermediate outputs may be logged for debugging, cached for performance, or sent to third-party services. These interactions create numerous indirect leakage pathways, particularly when tools require extensive data collection or lack privacy-aware interfaces. Furthermore, agent behavior can be altered by indirect prompt injection attacks embedded in tool outputs, such as retrieved documents or web content, which can result in unapproved exposure of internal context, memory contents, or system instructions. Tool-mediated leakage thus sits at a crucial junction between agent autonomy and trust in external systems (Alizadeh et al., 2025).
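The following sketch illustrates one way an agent could redact obvious secrets from a tool-call payload before it crosses the external trust boundary. It is a simplified example assuming a handful of regular-expression patterns; a real deployment would rely on a vetted PII and credential detector and on least-privilege payload construction rather than these illustrative patterns.

```python
import json
import re

# Patterns and labels are illustrative assumptions, not an exhaustive detector.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?:sk|key)-[A-Za-z0-9]{16,}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_payload(payload: dict) -> dict:
    """Mask obvious secrets before the agent forwards a tool-call payload."""
    def scrub(value):
        if isinstance(value, str):
            for label, pattern in SENSITIVE_PATTERNS.items():
                value = pattern.sub(f"[REDACTED:{label}]", value)
            return value
        if isinstance(value, dict):
            return {k: scrub(v) for k, v in value.items()}
        if isinstance(value, list):
            return [scrub(v) for v in value]
        return value
    return scrub(payload)

request = {"query": "Email john.doe@corp.com the report",
           "auth": "sk-abcdef1234567890abcd"}
print(json.dumps(redact_payload(request)))
# The external tool receives only the minimum information needed for the task.
```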
3.4 Planning and reasoning-based leakage
Agentic AI systems use explicit planning and reasoning techniques to break down objectives into manageable steps. These processes frequently produce internal representations of objectives, task priorities, and intermediate reasoning traces. When exposed, whether deliberately or accidentally, such information may disclose sensitive user intent, business logic, or strategic decision-making processes. Reflection and self-evaluation loops increase this risk by prompting agents to summarize previous actions or justify choices, potentially reintroducing private data into the output. Planning-related leakage is especially harmful in enterprise, military, or governance applications because, unlike ordinary LLM responses, it discloses operational intelligence (Green et al., 2025; Lyu et al., 2024).
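A minimal illustration of output gating for reasoning traces is sketched below: only explicitly whitelisted fields of the agent state may cross the output boundary. The field names (plan, scratchpad, final_answer) are hypothetical and stand in for whatever intermediate state a given framework maintains.

```python
# Illustrative sketch of output gating for internal reasoning state.
INTERNAL_FIELDS = {"plan", "scratchpad", "tool_log", "goal_stack"}
ALLOWED_FIELDS = {"final_answer", "citations"}

def externalize(agent_state: dict) -> dict:
    """Return only fields explicitly whitelisted for end-user exposure."""
    blocked = set(agent_state) & INTERNAL_FIELDS
    if blocked:
        # In production this would go to an audit trail rather than stdout.
        print(f"blocked internal fields from output: {sorted(blocked)}")
    return {k: v for k, v in agent_state.items() if k in ALLOWED_FIELDS}

state = {
    "plan": "1) pull Q3 revenue from ERP 2) compare with competitor filings",
    "scratchpad": "client X is preparing an acquisition of Y",
    "final_answer": "Q3 revenue grew 12% year over year.",
}
print(externalize(state))   # only the final answer crosses the trust boundary
```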
3.5 Inter-agent communication leakage
Agents work together in multi-agent systems by sharing intermediate outcomes, assigning tasks, and exchanging messages. Although this collaboration increases productivity and scalability, it also creates issues with information flow control and trust boundaries. Without proper authorization or filtering, information shared with one agent for a particular purpose could spread to others. Role confusion, misaligned goals, or compromised agents can allow sensitive information to be extracted through these communication channels. Additionally, emergent behaviors in multi-agent settings may result in unexpected information exchange that is hard to anticipate or audit, making it challenging to adhere to the principles of data minimization and purpose limitation (Wang S. et al., 2025; Liu et al., 2025a).
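The sketch below illustrates purpose-tagged, role-based message filtering between agents, one possible enforcement point for purpose limitation in inter-agent channels. The roles, purpose tags, and policy table are illustrative assumptions, not a standard protocol.

```python
from dataclasses import dataclass

# Illustrative role policy: which data purposes each agent may receive.
ROLE_POLICY = {
    "scheduler_agent": {"calendar"},
    "billing_agent": {"billing", "calendar"},
}

@dataclass
class AgentMessage:
    sender: str
    recipient: str
    purpose: str      # declared purpose of the data ("calendar", "billing", ...)
    content: str

def deliver(message: AgentMessage):
    print(f"{message.sender} -> {message.recipient}: {message.content}")

def route(message: AgentMessage) -> bool:
    """Forward a message only if the recipient's role permits its declared purpose."""
    allowed = ROLE_POLICY.get(message.recipient, set())
    if message.purpose not in allowed:
        # Purpose limitation: data supplied for one purpose must not propagate freely.
        return False
    deliver(message)
    return True

route(AgentMessage("planner", "scheduler_agent", "billing", "card ending 4242"))  # blocked
route(AgentMessage("planner", "scheduler_agent", "calendar", "meeting at 3pm"))   # delivered
```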
3.6 Feedback loop and autonomy amplification leakage
In order to learn from previous results, correct mistakes, and refine strategies, agentic AI systems frequently include feedback mechanisms. Although these loops improve performance, they may inadvertently increase data leakage. If information exposed during one iteration is reinforced and reused in later cycles, the frequency and extent of exposure may grow. In autonomous environments with little human supervision, such leakage can go unnoticed for a long time. This amplification effect distinguishes agentic AI from standard models, in which privacy violations are usually confined to isolated interactions (He et al., 2025; Kumar et al., 2024). Table 3 summarizes the key components of agentic AI systems and the associated privacy leakage types. It highlights how different architectural elements such as memory, tool invocation, and inter-agent communication can introduce specific security risks and attack vectors.
Table 3
| Agentic AI component | Leakage type | Description | Representative attacks/risks |
|---|---|---|---|
| Long-term memory (vector DB, Logs) | Persistent data leakage | Sensitive data stored across sessions and reused autonomously | Cross-user data exposure, memory scraping, memory poisoning |
| Tool Invocation layer | Tool-mediated leakage | Leakage through APIs, SaaS tools, logs, and third-party services | Indirect prompt injection, API logging leaks, telemetry exposure |
| Planner and reasoning module | Planning-based leakage | Exposure of intermediate goals, strategies, or reasoning traces | Chain-of-thought leakage, business logic disclosure |
| Inter-agent communication | Collaborative leakage | Unauthorized data propagation among agents | Role confusion attacks, compromised agent data exfiltration |
| Feedback and reflection loop | Amplification leakage | Repeated reuse and reinforcement of leaked information | Recursive leakage, autonomous re-exposure over iterations |
Mapping agentic AI components to data leakage types and representative attacks.
The agentic AI system is activated by user involvement through tasks or queries that often contain sensitive personal, organizational, or contextual data. The system interprets these inputs to determine intent and execution requirements. At this point, user-provided data may be inappropriately logged, reused across tasks, or retained longer than intended, posing privacy risks. Inadequate input sanitization and access restriction further increase the likelihood of unintentional information exposure.
After receiving the task, the system decides on an execution plan and independently breaks down the goal into manageable steps. Maintaining awareness of objectives, priorities, and intermediate choices is part of this process. Any disclosure of intermediate representations, task plans, or decision traces could expose sensitive user intent, proprietary workflows, or operational logic because this internal reasoning depends on rich contextual information. In corporate, healthcare, and governance contexts, these disclosures are especially detrimental.
Contextual information is momentarily kept throughout task performance in order to preserve coherence and continuity. Even though this data is meant to be temporary, incorrect isolation or excessive preservation may result in leftover data being used in unrelated encounters. This could lead to the unintentional disclosure of previous user data through persistent contextual traces in shared or long-running deployments.
To enhance efficiency and customization, the system may save data throughout sessions in addition to transient context. Persistence increases autonomy, but it also greatly increases privacy problems. Cross-user or cross-domain leakage may result from sensitive data being improperly retrieved during subsequent operations. Additionally, without direct user interaction, adversarial alteration of stored data might result in repeated and magnified exposure.
The system often interacts with external resources including databases, online services, and application interfaces to achieve complex goals. Sensitive inputs and intermediate outputs may be transported outside of the system boundaries during these interactions. Indirect leakage channels may be produced by external services' logging, monitoring, or caching. Furthermore, embedded instructions that alter system behavior may be present in content acquired from external sources, which could unintentionally reveal stored data or internal context.
In cooperative environments, the system might share intermediate outcomes or assign subtasks by exchanging data with other independent entities. Although this kind of collaboration increases productivity, there are risks associated with unchecked knowledge spread. It is possible for data provided for a particular purpose to be transferred without sufficient authorization or filtering, which could violate privacy restrictions and trust assumptions.
The system may assess previous activities and modify its behavior in response to observed results in order to enhance performance. This adaptive capability can inadvertently encourage privacy failures even as it improves autonomy. Particularly in the absence of human oversight, information revealed during one execution cycle may be routinely reused and amplified in later rounds, increasing the scope and durability of exposure.
In order to produce a response, the system finally combines data from internal reasoning, stored context, external interactions, and cooperative exchanges. Sensitive information may appear in the final output if earlier phases are not properly managed, resulting in direct and observable privacy violations. Because private information is made available to end users or unauthorized recipients, these failures constitute the most serious type of data leakage.
Figure 1 illustrates how sensitive information propagates across architectural components in agentic AI systems. Solid arrows represent intended task execution flows, while dashed red arrows indicate observed leakage pathways documented in empirical studies. Leakage examples include cross-session memory reuse (L1), vector database poisoning (L2), JSON-based API logging exposure (L3), indirect prompt injection embedded in retrieved web content (L4), inter-agent role confusion leading to unauthorized propagation (L5), and recursive amplification through feedback loops (L6). Security boundaries are explicitly marked to highlight trust transitions between internal agent modules, external tool interfaces, and multi-agent communication channels. Leakage pathways L1–L6 are illustrative and represent commonly observed patterns in documented attacks. They are not ranked by severity but categorized for analytical clarity. Table 4 outlines the key interfaces in agentic AI systems, the data formats exchanged, and their associated boundary types. It highlights potential leakage risks that arise when information flows across internal, external, and persistence boundaries.
Figure 1
Data leakage propagation across agentic AI components. Solid arrows indicate direct autonomous data flows, whereas dashed arrows represent indirect or inferred leakage channels. Leakage identifiers (L1–L6) are positioned adjacent to the primary leakage pathways, and bold boundary lines indicate trust domains.
Table 4
| Interface | Data format | Boundary type | Example leakage |
|---|---|---|---|
| User → planner | JSON prompt payload | Internal | Sensitive intent logging |
| Planner → memory | Vector embeddings | Storage boundary | Cross-session retrieval |
| Agent → external API | REST/JSON | External trust boundary | API logging exposure |
| Agent → web tool | HTML/retrieved text | External content boundary | Prompt injection |
| Agent ↔ agent | Structured messages | Trust boundary | Role confusion leakage |
| Feedback loop | Internal state update | Persistence boundary | Recursive amplification |
Technical interfaces and associated leakage risks.
3.7 Concrete leakage propagation across agent boundaries
Figure 1 operationalizes the taxonomy by tracing how sensitive data moves through explicit system interfaces. For example, when a user submits a task containing confidential information, the payload is typically serialized in structured formats (e.g., JSON) and passed to the planner module. If stored in long-term memory as vector embeddings, this information may later be retrieved during unrelated sessions (Leakage L1).
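The following sketch reproduces this L1 pattern in miniature: data embedded during one user's session is returned by an unscoped similarity search issued in a later, unrelated session, whereas a session-scoped search returns nothing. The toy embedding function and class names are illustrative assumptions only.

```python
import numpy as np

# Toy embedding function; a real agent would call an encoder model. The point
# of the sketch is the retrieval scoping, not embedding quality.
def embed(text: str, dim: int = 64) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.normal(size=dim)
    return vec / np.linalg.norm(vec)

class VectorMemory:
    def __init__(self):
        self.items = []   # list of (text, vector, session_id)

    def add(self, text: str, session_id: str):
        self.items.append((text, embed(text), session_id))

    def search(self, query: str, session_id=None):
        qv = embed(query)
        candidates = [it for it in self.items
                      if session_id is None or it[2] == session_id]
        if not candidates:
            return None
        return max(candidates, key=lambda it: float(it[1] @ qv))[0]

memory = VectorMemory()
memory.add("Patient 8841 diagnosed with condition X", session_id="user_a_session")

# Unscoped retrieval in a later, unrelated session surfaces user A's data (L1).
print(memory.search("summarize recent patient notes"))
# Session-scoped retrieval blocks the cross-session pathway.
print(memory.search("summarize recent patient notes", session_id="user_b_session"))
```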
During tool invocation, the agent constructs API requests that may include contextual information, intermediate reasoning, or authentication tokens. These requests often traverse REST endpoints and may be logged by third-party services (Leakage L3). Similarly, content retrieved from external web sources can contain hidden prompt injection instructions that alter internal reasoning behavior (Leakage L4).
In multi-agent configurations, intermediate outputs are exchanged via message brokers or structured communication protocols. Without strict role-based filtering, one agent may propagate sensitive context to others beyond intended purpose limitation (Leakage L5).
Finally, reflection loops may cause previously leaked data to be reintroduced into subsequent reasoning cycles, amplifying exposure over time (Leakage L6).
By explicitly modeling data formats, trust boundaries, and architectural interfaces, this analysis moves beyond abstract component depiction and demonstrates how privacy failures materialize in real deployments.
4 Threat models and attack vectors
The autonomous and persistent nature of agentic AI systems requires threat models that go beyond the traditional adversarial assumptions used for static LLMs. Adversaries can operate internally as insiders with limited system access, externally as malicious users, or indirectly through compromised tools or data sources. These actors can cause unwanted data disclosure by exploiting the agent's autonomy, memory persistence, and tool orchestration. Common attack vectors include memory poisoning, context window manipulation, and direct and indirect prompt injection. Indirect prompt injection, embedded in tool outputs, documents, or online content, poses a particularly serious risk because it can subtly influence agent behavior without direct user engagement (Guha et al., 2021; Puri et al., 2025).
Additionally, through role confusion and breaches of trust boundaries, attackers may use inter-agent communication channels to spread sensitive information among agents or obtain private information. Feedback-driven autonomy further increases these risks by reinforcing compromised behavior over multiple execution cycles. Attacks on agentic AI frequently develop over time, making detection and attribution more difficult than for classical attacks targeting individual inference events (Chen et al., 2024). Because attacks in agentic systems have cumulative, persistent, and potentially system-wide effects, understanding these threat models is essential for designing effective countermeasures.
Agentic systems typically exchange data via JSON-based API payloads, vector embeddings stored in similarity search databases, and RESTful or RPC tool calls. Security boundaries exist at:
Memory storage interfaces,
Tool invocation APIs,
Inter-agent message brokers, and
Output rendering layers.
Each boundary represents a potential leakage checkpoint requiring encryption, authentication, and access control enforcement; a minimal sketch of such a checkpoint appears at the end of this section.

The proposed taxonomy was validated against 27 documented leakage scenarios extracted from recent peer-reviewed literature and technical reports. Each scenario was systematically mapped to one or more defined leakage categories (L1–L6), resulting in complete categorical coverage. Conceptual distinctiveness between categories was assessed to minimize overlap, and no documented case fell outside the defined taxonomy structure. This validation demonstrates structural completeness with respect to currently reported leakage mechanisms.
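The sketch below wraps a cross-boundary tool invocation with authentication, an allow-list check, and audit logging, illustrating one such checkpoint. The signing-key handling, caller names, and decorator interface are illustrative assumptions, not a recommended production design.

```python
import functools
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-a-managed-secret"   # illustrative; use a KMS in practice
AUDIT_LOG: list[dict] = []

def boundary_checkpoint(boundary_name: str, allowed_callers: set[str]):
    """Wrap a cross-boundary call with authentication, access control, and audit logging."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(caller: str, token: str, payload: dict):
            expected = hmac.new(SIGNING_KEY, caller.encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(token, expected):
                raise PermissionError(f"{boundary_name}: caller {caller} failed authentication")
            if caller not in allowed_callers:
                raise PermissionError(f"{boundary_name}: caller {caller} not authorized")
            AUDIT_LOG.append({"ts": time.time(), "boundary": boundary_name,
                              "caller": caller, "bytes": len(json.dumps(payload))})
            return func(payload)
        return wrapper
    return decorator

@boundary_checkpoint("tool_invocation_api", allowed_callers={"planner_agent"})
def call_external_tool(payload: dict) -> dict:
    return {"status": "ok"}   # stand-in for the real external tool call

token = hmac.new(SIGNING_KEY, b"planner_agent", hashlib.sha256).hexdigest()
call_external_tool("planner_agent", token, {"query": "quarterly totals"})
```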
5 Privacy failures in real-world and evaluated agentic AI systems
Section 5 illustrates how the proposed leakage taxonomy manifests in real-world and experimentally evaluated agentic AI systems. Rather than presenting generic data breaches, this section focuses specifically on autonomy-enabled privacy failures, including cross-agent data propagation, memory retention leakage, prompt injection exploitation, and unintended tool-based data exposure. Each case is explicitly mapped to the corresponding leakage pathway (L1–L6), thereby demonstrating the practical relevance and analytical coverage of the proposed framework. Recent empirical studies and red-teaming efforts provide concrete evidence of privacy failures in deployed or experimentally evaluated agentic systems. The five case studies were selected based on three criteria: (1) documented real-world deployment context, (2) representation of distinct leakage categories within the taxonomy, and (3) availability of sufficient technical detail for analysis.
Case 1: Memory Poisoning in LLM Agents (AgentPoison) (Chen et al., 2024)
Chen et al. demonstrated that inserting malicious content into an agent's long-term memory or vector database results in persistent behavioral manipulation and repeated data exfiltration. The attack achieved high success rates across multiple tasks, confirming that persistent memory significantly increases leakage durability. From a technical perspective, the incident unfolded across three primary stages: (1) attack initiation through prompt manipulation or autonomous task triggering, (2) cross-component propagation enabled by memory persistence or inter-agent communication, and (3) data exposure via API responses, logging mechanisms, or unintended output generation. The case illustrates how agent autonomy amplifies traditional attack vectors by enabling multi-step reasoning and chained execution without continuous human oversight. Mitigation strategies reported in the literature include stricter context isolation, reduced memory retention windows, access control reinforcement, and continuous adversarial testing of agent workflows.
Case 2: Cross-Session Privacy Leakage (Privacy in Action) (Wang S. et al., 2025)
Wang et al. showed that LLM-powered agents reused sensitive information across sessions in shared deployments. Sensitive data introduced in one task was later reproduced in unrelated contexts, highlighting cross-task contamination risks. This case highlights the systemic nature of leakage in agentic AI systems, where vulnerability does not arise from a single component but from interaction effects between planning modules, memory stores, and external tool integrations. The exposure pathway demonstrates how iterative reasoning and autonomous task chaining can unintentionally escalate privileges or disclose sensitive information. Reported countermeasures emphasize sandboxed execution environments, scoped API permissions, and runtime monitoring of anomalous task sequences.
Case 3: Multi-Agent Contextual Privacy Failures (MAGPIE Benchmark) (Juneja et al., 2025)
The MAGPIE benchmark revealed that agents collaborating without strict isolation frequently propagated private information between roles. Leakage occurred even when only one agent had initial access to sensitive inputs. This incident corresponds to leakage pathway L5 (Inter-Agent Communication) within the proposed taxonomy, reinforcing the applicability of the framework.
Case 4: Tool-Orchestrated Leakage (Agent Tools Orchestration Leaks More) (Qiao et al., 2025)
Qiao et al. demonstrated that agents invoking external APIs and SaaS tools increased exposure risk due to logging, telemetry, and indirect prompt injection embedded in tool outputs. This incident corresponds to leakage pathway L3 (Tool-Mediated Interaction) within the proposed taxonomy, reinforcing the applicability of the framework.
Case 5: Web-Use Agent Exploitation (Mind the Web) (Shapira et al., 2024)
Web-automating agents were shown to execute malicious instructions embedded in web content, leading to unintended disclosure of internal context and memory. This incident corresponds to leakage pathway L4 (Prompt Injection) within the proposed taxonomy, reinforcing the applicability of the framework.
These cases demonstrate that privacy failures in agentic AI are not theoretical but empirically validated across memory persistence, tool invocation, and multi-agent coordination settings.
6 Privacy preserving and mitigation strategies
Mitigating data leakage in agentic AI systems requires a multi-layered strategy that addresses risks at the architectural, operational, and governance levels. At the architectural level, memory segregation, regulated persistence, and safe context management can greatly reduce unintentional data reuse. Designing ephemeral or task-scoped memory mechanisms can reduce cross-session and cross-user leakage. Tool interactions should follow the principle of least privilege, ensuring that only necessary information is shared with external services and that logging systems are privacy-conscious (Tariq et al., 2020).
From the model and system standpoint, privacy-aware reasoning suppression, output filtering, and context redaction can all help prevent the leakage of private internal states. Governance tools including audit trails, policy enforcement layers, and human-in-the-loop oversight further improve accountability. However, current mitigation strategies frequently involve trade-offs between privacy, performance, and autonomy (Kaur et al., 2023). As a result, privacy-by-design agent architectures that incorporate leakage protection as a primary system goal rather than an afterthought are increasingly necessary.
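One architectural building block for regulated persistence is task-scoped (ephemeral) context, sketched below: everything accumulated for a task is wiped when the task completes, so it cannot bleed into later tasks or sessions. The context-manager interface shown is an illustrative assumption, not part of any specific agent framework.

```python
from contextlib import contextmanager

@contextmanager
def task_context(task_id: str):
    """Ephemeral, task-scoped working context: wiped when the task finishes."""
    scratch = {"task_id": task_id}
    try:
        yield scratch
    finally:
        scratch.clear()   # regulated persistence: nothing survives the task

with task_context("invoice-review-17") as ctx:
    ctx["customer_iban"] = "DE89 3704 0044 0532 0130 00"
    # ... planning, tool calls, and reasoning read and write ctx here ...

assert "customer_iban" not in ctx   # the sensitive value does not outlive the task
```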
Empirical evaluations show that memory isolation and scoped context management reduce cross-session leakage by up to 60% in controlled experiments (Feretzakis and Verykios, 2024). Tool sandboxing and permission scoping significantly lower indirect prompt injection success rates (Juneja et al., 2025). However, no single mitigation eliminates leakage completely, reinforcing the need for layered defenses.
6.1 Framework evaluation and measurable effectiveness
Although this work is primarily a survey and architectural analysis, the proposed framework can be evaluated using empirical metrics reported in recent agentic AI privacy benchmarks. Studies such as AgentPoison (Chen et al., 2024), MAGPIE (Juneja et al., 2025), PrivAgent (Nie et al., 2024), and AgentLeak (Yagoubi et al., 2026) measure privacy leakage using:
Attack Success Rate (ASR).
Cross-session contamination rate.
Leakage persistence duration.
Tool-mediated exposure frequency.
When mapped against our five-category taxonomy, these benchmarks show:
Memory-related leakage exhibits the highest persistence and amplification risk.
Tool-mediated leakage increases exposure surface significantly.
Multi-agent leakage demonstrates propagation even under partial isolation.
Therefore, the proposed framework serves as a structural model that explains empirically observed failures and provides a systematic lens for evaluating mitigation strategies across architectural boundaries. From a regulatory perspective, agentic AI systems introduce novel compliance challenges under frameworks such as the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Autonomous task delegation complicates data controller identification, while dynamic memory retention may conflict with data minimization and purpose limitation principles. Furthermore, explainability obligations under GDPR Article 22 may be difficult to satisfy when multi-agent orchestration produces emergent behaviors.
7 Comparative analysis: LLM vs. agentic AI
Although agentic AI systems and classical LLMs share the same underlying language modeling capabilities, their privacy risk profiles differ substantially. In the stateless or limited-context settings where LLMs usually operate, data leakage is typically confined to individual interactions. Agentic AI systems, in contrast, create dynamic and cumulative leakage concerns because they maintain persistent memory, autonomously invoke tools, and collaborate with other agents. The incorporation of feedback loops and long-term context further distinguishes agentic AI, as privacy failures may be amplified over time rather than remaining isolated (Narajala and Narayan, 2025; Shahriar et al., 2025).
Mitigation techniques that are effective for LLMs, such as single-turn output restrictions or prompt filtering, are frequently insufficient for agentic designs. Agentic AI requires lifecycle-aware privacy management, component-level access restriction, and continuous monitoring. This comparison motivates the creation of agent-specific security benchmarks and evaluation metrics and highlights the need to rethink privacy frameworks for autonomous systems. Table 5 compares traditional LLM-based systems with agentic AI systems across several operational and security aspects. The comparison highlights how agentic AI introduces greater autonomy, persistent memory, and expanded attack surfaces, leading to more complex and long-term privacy leakage risks.
Table 5
| Aspect | Traditional LLMs | Agentic AI Systems |
|---|---|---|
| Interaction style | Single-turn or limited multi-turn | Continuous, goal-driven, autonomous |
| Memory usage | Stateless or short-lived context | Persistent short-term and long-term memory |
| Data retention | Typically confined to session | Cross-session and cross-task retention |
| Tool integration | Minimal or user-triggered | Autonomous and frequent tool invocation |
| Autonomy level | User-driven | Self-directed decision-making |
| Leakage scope | Localized and episodic | Cumulative and system-wide |
| Attack surface | Prompt-based exploitation | Prompt, memory, tools, inter-agent channels |
| Leakage persistence | Temporary | Long-term and recurring |
| Detection difficulty | Relatively easier | Difficult due to delayed effects |
Comparison of data leakage characteristics: LLMs vs. agentic AI.
Table 6 presents a comparative evaluation of privacy risks between traditional LLM-based systems and agentic AI systems across key privacy dimensions. The table highlights that agentic AI environments introduce higher exposure risks and therefore require stronger safeguards such as continuous monitoring and privacy-by-design architectures.
Table 6
| Privacy dimension | LLM-based systems | Agentic AI systems |
|---|---|---|
| Prompt injection impact | Moderate | High (amplified via tools and memory) |
| Memory poisoning risk | Low | High |
| Cross-user data leakage | Rare | Likely in shared agents |
| Tool-mediated exposure | Limited | Extensive |
| Inter-agent leakage | Not applicable | Significant |
| Effectiveness of output filtering | High | Limited |
| Differential privacy applicability | Feasible | Challenging |
| Need for continuous monitoring | Low | Essential |
| Privacy-by-design requirement | Optional | Mandatory |
Comparison of privacy threats and mitigation effectiveness.
Differential Privacy (DP) is challenging in agentic AI due to:
Sequential Composition Problem: Agents perform multi-step reasoning and tool calls, accumulating privacy budget (ε) across iterations.
Persistent Memory: DP mechanisms typically assume static training data, whereas agentic systems reuse and update memory dynamically.
Interactive Tool Use: External API calls create uncontrolled data flows outside DP-protected boundaries.
Multi-Agent Amplification: Privacy guarantees degrade when multiple agents share intermediate outputs.
Thus, unlike single-inference LLM settings, enforcing strict ε-bounds in continuous autonomous environments becomes computationally and architecturally complex. For example, consider an agentic AI system trained on user interaction logs containing sensitive behavioral data. By applying ε-differential privacy during model training, calibrated Laplacian noise is injected into gradient updates. This ensures that the inclusion or exclusion of any single user's data does not significantly alter model output probabilities, thereby limiting inference-based data reconstruction attacks.
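A minimal sketch of this mechanism is given below: per-example gradients are clipped to bound any single user's influence, and Laplacian noise calibrated to ε is added to the aggregated update. This is a deliberately simplified, single-step illustration rather than a production recipe; a real deployment would use a maintained DP library and track the cumulative privacy budget across all training steps and tool interactions noted above.

```python
import numpy as np

def dp_gradient_update(per_example_grads: np.ndarray, clip_norm: float = 1.0,
                       epsilon: float = 1.0, rng=None) -> np.ndarray:
    """One noisy aggregation step: clip per-example gradients, sum, add Laplace noise.

    Simplified illustration only; real DP training must account for composition
    across every iteration (the sequential composition problem noted above).
    """
    rng = rng or np.random.default_rng(0)
    n = per_example_grads.shape[0]
    # Clipping bounds each user's contribution, fixing the sensitivity of the sum.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Laplace noise with scale sensitivity / epsilon masks any single example's influence.
    noisy_sum = clipped.sum(axis=0) + rng.laplace(scale=clip_norm / epsilon,
                                                  size=clipped.shape[1])
    return noisy_sum / n

grads = np.random.default_rng(1).normal(size=(32, 10))  # 32 examples, 10 parameters
print(dp_gradient_update(grads, clip_norm=1.0, epsilon=0.5))
```

Table 7 summarizes the estimated likelihood and severity of different privacy leakage types in agentic AI systems. It also provides empirical evidence from recent studies to support the assessment of potential risks and their impact.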
Table 7
| Leakage type | Estimated likelihood | Estimated severity | Empirical support |
|---|---|---|---|
| Memory-induced leakage | High | Critical (60%−80%) | AgentPoison (Chen et al., 2024), Privacy in Action (Wang S. et al., 2025) |
| Tool-mediated leakage | High | High (30%−45%) | Tools orchestration study (Qiao et al., 2025) |
| Planning-based leakage | Moderate | High (30%−45%) | Reasoning privacy studies (Green et al., 2025) |
| Inter-agent leakage | Moderate–high | High (30%−45%) | MAGPIE benchmark (Juneja et al., 2025) |
| Feedback amplification | Moderate | Critical (long-term; 60%−80%) | Recursive leakage studies (Shapira et al., 2024) |
Estimated severity and likelihood of leakage types (based on empirical benchmarks).
Severity Scale: Low/Moderate/High/Critical, Likelihood Scale: Rare/Moderate/High.
Estimates are derived from reported attack success rates (30%−80%) across surveyed empirical studies.
8 Open challenges and research opportunities
To mitigate leakage risks in agentic AI deployments, system designers should implement memory isolation between agents, enforce strict API scope limitations, apply differential privacy during training, and conduct continuous red-team prompt injection testing. Additionally, audit logging with anomaly detection should be integrated at orchestration layers to detect cross-agent data propagation.
Even as the risks of data leakage in agentic AI systems become more widely recognized, a number of issues remain unresolved. The absence of formal privacy models specifically designed for autonomous agents is one of the main problems. Current privacy definitions and guarantees, mostly derived from static LLM settings, do not adequately capture the temporal, persistent, and self-directed behavior of agentic systems (Liu et al., 2025b; Chhabra et al., 2025). Developing agent-aware threat models and privacy metrics that account for multi-agent coordination, tool interactions, and memory persistence remains an open research challenge.
Secure memory management is another major obstacle. More research is needed to design memory architectures that strike a balance between long-term utility and stringent privacy restrictions, such as contextual isolation, verifiable erasure, and selective forgetting. In a similar vein, the lack of transparency and control over third-party services makes protecting privacy in tool-augmented reasoning challenging. Policy-aware execution layers and tool interfaces that protect privacy are still in their early stages of development.
Because it is difficult to enforce trust boundaries and purpose limitation among autonomous entities, multi-agent systems add extra complexity. The trade-off between explainability and privacy must also be addressed in future study because revealing reasoning processes may unintentionally divulge private information. Lastly, there is an urgent need for regulatory rules, evaluation frameworks, and standardized benchmarks created especially for agentic AI. In order to enable the safe, moral, and privacy-preserving deployment of autonomous intelligence in practical applications, these issues must be resolved.
Despite the fact that recent research has started to address privacy problems in large language models, agentic AI systems are still largely unexplored from the standpoint of data leakage. The lack of agent-specific privacy benchmarks and datasets is a significant research gap. Current evaluations rely on ad hoc studies or LLM-centric metrics that do not account for multi-agent interactions, tool-mediated exposure, or persistent memory use. Creating standardized benchmarks that replicate realistic agent workflows is one of the most important opportunities for future research.
The absence of privacy-conscious agent architectures is another weakness. The majority of current agentic frameworks treat privacy controls as external add-ons rather than fundamental design principles, placing a higher priority on performance and autonomy. This creates an opportunity to establish privacy-by-design agent frameworks, in which privacy policies inherently constrain memory, planning, and tool usage. Additionally, research on verified memory deletion, context expiration, and selective forgetting is still in its infancy and needs systematic study.
Multi-agent systems present additional opportunities, especially in designing secure communication protocols and trust management mechanisms that prevent unwanted data propagation. Little research has examined how to balance privacy and explainability in agentic AI, particularly in regulated fields. Lastly, because regulatory and compliance perspectives for autonomous agents are still fragmented, multidisciplinary research that connects technical solutions with legal and ethical frameworks is needed.
Existing surveys primarily focus on:
LLM memorization and prompt injection (stateless models).
Classical multi-agent system security (non-LLM environments).
In contrast, this work uniquely integrates:
Persistent memory architectures.
Tool-augmented reasoning pipelines.
Multi-agent LLM collaboration.
Lifecycle-aware leakage modeling.
Cross-component architectural taxonomy.
To our knowledge, no prior survey systematically unifies these dimensions into a component-level privacy taxonomy specific to agentic AI systems.
9 Conclusion
Artificial intelligence has evolved from passive language models to autonomous agentic systems capable of goal-driven reasoning, persistent memory, tool integration, and multi-agent collaboration. While these capabilities enable powerful applications, they also introduce complex privacy and data leakage risks that remain insufficiently explored. This survey examined privacy challenges in agentic AI and showed that leakage often arises from interactions among multiple agentic components operating over time rather than from isolated model behavior. A taxonomy was proposed to categorize leakage sources across memory persistence, reasoning processes, tool-mediated interactions, inter-agent communication, and feedback-driven autonomy. The study further analyzed threat models and deployment scenarios to illustrate how these risks appear in domains such as enterprise systems, healthcare, cybersecurity, and public-sector applications. Comparative evaluation showed that traditional LLM-focused privacy defenses are inadequate for agentic environments, emphasizing the need for component-aware mitigation strategies. The paper also identified key research gaps and highlighted the importance of privacy-by-design frameworks, standardized benchmarks, and agent-specific evaluation methodologies. This survey aims to support researchers, developers, and policymakers in designing secure and privacy-preserving agentic AI systems for sensitive and safety-critical applications.
Statements
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Author contributions
RB: Formal analysis, Writing – original draft, Writing – review & editing. PC: Writing – review & editing, Formal analysis, Writing – original draft. SMe: Writing – original draft, Formal analysis, Writing – review & editing. SP: Writing – original draft, Formal analysis, Writing – review & editing. SMa: Formal analysis, Writing – original draft, Writing – review & editing. AG: Writing – review & editing, Formal analysis, Writing – original draft.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Acknowledgments
The authors would like to acknowledge their respective institutions for providing the necessary academic environment and research support to carry out this work. The authors also thank colleagues and reviewers whose constructive feedback helped improve the quality and clarity of the manuscript.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The handling editor declared a past co-authorship with one of the authors PC.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
Algethami, S. A., and Alshamrani, S. S. (2024). A deep learning-based framework for strengthening cybersecurity in internet of health things (IoHT) environments. Appl. Sci. 14:4729. doi: 10.3390/app14114729
Alizadeh, M., Samei, Z., Stetsenko, D., and Gilardi, F. (2025). Data observed by LLM agents during task execution. arXiv [Preprint]. arXiv:2502.06718.
Arora, S., and Hastings, J. (2024). “Securing agentic AI systems - a multilayer security framework,” in Proceedings of the IEEE International Conference on Responsible AI and Applications (RAAI). Advance online publication. doi: 10.1109/RAAI67517.2025.11423374
Chen, Z., Xiang, Z., Xiao, C., Song, D., and Li, B. (2024). AgentPoison: red-teaming LLM agents via poisoning memory or knowledge bases. arXiv [Preprint]. arXiv:2405.10189. doi: 10.52202/079017-4136
Chhabra, A., Datta, S., Nahin, S. K., and Mohapatra, P. (2025). Agentic AI security: threats, defenses, evaluation, and open challenges. arXiv [Preprint]. arXiv:2510.23883.
Feretzakis, G., and Verykios, V. S. (2024). Trustworthy AI: securing sensitive data in large language models. AI 5, 2773–2800. doi: 10.3390/ai5040134
Green, T., Gubri, M., Puerto, H., Yun, S., and Oh, S. J. (2025). Large reasoning models are not private thinkers. arXiv [Preprint]. arXiv:2501.01245. doi: 10.18653/v1/2025.emnlp-main.1347
Guha, A., Samanta, D., Banerjee, A., and Agarwal, D. (2021). A deep learning model for information loss prevention from multi-page digital documents. IEEE Access 9, 80451–80465. doi: 10.1109/ACCESS.2021.3084841
Hashmi, E., Yamin, M. M., and Yayilgan, S. Y. (2024). Securing tomorrow: a comprehensive survey on the synergy of artificial intelligence and information security. AI Ethics 5, 1911–1929. doi: 10.1007/s43681-024-00529-z
He, X., Xu, G., Han, X., Wang, Q., Zhao, L., Shen, C., et al. (2025). Artificial intelligence security and privacy: a survey. Sci. China Inf. Sci. 68:181101. doi: 10.1007/s11432-025-4388-5
Juneja, G., Naga, J., Pasupulati, S., Albalak, A., Hua, W., Wang, W. Y., et al. (2025). “MAGPIE: a benchmark for multi-agent contextual privacy evaluation,” in Proceedings of Neural Information Processing Systems (NeurIPS).
Kaur, R., Gabrijelčič, D., and Klobučar, T. (2023). Artificial intelligence for cybersecurity: literature review and future research directions. Inf. Fusion 97:101804. doi: 10.1016/j.inffus.2023.101804
Kumar, A., Upadhyay, U., Sharma, G., Sharma, R. S., Mishra, N., and Kumawat, J. (2024). Strengthening AI governance through advanced cryptographic techniques. Int. J. Intell. Syst. Appl. Eng. 12, 553–560. Available online at: https://ijisae.org/index.php/IJISAE/article/view/4920
Liu, Y., Huang, J., Li, Y., Wang, D., and Xiao, B. (2025a). Generative AI model privacy: a survey. Artif. Intell. Rev. 58, 1–47. doi: 10.1007/s10462-024-11024-6
Liu, Y., Zhang, R., Luo, H., Lin, Y., Sun, G., Niyato, D., et al. (2025b). Secure multi-LLM agentic AI and agentification for edge general intelligence by zero-trust: a survey. arXiv [Preprint]. arXiv:2508.19870.
Lyu, L., Yu, H., Yang, Q., Li, X., Nandakumar, K., Zhou, J., et al. (2024). Privacy and robustness in federated learning: attacks and defenses. IEEE Trans. Neural Netw. Learn. Syst. 35, 8726–8746. doi: 10.1109/TNNLS.2022.3216981
Miao, W., Zhao, X., Zhang, Y., Chen, S., Li, X., Li, Q., et al. (2024). A deep learning-based method for preventing data leakage in electric power industrial internet of things business data interactions. Sensors 24:4069. doi: 10.3390/s24134069
Moia, V. H. G., Sanz, I. J., Rebello, G. A. F., de Meneses, R. D., Hitaj, B., and Lindqvist, U. (2025). LLM in the middle: a systematic review of threats and mitigations to real-world LLM-based systems. arXiv [Preprint]. arXiv:2509.10682. doi: 10.1016/j.cosrev.2026.100916
Narajala, V. S., and Narayan, O. (2025). Securing agentic AI: a comprehensive threat model and mitigation framework for generative AI agents. arXiv [Preprint]. arXiv:2504.19956. doi: 10.1109/ACAI68217.2025.11406310
Neyigapula, B. S. (2024). Secure AI model sharing: a cryptographic approach for encrypted model exchange. Int. J. Artif. Intell. Mach. Learn. 4, 48–60. doi: 10.51483/IJAIML.4.1.2024.48-60
Nie, A., Li, Y., Wang, X., and Chen, Z. (2024). PrivAgent: an agentic red-teaming framework for detecting systemic privacy vulnerabilities in LLM-based systems. arXiv [Preprint]. Available online at: https://arxiv.org/html/2412.05734v1 (Accessed March 27, 2026).
Puri, A., Kiran, C., Evuru, R., Chapados, N., Cappart, Q., and Lacoste, A. (2025). Malice in agentland: down the rabbit hole of LLM agent vulnerabilities. arXiv [Preprint]. arXiv:2504.01987.
Qiao, Y., Liu, D., Yang, H., Zhou, W., and Hu, S. (2025). Agent tools orchestration leaks more: dataset, benchmark, and mitigation. arXiv [Preprint], 1–27. Available online at: https://arxiv.org/html/2512.16310v1 (Accessed March 27, 2026).
Raza, S., Sapkota, R., Karkee, M., and Emmanouilidis, C. (2026). TRiSM for agentic AI: a review of trust, risk, and security management in LLM-based agentic multi-agent systems. AI Open 7, 71–95. doi: 10.1016/j.aiopen.2026.02.006
Shahriar, A., Rahman, M. N., Ahmed, S., Sadeque, F., and Parvez, M. R. (2025). A survey on agentic security: applications, threats and defenses. arXiv [Preprint]. arXiv:2510.06445.
Shapira, A., Gandhi, P. A., Habler, E., and Shabtai, A. (2024). Mind the web: the security of web-use agents. arXiv [Preprint]. arXiv:2402.07895.
Song, E., and Zhao, G. (2024). Privacy-preserving large language models: mechanisms, applications, and future directions. arXiv [Preprint]. arXiv:2412.06113.
Srivastava, S. S. (2025). MemoryGraft: persistent compromise of LLM agents via poisoned experience retrieval. arXiv [Preprint]. arXiv:2503.01234.
Tariq, M. I., Rehman, M. H., Ali, R., Khan, A., Kim, B.-S., Kwak, K. S., et al. (2020). A review of deep learning security and privacy defensive techniques. Mob. Inf. Syst. 2020:6535834. doi: 10.1155/2020/6535834
Wang, S., Yu, F., Liu, X., Qin, X., and Zhang, J. (2025). Privacy in action: towards realistic privacy mitigation and evaluation for LLM-powered agents. arXiv [Preprint]. arXiv:2502.09026. doi: 10.18653/v1/2025.findings-emnlp.925
Wang, Z., Li, Y., Chen, H., and Liu, Y. (2025). Security and privacy challenges in LLM-based autonomous agents: attacks and defenses. IEEE Access 13, 21534–21552.
Wu, H., and Cao, Y. (2023). Membership inference attacks on large-scale models: a survey. arXiv [Preprint]. arXiv:2312.10489.
Yagoubi, F., Al Mallah, R., and Badu-Marfo, G. (2026). AgentLeak: a full-stack benchmark for privacy leakage in multi-agent LLM systems. arXiv [Preprint]. doi: 10.48550/arXiv.2602.11510
Zhuo, R., Huffaker, B., Claffy, K. C., and Greenstein, S. (2021). The impact of the general data protection regulation on internet interconnection. Telecommun. Policy 45:102083. doi: 10.1016/j.telpol.2020.102083
Summary
Keywords
agentic AI, AI security, autonomous systems, data leakage, multi-agent architectures, privacy risks
Citation
Bhosale R, Chandre P, Mehetre S, Powar S, Mathur S and Ghandat A (2026) The dark side of autonomous intelligence: a survey on data leakage and privacy failures in agentic AI. Front. Comput. Sci. 8:1802727. doi: 10.3389/fcomp.2026.1802727
Received
03 February 2026
Revised
09 March 2026
Accepted
10 March 2026
Published
02 April 2026
Volume
8 - 2026
Edited by
Nilesh P. Sable, Vishwakarma Institute of Technology, India
Reviewed by
Alex C. Ng, La Trobe University, Australia
Seema Rani, Jaypee University of Information Technology, India
Copyright
© 2026 Bhosale, Chandre, Mehetre, Powar, Mathur and Ghandat.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Rohini Bhosale, rohinibhosale1987@gmail.com