
REVIEW article

Front. Public Health, 14 October 2025

Sec. Public Health Policy

Volume 13 - 2025 | https://doi.org/10.3389/fpubh.2025.1663298

This article is part of the Research Topic: Governing AI in Public Health: Legal, Ethical, and Policy Considerations.

Ethical and legal concerns in artificial intelligence applications for the diagnosis and treatment of lung cancer: a scoping review

  • 1Department of Public Health and Epidemiology, Faculty of Medicine, University of Debrecen, Debrecen, Hungary
  • 2Department of Thoracic Surgery, Fondazione Policlinico Universitario A. Gemelli IRCCS, Università Cattolica del Sacro Cuore, Rome, Italy
  • 3Faculty of Medicine, Institute of Preventive Medicine and Public Health, Semmelweis University, Budapest, Hungary
  • 4HUN-REN-UD Public Health Research Group, Department of Public Health and Epidemiology, Faculty of Medicine, University of Debrecen, Debrecen, Hungary
  • 5National Laboratory for Health Security, Epidemiology and Surveillance Centre, Semmelweis University, Budapest, Hungary

Introduction: Artificial intelligence (AI) is increasingly being integrated into healthcare, particularly in lung cancer care, including screening, diagnosis, treatment, and prognosis. While these applications offer promising advances, they also raise complex challenges that must be addressed to ensure responsible implementation in clinical practice. This scoping review explores the ethical and legal aspects of AI applications in lung cancer.

Methods: A search was conducted across PubMed, Scopus, Web of Science, Cochrane Library, PROSPERO, OAIster, and CABI. A total of 581 records were initially retrieved, of which 20 met the eligibility criteria and were included in the review. The PRISMA guidelines were followed.

Results: The most frequently reported ethical concern was data privacy. Other recurrent issues included informed consent, avoiding harm to patients, algorithmic bias and fairness, transparency, equity in AI access and use, and trust. The most frequently raised legal concerns were data protection and privacy, although issues relating to cybersecurity, liability, safety and effectiveness, the lack of appropriate regulation, and intellectual property law were also noted. Proposed solutions ranged from technical approaches to calls for regulatory and policy development. However, many studies lacked comprehensive legal analysis, and most included papers originated from high-income countries, highlighting the need for a broader global perspective.

Discussion: This review found that data privacy and protection are the most prominent ethical and legal concerns in AI applications for lung cancer care. Deep Learning (DL) applications, especially in diagnostic imaging, are closely tied to data privacy, lack of transparency, and algorithmic bias. Hybrid and multimodal AI systems raise additional concerns regarding informed consent and the lack of proper regulations. Ethical issues were more frequently addressed than legal ones, with limited consideration for global applicability, particularly in low- and lower middle-income countries. Although technical and policy solutions have been proposed, these remain largely unvalidated and fragmented, with limited real-world feasibility or scalability.

1 Introduction

Lung cancer is a significant public health concern, with 2.48 million new cases and 1.8 million deaths globally according to GLOBOCAN 2022. It remains the leading cause of cancer-related deaths worldwide (1). Lung cancer is primarily classified into small cell and non-small cell lung cancer (NSCLC), with the latter accounting for approximately 85% of all cases (2). Despite several medical advancements, lung cancer is usually detected at a late stage, with over half of patients diagnosed when curative treatment is no longer an option (3). This late detection, coupled with the aggressive nature of lung cancer, leads to poor prognosis, with age-standardized 5-year relative survival rates between 10 and 20% in most regions (4). This creates a considerable financial burden on healthcare systems and individuals (5). If left unaddressed, lung cancer is projected to impose the largest global economic burden of all cancers: tracheal, bronchus, and lung cancers are estimated to account for 15.4% of total costs, amounting to $3.9 trillion by 2050 (6). Therefore, given the significant public health impact of lung cancer, integrating advanced technologies such as AI-driven approaches for early detection and personalized treatment is a clinical imperative to reduce mortality and mitigate the global burden of the disease.

AI refers to the ability of computer systems to perform tasks that normally require human reasoning (7). AI encompasses Machine Learning (ML), which enables computers to learn from data and refine their decision-making (8). A specialized subset of ML, known as DL, uses algorithms that process data such as medical images through layered structures known as Artificial Neural Networks (ANNs) (8).
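To make the layered structure concrete, the following minimal sketch (illustrative only, not any model from the reviewed studies) shows how an ANN passes input features through weighted layers and a non-linear activation to produce a 0–1 score:

```python
import numpy as np

def relu(x):
    # non-linear activation: pass positive values through, zero out negatives
    return np.maximum(0.0, x)

def ann_forward(x, weights, biases):
    """Forward pass through a small fully connected network."""
    a = x
    for w, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ w + b)               # hidden layers
    z = a @ weights[-1] + biases[-1]      # output layer
    return 1.0 / (1.0 + np.exp(-z))       # sigmoid: a 0-1 score (e.g. malignancy)

rng = np.random.default_rng(0)
# toy network: 16 input features -> 8 hidden units -> 1 output
weights = [rng.normal(size=(16, 8)), rng.normal(size=(8, 1))]
biases = [np.zeros(8), np.zeros(1)]
score = ann_forward(rng.normal(size=(1, 16)), weights, biases)
```

In a real DL model the weights are learned from labeled data rather than drawn at random, and the layers are typically convolutional when the input is an image.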

AI technologies are increasingly integrated into the diagnosis and treatment of lung cancer, offering advances in early detection, image interpretation, decision support, and personalized therapy. For instance, AI algorithms now assist in interpreting CT scans for early-stage NSCLC and predicting patient outcomes using radiomics and machine learning models (9). Predictive AI models can accurately stage lung cancer and determine overall survival rates (9, 10). For example, deep learning models such as the neural network developed by Trebeschi et al. can predict one-year overall survival for stage 4 NSCLC by detecting morphological changes across patient follow-up CT scans (11, 12). Similarly, Sybil, a deep learning-based AI algorithm, has produced promising results in predicting the future risk of developing lung cancer from a single Low-Dose CT scan (13). AI models can also be trained to generate optimized treatment plans, including support for surgical decision-making, such as surgical risk prediction, and assistance with drug selection (9, 14).

Currently, AI has the strongest impact on cancer care in lung cancer imaging diagnostics, where DL algorithms applied to CT scans match human experts in sensitivity (≈82% vs. 81%) while significantly surpassing them in specificity (≈75% vs. 69%) (15). More recently, multi-attention ensemble models have further advanced performance, achieving 98.73% sensitivity and 98.96% specificity in classifying lung nodules from CT images, representing a 35% reduction in error rates compared to previous methods (16).
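The performance figures above follow directly from confusion-matrix counts; a small illustration with hypothetical counts (not the actual study data) shows how sensitivity, specificity, and a relative error reduction are computed:

```python
def sensitivity(tp, fn):
    # proportion of truly malignant cases correctly flagged
    return tp / (tp + fn)

def specificity(tn, fp):
    # proportion of benign cases correctly cleared
    return tn / (tn + fp)

def relative_error_reduction(old_error, new_error):
    # e.g. cutting the error rate from 2.0% to 1.3% is a 35% relative reduction
    return (old_error - new_error) / old_error

# hypothetical confusion-matrix counts for a nodule classifier
tp, fn, tn, fp = 820, 180, 750, 250
sens = sensitivity(tp, fn)                       # 0.82
spec = specificity(tn, fp)                       # 0.75
red = relative_error_reduction(0.020, 0.013)     # 0.35
```

Note that a "35% reduction in error rates" is relative: the absolute error only drops by 0.7 percentage points in this hypothetical case.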

However, alongside these advances, the use of AI in medicine has raised ethical and legal concerns since its emergence, particularly with regard to patient privacy, bias in algorithms, and accountability for errors (17). Early AI systems such as MYCIN in the 1970s highlighted issues of trust and liability despite demonstrating diagnostic potential (18). As modern AI tools grow more autonomous, scholars emphasize the need for transparent, regulated deployment to ensure equity and safety in healthcare (19).

Some of the ethical concerns include maintaining patient privacy when using large datasets to train models, the interpretability of “black-box” AI systems, and challenges related to informed consent and algorithmic bias of AI models (14, 20, 21).

Legal concerns, such as determining liability for AI-based technology and navigating data ownership and protection regulations, limit healthcare workers' ability to confidently make health-related decisions (22–24). AI is particularly vulnerable to cyber-attacks, which can lead to corrupted data, infected algorithms, or even threats to patient privacy through access to sensitive data (23, 25). These issues underscore the need for responsible AI development and clear ethical and regulatory frameworks as this technology becomes more widely implemented in lung cancer care (22, 26).

The ethical and legal challenges in AI-driven cancer research intersect in areas such as patient privacy, data ownership, and informed consent, where protecting individuals' rights is paramount. For example, current data protection regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation (GDPR) in the European Union, aim to safeguard users' privacy. The two domains diverge in that ethics often addresses broader questions of fairness, bias, and trustworthiness beyond the law, while legal frameworks focus on compliance with specific regulations and enforceable standards (27).

Given these premises, lung cancer serves as a critical domain where AI applications are rapidly evolving. The high stakes involved in lung cancer care amplify the consequences of ethical or legal oversights, yet the literature discussing these dimensions is dispersed and inconsistently framed across technical, medical, and legal publications.

To date, no comprehensive synthesis has mapped out the breadth of ethical and legal concerns associated with AI in lung cancer care. This scoping review is therefore warranted to systematically explore the existing literature, identify thematic trends, highlight under-researched issues, and outline proposed solutions or regulatory frameworks. In addition, the aim is to answer questions about which categories of ethical and legal concern are most prevalent and which mitigation strategies are being suggested.

2 Methods

We selected a scoping review given the novelty of the topic and the heterogeneity of the literature on AI-related ethical and legal concerns in lung cancer. This method was deemed the most appropriate way to map the existing evidence and highlight knowledge gaps. This scoping review was conducted in accordance with the methodological framework developed by Arksey and O’Malley, and guided by the Joanna Briggs Institute (JBI) recommendations, which are widely recognized for ensuring rigor and consistency in evidence synthesis (28, 29). The reporting of the scoping review follows the PRISMA extension for Scoping Reviews (PRISMA-ScR) checklist (30). The review follows the five-stage process: (1) identifying the research questions, (2) identifying relevant studies, (3) selecting studies, (4) charting the data, and (5) collating, summarizing, and reporting the results.

A protocol outlining the objectives and methodology of this scoping review was registered on the Open Science Framework (OSF) prior to conducting the review. The registration is publicly accessible at https://doi.org/10.17605/OSF.IO/8HUZJ.

2.1 Information sources and search strategy

The search strategy combined controlled vocabulary and free-text terms related to lung cancer, artificial intelligence, and ethical/legal concerns. The search query was as follows:

(“lung cancer*” OR “pulmonary cancer*” OR “lung neoplasm*” OR “pulmonary neoplasm*” OR “lung tumo*” OR “lung nodule*” OR “pulmonary nodule*”) AND (“artificial intelligence” OR “machine learning” OR “deep learning” OR “computer reasoning” OR “computational intelligence” OR “machine intelligence” OR “neural network*” OR algorithm* OR robotics) AND (ethic* OR moral* OR bioethic* OR jurisprudence OR litigat* OR legal* OR policy OR policies OR law*).
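Queries of this form can be assembled programmatically when adapting them across databases; the sketch below uses an abbreviated term set (the full review query contains more synonyms per concept):

```python
def or_group(terms):
    # quote multi-word phrases; join a concept's synonyms with OR
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

def build_query(*concept_groups):
    # combine the concept groups (population, exposure, outcome) with AND
    return " AND ".join(or_group(g) for g in concept_groups)

# abbreviated term lists, for illustration only
population = ["lung cancer*", "pulmonary cancer*", "lung neoplasm*"]
exposure = ["artificial intelligence", "machine learning", "deep learning"]
outcome = ["ethic*", "legal*", "jurisprudence"]

query = build_query(population, exposure, outcome)
```

Keeping the term lists in one place makes it easier to regenerate per-database variants (e.g. with field tags) without the groups drifting out of sync.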

A comprehensive literature search was performed using the following electronic databases: PubMed, Scopus, Web of Science, Cochrane Library, and PROSPERO. To capture the full scope of the literature and ensure comprehensive coverage of the field, we included grey literature by searching OAIster and CABI. The search was conducted without restrictions on language or publication date. Search strategies were adapted for each database as needed. Additionally, a snowball search was conducted by screening the reference lists of the articles included. The full search strategy, including the search queries used in each database, is provided in Supplementary file S1.

2.2 Eligibility criteria

The eligibility criteria were developed based on the Population–Exposure–Outcome (PEO) framework:

• Population (P): Patients with lung cancer at any stage, including those undergoing screening, diagnosis, treatment, or prognostic assessment.

• Exposure (E): Use of AI technologies in lung cancer care, including imaging analysis, predictive modeling, clinical decision-support systems, treatment planning, and prognostic assessment.

• Outcome (O): Ethical and legal issues arising from the use of AI in lung cancer care, as well as proposed solutions and mitigation strategies.

Studies were included if the full text was available and written in English. All types of publications, including original research articles, reviews, conference papers or proceedings, grey literature, editorials, opinions, letters, and commentaries, were included, while study protocols were excluded. The content of the publication needed to be relevant to lung cancer, either focusing directly on its diagnosis, treatment, screening, or prognosis, or mentioning lung cancer within broader discussions of multiple diseases. Additionally, studies had to involve the use of artificial intelligence techniques, such as machine learning or deep learning, in relation to lung cancer. Articles that mentioned any ethical or legal discussions were included if they focused on lung cancer or, in the case of multi-disease discussions, made explicit reference to lung cancer within the ethical or legal context.

The categories used to classify ethical and legal concerns were adopted from the study by Gerke et al. (23).

The ethical concerns were categorized as follows: informed consent to use, safety and transparency, algorithmic bias and fairness, and data privacy. The legal concerns were categorized as follows: safety and effectiveness, liability, data protection and privacy, cybersecurity, and intellectual property law. Additional ethical and legal concerns not covered by these categories were classified as “other.”

2.3 Selection of sources of evidence

The identified records were imported from each database into Endnote. Then, they were imported into Covidence for duplication removal and screening. Two independent reviewers conducted the initial data extraction (GC, NA). Any discrepancies were addressed through weekly consensus meetings (lasting approximately 1.5 h each). When consensus could not be reached, a third reviewer (OV) was consulted to adjudicate and provide a final decision.

2.4 Data charting process and data items

A data extraction form was developed and pilot-tested within Covidence. Two reviewers (GC, NA) independently charted the data, with discrepancies discussed and resolved by consensus with the help of a third party (OV). The following data items were extracted from each included study: publication ID, title, lead author, year of publication, country of affiliation, source type, aim of the publication, type of lung cancer discussed, AI-based technology addressed, application of AI technology in lung cancer care, ethical concerns, legal concerns, and suggested solutions.

2.5 Synthesis of results

Extracted data were collated and summarized in tabular form to provide an overview of the characteristics and scope of the included literature. A descriptive synthesis was conducted to map the ethical and legal concerns raised in relation to the use of AI in lung cancer screening, diagnosis, treatment, and prognosis including the types of technologies used, geographic distribution of studies, and recurring themes in ethical and legal context.

3 Results

3.1 Selection of sources of evidence

After applying the search strategy across all databases, 581 records were retrieved. Following duplicate removal, 400 articles were screened at the title and abstract level; of these, 200 were excluded and 200 proceeded to full-text retrieval. Among these 200 records, 32 full texts were not accessible, leaving 168 articles assessed against the eligibility criteria. Of these, 155 were excluded: 7 were the wrong publication type (protocols), 1 was in a language other than English, 48 did not discuss lung cancer, 30 did not discuss AI in lung cancer, and 69 did not include any ethical or legal discussion of AI in lung cancer. Thus, 13 studies met the eligibility criteria. An additional 7 relevant records were identified through snowball searching, bringing the total number of included publications to 20 (Figure 1).

Figure 1
Flowchart illustrating the identification and inclusion of studies via databases and registers. Initially, 586 records were identified: PubMed (112), Scopus (281), Web of Science (125), PROSPERO (9), Cochrane (30), OAIster (16), and CABI (13). After removing 186 records (181 duplicates, 5 inaccessible), 400 records were screened, excluding 200. Of 200 reports sought, 32 were not retrieved. From 168 assessed reports, 155 were excluded for various reasons. Thirteen studies were included, supplemented by seven from a snowball search, totaling 20 studies reviewed.

Figure 1. Preferred reporting items for systematic reviews and meta-analyses (PRISMA 2020) flowchart.

3.2 Synthesis of results

3.2.1 General characteristics of relevant studies

Our search identified studies published between 1996 and 2024, with 2021 being the year with the most publications [7 publications; (31–37)]. The distribution of publications based on the first author’s affiliation shows that the most frequently represented country is China [4 publications; (35, 36, 38, 39)], followed by India [3 publications; (40–42)], then 2 publications each for Italy (33, 34), France (43, 44), the United States (45, 46), and Australia (31, 32), while the remaining countries (the United Kingdom, Greece, Norway, Germany, and Canada) had one publication each (37, 47–50). Of the 20 studies, 18 were journal articles and 2 were conference proceedings publications (41, 46) (Table 1).

Table 1

Table 1. Main characteristics of the included studies.

3.2.2 Overview of the applications of AI in lung cancer

The reviewed studies demonstrate diverse applications of AI for different aspects of lung cancer care. The use of AI was classified into 4 categories: screening, diagnosis, treatment, and prognosis. In the context of screening, AI was employed to detect pulmonary nodules on chest radiographs (39, 43, 50), and to identify target sites and detect lung nodules in images using CADe (Computer-Aided Detection) systems (35, 44) (Table 2).

Table 2

Table 2. Overview of the AI algorithm and the ethical and legal concerns in the included studies.

Beyond detection, AI plays a role in lung cancer diagnosis, which was the most common application. The majority of publications reported the use of AI algorithms or AI-based systems to classify pulmonary nodules as malignant or benign, or to distinguish between lung cancer subtypes (31, 34, 36, 37, 40–43, 45–47, 49, 50). Additionally, AI tools were used to support lung cancer diagnosis using histological data. Applications included differentiating lung cancer types in pathology, classifying challenging cytological slide images, and analyzing ambiguous morphology in histopathological images from lung cancer biopsies (34, 35, 47, 48).

AI has been used to support various aspects of lung cancer treatment. Etienne et al. demonstrated that AI can assist in surgical procedures and support decision-making, including the use of robotics (43). Bellini et al. reported that AI-assisted surgery can reduce hospital stay and postoperative complications (34). In another study, it was shown that AI could contribute to personalized drug treatment recommendations and to guide targeted therapy selection and surgical planning (47). Similarly, Zhang et al. applied AI to enhance surgical precision and reduce invasiveness via RATS (Robot-Assisted Thoracic Surgery), as well as to plan personalized treatment and regulate irradiation time, dose rate, and imaging in radiotherapy (35). Rabbani et al. further advanced radiation therapy by using AI to predict dose-volume histograms and select the optimal angle for radiation (50). Additionally, Cucchiara et al. explored the integration of AI with radiomics and liquid biopsy for therapeutic purposes (33).

AI technologies were employed in several studies to support lung cancer prognosis. One study reported that AI was used to assist surgical decision-making by evaluating individual risk factors and enabling personalized clinical decisions (43). In another study, AI was applied to predict the risk of major complications and mortality after lung resection, as well as the risk of lung adenocarcinoma recurrence (34). In addition, Abbaker et al. focused on estimating postoperative prognosis, predicting therapy responses, assessing surgical risk, and forecasting cardio-respiratory morbidity and postoperative outcomes (47). Histological data were also used to support prognosis (48). AI was further used to stratify patients by mortality risk following radiotherapy and surgery, and to predict survival and cancer-specific outcomes (44). Prognostic models were also developed for early mortality and treatment failure (50).

3.2.3 Overview of ethical and legal concerns identified

The analysis of the 20 included studies revealed consistent ethical and legal challenges associated with AI applications in lung cancer (Table 2). Ethically, the most prominent concerns centered on data privacy [13 publications; (31, 32, 35, 36, 38–43, 47, 48, 50)], particularly in contexts involving sensitive imaging or genomic data, and the need for robust informed consent mechanisms. The principle of non-maleficence, or causing no harm to patients, emerged as the second most critical issue [4 publications; (36, 43, 46, 49)]. Studies highlighted risks to patients’ lives if AI systems fail to distinguish true from false-positive lung lesions, or provide inappropriate or inaccurate risk assessments, treatment recommendations, or diagnoses. Similarly, informed consent-related concerns [4 publications; (34, 40, 41, 47)] were identified as essential to upholding patient autonomy and ensuring comprehension in diagnostic and surgical decision-making. Furthermore, safety and transparency deficits in “black-box” deep learning models [3 publications; (37, 40, 47)] underscored the need for interpretable decision-making processes to ensure model reliability.

The principle of algorithmic fairness and bias was emphasized in two studies (45, 47), which noted that biased or under-representative training datasets could lead to unfair outcomes.

Equity in access and use [1 publication; (34)] emerged as a critical concern, whether in access to AI technologies or disparities in digital literacy among users. Addressing these issues is essential to ensure equitable demographic distribution of AI tools. Moreover, trust in AI systems [1 publication; (47)] was identified as a challenge, with the opaque nature of AI algorithms cited as a barrier to enhancing trustworthiness. Finally, liability within ethical frameworks [1 publication; (44)] was noted, raising questions about the extent of accountability for AI-driven decisions.

Legally, data protection and privacy [9 publications; (32–34, 36, 38–40, 43, 50)] dominated discussions, with studies highlighting compliance challenges under regulations such as GDPR or HIPAA. Liability ambiguities [3 publications; (40, 43, 44)] emerged, particularly surrounding responsibility for errors generated by AI tools. Cybersecurity concerns [2 publications; (39, 50)] were raised regarding potential hacking threats to datasets used in algorithms. Notably, only two studies comprehensively addressed the lack of proper regulation and legislation governing AI integration in lung cancer care (35, 50). A single study highlighted safety and effectiveness concerns, emphasizing that AI tools should be evaluated to meet legal requirements (43). Another study discussed intellectual property law and the importance of addressing regulatory aspects related to AI algorithm ownership (33). Finally, accountability was mentioned in one study as a key consideration (47).

3.2.4 Overview of the solutions

A total of 20 studies were reviewed to identify ethical and legal considerations in the use of AI for lung cancer diagnosis and treatment. Fifteen of the 20 suggested solutions for the ethical concerns presented (Table 3).

Table 3

Table 3. The solutions proposed for the ethical/legal concern in the included publications.

3.2.4.1 Ethical solutions

Several studies addressed key ethical concerns, with data privacy being the most commonly cited issue for which solutions were suggested (31, 32, 38–40, 42, 48, 50). Davri et al. (48) proposed the creation of a regulatory framework to ensure data security and confidentiality. Additionally, Rabbani et al. (50) emphasized the importance of a legal framework to protect personal data. Another solution proposed by Rabbani et al. involves the use of strong authentication methods. Other papers also proposed technical solutions, recommending the use of encryption (40, 42). Kumar et al. (38) suggested using blockchain in combination with DL, while others advocated decentralized methods, such as distributed learning or federated deep learning (31, 32).
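The decentralized methods cited here share model parameters rather than patient records. The following minimal federated-averaging sketch with synthetic data (illustrative only, not any cited system) shows the core idea: each site trains locally, and only weights leave the site:

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One site's gradient steps on its private data (logistic regression)."""
    w = global_w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # gradient of the log-loss
    return w

def federated_round(global_w, clients):
    """FedAvg: average locally trained weights, weighted by dataset size."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# three simulated hospitals; patient-level data never leaves each site
rng = np.random.default_rng(1)
true_w = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (X @ true_w > 0).astype(float)
    clients.append((X, y))

w = np.zeros(3)
for _ in range(20):  # communication rounds
    w = federated_round(w, clients)
# the averaged model recovers the sign pattern of the underlying rule
```

In practice the server aggregates updates from real hospitals over a network, and additional safeguards (secure aggregation, differential privacy) are often layered on, since model updates themselves can leak information.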

To avoid harming patients, Kriegsmann et al. recommend that AI algorithms be supervised by humans to prevent misdiagnosis (49).

One study suggested that class activation maps (CAM) and gradient-weighted CAM (Grad-CAM) can improve model explainability, addressing safety and transparency concerns in deep learning (37).
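As a rough illustration of the idea (not the cited implementation), a class activation map weights the final convolutional feature maps by the class's weights, keeps the positive evidence, and normalizes the result for overlay on the image:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM: weighted sum of final-conv feature maps, ReLU, normalized to [0, 1]."""
    # feature_maps: (channels, H, W); class_weights: (channels,)
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)      # keep only positively contributing regions
    if cam.max() > 0:
        cam /= cam.max()            # normalize for heat-map overlay on the scan
    return cam

rng = np.random.default_rng(0)
feats = rng.random((8, 7, 7))       # 8 feature maps from a 7x7 final conv layer
w_class = rng.normal(size=8)        # weights for one class (e.g. "malignant")
cam = class_activation_map(feats, w_class)
```

Grad-CAM generalizes this by deriving the channel weights from gradients of the class score with respect to the feature maps, so it applies to architectures without a global-average-pooling head.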

Concerning algorithmic fairness and bias, Vaidya et al. (45) suggested a bias mitigation strategy, while Abbaker et al. (47) suggested a legal framework for AI in healthcare.

The study by Rabbani et al. (50) discusses data ownership as an ethical concern that should be addressed through appropriate policy regulation.

3.2.4.2 Legal solutions

As ethical and legal concerns often overlap, some of the solutions proposed for data protection and privacy in the reviewed publications were intended to address the same ethical concern: data privacy. Clearly separating the two domains was difficult.

Many ethical concerns have corresponding legal solutions. These include the use of a blockchain-based data sharing method (38) and distributed learning approaches (32, 39). Furthermore, Cucchiara et al. (33) and Bellini et al. (34) discussed data protection regulations in their studies, recommending compliance with robust legal frameworks. Huang et al. (36) also emphasized the importance of a regulatory framework to safeguard sensitive medical data and ensure confidentiality.

Two studies proposed legal solutions involving the creation of guidelines and the establishment of a legal framework to address liability issues (40, 43).

4 Discussion

This scoping review systematically identified the predominant ethical and legal concerns associated with AI applications in lung cancer care, as well as the proposed solutions to address these concerns.

4.1 Overall findings

Of the identified ethical and legal concerns, issues related to data privacy and data protection were found to be the most significant. This finding aligns with the work of Cartolovni et al., whose study on AI-based medical decision-support tools similarly identified privacy considerations as a major ethical and legal challenge (51).

The use of big data for training and validating AI algorithms is fundamentally important (52), yet it inevitably raises significant privacy concerns, making robust data protection measures essential (53). Several established frameworks address these issues, including the GDPR (in force in Europe since May 2018), HIPAA for health data protection in the United States, and the Global Initiative on Ethics of Autonomous and Intelligent Systems. Beyond regulatory compliance, the ethical necessity to protect privacy has actively driven technological innovation, leading to the development of AI models with privacy preservation mechanisms such as federated learning (31, 39). Furthermore, several studies (n = 8), including those by Joshi et al. (41) and Horry et al. (31), recognized ethical concerns such as data privacy and informed consent but did not consider any legal concerns. This reflects a trend in which the focus falls more on ethical issues than legal ones, not just in lung cancer but across the broader healthcare field (54).

The predominance of studies from high-income and upper-middle-income countries introduces an important limitation for the generalizability of our findings. As most of the included studies originate from China, Italy, France, Australia, and the United States, there is a lack of information on how AI will function ethically and legally in low- and lower middle-income countries (31–36, 38, 39, 43–46). Such an absence of studies raises concerns about global equity. It also limits our understanding of how to implement AI in contexts where healthcare infrastructures, regulatory environments, and cultural perspectives on ethics may differ substantially. Future research should critically examine these disparities and include studies from diverse regions to ensure that AI applications in lung cancer are equitable, context-sensitive, and globally relevant.

4.2 Patterns between AI types and ethical/legal concerns

Specific relationships between AI types, application areas, and the nature of ethical/legal concerns can be observed, although the evidence remains uneven. Diagnostic DL applications, particularly in imaging, are most often associated with risks of data privacy, a lack of transparency/interpretability, and algorithmic bias. This reflects their “black box” nature and their reliance on large datasets (37, 38, 40, 42, 45, 47). Hybrid or multimodal AI systems integrate clinical records, genomic data, and imaging. They raise compounded challenges, including data privacy, informed consent, and a lack of regulatory oversight (31, 32, 35, 41, 43, 48, 50).

The type of AI application directly influences the nature of the ethical and legal concerns it raises. Diagnostic DL models tend to foreground issues of privacy and transparency, while hybrid approaches, used across all areas of lung cancer care, frequently highlight gaps in existing regulations. Yet most studies address these concerns in general terms, without explicitly linking them to the architecture or operational context of the AI systems involved.

4.3 Validity and practicality of proposed solutions

Technical solutions such as homomorphic encryption (42), federated learning (31, 39), and Grad-CAM explainability (37) show promising results; however, they are primarily reported in experimental or small-scale contexts, and their deployment at clinical scale is rarely validated. Blockchain for privacy (38) and bias mitigation algorithms (45) have also been proposed, but their trade-offs, such as reduced model performance or higher computational demands, are rarely assessed. Only one study (45) acknowledged that bias mitigation reduced performance.

Policy proposals, such as labeling robots as “electronic persons” (43), have been described as legally ambiguous and inconsistent with current jurisprudence, potentially undermining real-world applicability. Legal recommendations, such as clearer data ownership laws (50), lack jurisdiction-specific detail, particularly regarding interoperability between regulatory frameworks like GDPR and HIPAA.

Taken together, the proposed solutions can be synthesized into three broad categories: technical safeguards (e.g., encryption, federated learning, blockchain, explainability tools), legal structures (e.g., liability frameworks, data ownership regulations), and policy guidelines (e.g., international standards or governance frameworks). While these approaches highlight possible pathways forward, they are often presented in isolation and rarely assessed for feasibility, scalability, or readiness for clinical use, a limitation also noted in previous reviews of AI governance proposals (51). Technical measures may enhance privacy but reduce performance; legal structures can improve accountability but face jurisdiction barriers; and policy guidelines often remain aspirational. Consequently, most proposals remain abstract. These trade-offs underscore the need for future research that critically examines not only the conceptual merit of these proposals but also their operational viability in diverse healthcare settings.

5 Conclusion

This review surfaces a vital concern: while ethical and legal issues are widely acknowledged, the depth of analysis often remains surface-level, lacking specificity and operational grounding. Ethical concerns are explored more thoroughly than legal ones, and the mapping between AI type, clinical application, ethical and legal implications, and actionable solutions is still underdeveloped. A meaningful step forward would be to develop context-aware, AI-type-specific governance frameworks that are technically feasible, legally binding, and globally inclusive, a need not currently addressed by the literature.

Author contributions

GC: Writing – original draft, Writing – review & editing, Conceptualization, Investigation, Methodology, Software. FL: Writing – review & editing. CS: Writing – review & editing. NA: Writing – review & editing, Investigation, Methodology, Writing – original draft. RÁ: Writing – original draft, Supervision, Writing – review & editing. OV: Supervision, Writing – review & editing, Writing – original draft.

Funding

The author(s) declare that financial support was received for the research and/or publication of this article. This scoping review is part of the LANTERN project of ERA PerMed JTC2022 and 2023-1.2.1-ERA_NET-2023-00007.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process or the final decision.

Generative AI statement

The authors declare that no Gen AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpubh.2025.1663298/full#supplementary-material

References

1. Zhou, J, Xu, Y, Liu, J, Feng, L, Yu, J, and Chen, D. Global burden of lung cancer in 2022 and projections to 2050: incidence and mortality estimates from GLOBOCAN. Cancer Epidemiol. (2024) 93:102693. doi: 10.1016/j.canep.2024.102693

2. Gridelli, C, Rossi, A, Carbone, DP, Guarize, J, Karachaliou, N, Mok, T, et al. Non-small-cell lung cancer. Nat Rev Dis Primers. (2015) 1:15009. doi: 10.1038/nrdp.2015.9

3. Zhang, J, IJzerman, MJ, Oberoi, J, Karnchanachari, N, Bergin, RJ, Franchini, F, et al. Time to diagnosis and treatment of lung cancer: a systematic overview of risk factors, interventions and impact on patient outcomes. Lung Cancer. (2022) 166:27–39. doi: 10.1016/j.lungcan.2022.01.015

4. Bi, J, Tuo, J, Xiao, Y, Tang, D, Zhou, X, Jiang, Y, et al. Observed and relative survival trends of lung cancer: a systematic review of population-based cancer registration data. Thorac Cancer. (2023) 15:142–51. doi: 10.1111/1759-7714.15170

5. Yousefi, M, Jalilian, H, Heydari, S, Seyednejad, F, and Mir, N. Cost of lung Cancer: a systematic review. Value Health Reg Issues. (2023) 33:17–26. doi: 10.1016/j.vhri.2022.07.007

6. Chen, S, Cao, Z, Prettner, K, Kuhn, M, Yang, J, Jiao, L, et al. Estimates and projections of the global economic cost of 29 cancers in 204 countries and territories from 2020 to 2050. JAMA Oncol. (2023) 9:465–72. doi: 10.1001/jamaoncol.2022.7826

7. Naik, N, Hameed, BMZ, Shetty, DK, Swain, D, Shah, M, Paul, R, et al. Legal and ethical consideration in artificial intelligence in healthcare: who takes responsibility? Front Surg. (2022) 9:862322. doi: 10.3389/fsurg.2022.862322

8. Mintz, Y, and Brodie, R. Introduction to artificial intelligence in medicine. Minim Invasive Ther Allied Technol. (2019) 28:73–81. doi: 10.1080/13645706.2019.1575882

9. Pei, Q, Luo, Y, Chen, Y, Li, J, Xie, D, and Ye, T. Artificial intelligence in clinical applications for lung cancer: diagnosis, treatment and prognosis. Clin Chem Lab Med. (2022) 60:1974–83. doi: 10.1515/cclm-2022-0291

10. Huang, D, Li, Z, Jiang, T, Yang, C, and Li, N. Artificial intelligence in lung cancer: current applications, future perspectives, and challenges. Front Oncol. (2024) 14:1486310. doi: 10.3389/fonc.2024.1486310

11. Yang, D, Miao, Y, Liu, C, Zhang, N, Zhang, D, Guo, Q, et al. Advances in artificial intelligence applications in the field of lung cancer. Front Oncol. (2024) 14:1449068. doi: 10.3389/fonc.2024.1449068

12. Prognostic value of deep learning-mediated treatment monitoring in lung cancer patients receiving immunotherapy – PubMed [Internet]. Available online at: https://pubmed.ncbi.nlm.nih.gov/33738253/ (Accessed September 7, 2025)

13. Sybil: A validated deep learning model to predict future lung cancer Risk from a single low-dose chest computed tomography – PubMed [Internet]. Available online at: https://pubmed.ncbi.nlm.nih.gov/36634294/ (Accessed September 7, 2025)

14. Wang, Y, Cai, H, Pu, Y, Li, J, Yang, F, Yang, C, et al. The value of AI in the diagnosis, treatment, and prognosis of malignant lung cancer. Front Radiol. (2022) 2:810731. doi: 10.3389/fradi.2022.810731

15. Wang, TW, Hong, JS, Chiu, HY, Chao, HS, Chen, YM, and Wu, YT. Standalone deep learning versus experts for diagnosis lung cancer on chest computed tomography: a systematic review. Eur Radiol. (2024) 34:7397–407. doi: 10.1007/s00330-024-10804-6

16. Saha, U, and Prakash, S. Multi-attention stacked ensemble for lung cancer detection in CT scans. arXiv; (2025). Available online at: http://arxiv.org/abs/2507.20221 (Accessed September 7, 2025)

17. Char, DS, Shah, NH, and Magnus, D. Implementing machine learning in health care — addressing ethical challenges. N Engl J Med. (2018) 378:981–3. doi: 10.1056/NEJMp1714229

18. Computer-Based Medical Consultations: Mycin. Elsevier; (1976). Available online at: https://linkinghub.elsevier.com/retrieve/pii/B9780444001795X5001X (Accessed June 18, 2025)

19. Morley, J, Floridi, L, Kinsey, L, and Elhalal, A. From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics. (2020) 26:2141–68. doi: 10.1007/s11948-019-00165-5

20. Huang, S, Yang, J, Shen, N, Xu, Q, and Zhao, Q. Artificial intelligence in lung cancer diagnosis and prognosis: current application and future perspective. Semin Cancer Biol. (2023) 89:30–7. doi: 10.1016/j.semcancer.2023.01.006

21. Quanyang, W, Yao, H, Sicong, W, Linlin, Q, Zewei, Z, Donghui, H, et al. Artificial intelligence in lung cancer screening: detection, classification, prediction, and prognosis. Cancer Med. (2024) 13:e7140. doi: 10.1002/cam4.7140

22. Lekadir, K, Frangi, AF, Porras, AR, Glocker, B, Cintas, C, Langlotz, CP, et al. FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare. BMJ. (2025) 388:e081554. doi: 10.1136/bmj-2024-081554

23. Gerke, S, Minssen, T, and Cohen, G. Chapter 12 – Ethical and legal challenges of artificial intelligence-driven healthcare. In: A Bohr and K Memarzadeh, editors. Artificial intelligence in healthcare. Academic Press, imprint of Elsevier (2020). 295–336.

24. Bottomley, D, and Thaldar, D. Liability for harm caused by AI in healthcare: an overview of the core legal concepts. Front Pharmacol. (2023) 14:1297353. doi: 10.3389/fphar.2023.1297353

25. Murdoch, B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics. (2021) 22:122. doi: 10.1186/s12910-021-00687-3

26. Liu, S, and Guo, LR. Data ownership in the AI-powered integrative health care landscape. JMIR Med Inform. (2024) 12:e57754. doi: 10.2196/57754

27. Froicu, EM, Creangă-Murariu, I, Afrăsânie, VA, Gafton, B, Alexa-Stratulat, T, Miron, L, et al. Artificial intelligence and decision-making in oncology: a review of ethical, legal, and informed consent challenges. Curr Oncol Rep. (2025) 27:1002–12. doi: 10.1007/s11912-025-01698-8

28. Arksey, H, and O’Malley, L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. (2005) 8:19–32. doi: 10.1080/1364557032000119616

29. Aromataris, E, Lockwood, C, Porritt, K, Pilla, B, and Jordan, Z, editors. JBI Manual for evidence synthesis. JBI; (2024). Available online at: https://jbi-global-wiki.refined.site/space/MANUAL (Accessed May 28, 2024).

30. Tricco, AC, Lillie, E, Zarin, W, O’Brien, KK, Colquhoun, H, Levac, D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. (2018) 169:467–73. doi: 10.7326/M18-0850

31. Horry, MJ, Chakraborty, S, Pradhan, B, Paul, M, Gomes, DPS, and Ul-Haq, A. Deep mining generation of lung cancer malignancy models from chest X-ray images. (2020). Available online at: http://arxiv.org/abs/2012.05447

32. Field, M, Vinod, S, Aherne, N, Carolan, M, Dekker, A, Delaney, G, et al. Implementation of the Australian computer-assisted Theragnostics (AusCAT) network for radiation oncology data extraction, reporting and distributed learning. J Med Imaging Radiat Oncol. (2021) 65:627–36. doi: 10.1111/1754-9485.13287

33. Cucchiara, F, Petrini, I, Romei, C, Crucitta, S, Lucchesi, M, Valleggi, S, et al. Combining liquid biopsy and radiomics for personalized treatment of lung cancer patients. State of the art and new perspectives. Pharmacol Res. (2021) 169:105643. doi: 10.1016/j.phrs.2021.105643

34. Bellini, V, Valente, M, Del Rio, P, and Bignami, E. Artificial intelligence in thoracic surgery: a narrative review. J Thorac Dis. Available online at: https://jtd.amegroups.org/article/view/56034/html (Accessed May 21, 2025)

35. Zhang, H, Meng, D, Cai, S, Guo, H, Chen, P, Zheng, Z, et al. The application of artificial intelligence in lung cancer: a narrative review. Transl Cancer Res. (2021) 10:2478–87. doi: 10.21037/tcr-20-3398

36. Huang, G, Wei, X, Tang, H, Bai, F, Lin, X, and Xue, D. A systematic review and meta-analysis of diagnostic performance and physicians’ perceptions of artificial intelligence (AI)-assisted CT diagnostic technology for the classification of pulmonary nodules. J Thorac Dis. (2021) 13:4797–811. doi: 10.21037/jtd-21-810

37. Kaliyugarasan, S, Lundervold, A, and Lundervold, AS. Pulmonary nodule classification in lung cancer from 3D thoracic CT scans using fastai and MONAI. Int J Interact Multimed Artif Intell. (2021) 6:83–9. doi: 10.9781/ijimai.2021.05.002

38. Kumar, R, Wang, W, Kumar, J, Yang, T, Khan, A, Ali, W, et al. An integration of blockchain and AI for secure data sharing and detection of CT images for the hospitals. Comput Med Imaging Graph. (2021) 87:101812. doi: 10.1016/j.compmedimag.2020.101812

39. Fan, KF, Liu, LX, and Yang, MZ. Federated learning of lung nodule detection based on dual mechanism differential privacy protection. HCIS. (2024) 14:19. doi: 10.22967/HCIS.2024.14.019

40. Sangeetha, SKB, Mathivanan, SK, Karthikeyan, P, Rajadurai, H, Shivahare, BD, Mallik, S, et al. An enhanced multimodal fusion deep learning neural network for lung cancer classification. Syst Soft Comput. (2024) 6:200068. doi: 10.1016/j.sasc.2023.200068

41. Joshi, A, Maiti, J, Sharma, V, Dewangan, O, Jain, S, and Gurupahchan, DK. Machine learning-based classification of lung cancer types from radiological images. In IEEE Comput Soc ; (2023), 1–7. Available online at: https://www.computer.org/csdl/proceedings-article/icaiihi/2023/10489617/1Waa8VmsLok (Accessed May 21, 2025)

42. Adhikary, S, Dutta, S, and Dwivedi, AD. Secret learning for lung cancer diagnosis-a study with homomorphic encryption, texture analysis and deep learning. Biomed Phys Eng Express. (2023) 10. doi: 10.1088/2057-1976/ad0b4b

43. Etienne, H, Hamdi, S, Le Roux, M, Camuset, J, Khalife-Hocquemiller, T, Giol, M, et al. Artificial intelligence in thoracic surgery: past, present, perspective and limits. Eur Respir Rev Off J Eur Respir Soc. (2020) 29:200010. doi: 10.1183/16000617.0010-2020

44. de Margerie-Mellon, C, and Chassagnon, G. Artificial intelligence: a critical review of applications for lung nodule and lung cancer. Diagn Interv Imaging. (2023) 104:11–7. doi: 10.1016/j.diii.2022.11.007

45. Vaidya, A, Chen, RJ, Williamson, DFK, Song, AH, Jaume, G, Yang, Y, et al. Demographic bias in misdiagnosis by computational pathology models. Nat Med. (2024) 30:1174–90. doi: 10.1038/s41591-024-02885-z

47. Abbaker, N, Minervini, F, Guttadauro, A, Solli, P, Cioffi, U, and Scarci, M. The future of artificial intelligence in thoracic surgery for non-small cell lung cancer treatment: a narrative review. Front Oncol. (2024) 14:1347464. doi: 10.3389/fonc.2024.1347464

48. Davri, A, Birbas, E, Kanavos, T, Ntritsos, G, Giannakeas, N, Tzallas, AT, et al. Deep learning for lung cancer diagnosis, prognosis and prediction using histological and cytological images: a systematic review. Cancers. (2023) 15:3981. doi: 10.3390/cancers15153981

49. PMC. Deep learning for the classification of small-cell and non-small-cell lung cancer. Available online at: https://pmc.ncbi.nlm.nih.gov/articles/PMC7352768/ (Accessed May 21, 2025)

50. Rabbani, M, Kanevsky, J, Kafi, K, Chandelier, F, and Giles, FJ. Role of artificial intelligence in the care of patients with nonsmall cell lung cancer. Eur J Clin Investig. (2018) 48:e12901. doi: 10.1111/eci.12901

51. Čartolovni, A, Tomičić, A, and Lazić Mosler, E. Ethical, legal, and social considerations of AI-based medical decision-support tools: a scoping review. Int J Med Inform. (2022) 161:104738. doi: 10.1016/j.ijmedinf.2022.104738

52. Price, WN, and Cohen, IG. Privacy in the age of medical big data. Nat Med. (2019) 25:37–43. doi: 10.1038/s41591-018-0272-7

53. Yadav, N, Pandey, S, Gupta, A, Dudani, P, Gupta, S, and Rangarajan, K. Data privacy in healthcare: in the era of artificial intelligence. Indian Dermatol Online J. (2023) 14:788–92. doi: 10.4103/idoj.idoj_543_23

54. Karimian, G, Petelos, E, and Evers, S. The ethical issues of the application of artificial intelligence in healthcare: a systematic scoping review. AI Ethics. (2022) 2:1–13. doi: 10.1007/s43681-021-00131-7

Keywords: ethics, law – moral, artificial intelligence, machine learning, lung cancer

Citation: Chamouni G, Lococo F, Sassorossi C, Atuhaire N, Ádány R and Varga O (2025) Ethical and legal concerns in artificial intelligence applications for the diagnosis and treatment of lung cancer: a scoping review. Front. Public Health. 13:1663298. doi: 10.3389/fpubh.2025.1663298

Received: 10 July 2025; Accepted: 30 September 2025;
Published: 14 October 2025.

Edited by:

Hannah Van Kolfschooten, University of Amsterdam, Netherlands

Reviewed by:

Ridwan Islam Sifat, University of Maryland, Baltimore County, United States
Ricardo De Moraes E. Soares, Instituto Politecnico de Setubal (IPS), Portugal

Copyright © 2025 Chamouni, Lococo, Sassorossi, Atuhaire, Ádány and Varga. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ghenwa Chamouni, Z2hlbndhLmNoYW1vdW5pQG1lZC51bmlkZWIuaHU=