<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Artif. Intell.</journal-id>
<journal-title>Frontiers in Artificial Intelligence</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Artif. Intell.</abbrev-journal-title>
<issn pub-type="epub">2624-8212</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/frai.2023.1149082</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Artificial Intelligence</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Specific challenges posed by artificial intelligence in research ethics</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Bouhouita-Guermech</surname> <given-names>Sarah</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1881303/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Gogognon</surname> <given-names>Patrick</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1727682/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>B&#x000E9;lisle-Pipon</surname> <given-names>Jean-Christophe</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/1396876/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>School of Public Health, Universit&#x000E9; de Montr&#x000E9;al</institution>, <addr-line>Montr&#x000E9;al, QC</addr-line>, <country>Canada</country></aff>
<aff id="aff2"><sup>2</sup><institution>Centre de recherche, CHU Sainte-Justine</institution>, <addr-line>Montr&#x000E9;al, QC</addr-line>, <country>Canada</country></aff>
<aff id="aff3"><sup>3</sup><institution>Faculty of Health Sciences, Simon Fraser University</institution>, <addr-line>Burnaby, BC</addr-line>, <country>Canada</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Fred Wright, North Carolina State University, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Surapaneni Krishna Mohan, Panimalar Medical College Hospital and Research Institute, India; Junaid S. Kalia, NeuroCare.AI, United States</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Jean-Christophe B&#x000E9;lisle-Pipon <email>jean-christophe_belisle-pipon&#x00040;sfu.ca</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>06</day>
<month>07</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>6</volume>
<elocation-id>1149082</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>01</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>13</day>
<month>06</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2023 Bouhouita-Guermech, Gogognon and B&#x000E9;lisle-Pipon.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Bouhouita-Guermech, Gogognon and B&#x000E9;lisle-Pipon</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract>
<sec>
<title>Background</title>
<p>The twenty-first century is often described as the era of artificial intelligence (AI), which raises many questions regarding its impact on society. AI is already significantly changing practices in many fields, and research ethics (RE) is no exception. It raises many challenges, including responsibility, privacy, and transparency. Research ethics boards (REBs) were established to ensure that ethical practices are adequately followed during research projects. This scoping review aims to identify the challenges AI poses for research ethics and to investigate whether REBs are equipped to evaluate them.</p>
</sec>
<sec>
<title>Methods</title>
<p>Three electronic databases were selected to collect peer-reviewed articles that fit the inclusion criteria (English or French, published between 2016 and 2021, containing AI, RE, and REB). Two investigators independently reviewed each article, screening with Covidence and then coding with NVivo.</p>
</sec>
<sec>
<title>Results</title>
<p>Of the 657 articles initially retrieved, 28 relevant papers remained in the final sample for our scoping review. The selected literature described AI in research ethics (i.e., views on current guidelines, key ethical concepts and approaches, key issues of the current state of AI-specific RE guidelines) and REBs regarding AI (i.e., their roles, scope and approaches, key practices and processes, limitations and challenges, stakeholder perceptions). However, the literature often described REBs&#x00027; ethical assessment practices for AI research projects as lacking knowledge and tools.</p>
</sec>
<sec>
<title>Conclusion</title>
<p>Ethical reflection is moving forward, while the adaptation of normative guidelines to AI&#x00027;s reality is still lagging. This impacts REBs and most stakeholders involved with AI. Indeed, REBs are not adequately equipped to evaluate the ethics of AI research and require standard guidelines to help them do so.</p>
</sec></abstract>
<kwd-group>
<kwd>artificial intelligence</kwd>
<kwd>AI ethics</kwd>
<kwd>normative guidance</kwd>
<kwd>research ethics</kwd>
<kwd>research ethics board</kwd>
</kwd-group>
<counts>
<fig-count count="4"/>
<table-count count="4"/>
<equation-count count="0"/>
<ref-count count="66"/>
<page-count count="17"/>
<word-count count="14464"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Medicine and Public Health</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1. Introduction</title>
<p>The twenty-first century is often described as the era of artificial intelligence (AI) (Brynjolfsson and Andrew, <xref ref-type="bibr" rid="B14">2017</xref>). For a long time, humans have conceptualized an autonomous entity capable of human-like functions and more, and many innovations preceded what we now know as AI (Stark and Pylyshyn, <xref ref-type="bibr" rid="B61">2020</xref>). Mathematical and computational progress has been decisive in making today&#x00027;s AI possible and allowing it to flourish so quickly over the past few years (Calmet and John, <xref ref-type="bibr" rid="B15">1997</xref>; Xu et al., <xref ref-type="bibr" rid="B66">2021</xref>). Many are betting on AI&#x00027;s potential to revolutionize most fields. Yet, as ubiquitous as it seems, AI&#x00027;s role in our society remains ambiguous. Although AI comes in different forms, it is essentially designed to simulate human intelligence (Mintz and Brodie, <xref ref-type="bibr" rid="B48">2019</xref>). AI takes many forms: voice or facial recognition applications, medical diagnosis systems (radiology, dermatology, etc.), algorithms that improve user services, and more (Copeland, <xref ref-type="bibr" rid="B21">2022</xref>). AI is mainly used to increase productivity and make tasks less burdensome; it can absorb and analyze more data in a shorter period than humans can. Indeed, some have observed increased patient satisfaction, better financial performance, and better data management in healthcare (Davenport and Rajeev, <xref ref-type="bibr" rid="B23">2018</xref>). Many innovations have emanated from AI&#x00027;s ability to collect large sets of data, which has resulted in better predictions on different issues, helping to make sense of information collected throughout history or to depict puzzling phenomena more efficiently (The Royal Society, The Alan Turing Institute, <xref ref-type="bibr" rid="B64">2019</xref>).</p>
<p>However, advances in AI come with concerns about ethical, legal, and social issues (B&#x000E9;lisle-Pipon et al., <xref ref-type="bibr" rid="B9">2021</xref>). AI systems (AIS) are part of professionals&#x00027; decision-making and occasionally take over that role, raising the question of how responsibilities and functions are divided between the participating parties (Dignum, <xref ref-type="bibr" rid="B24">2018</xref>). Another issue worth investigating is data bias. AI is initially programmed by a group of individuals to adhere to a set of pre-established data. This data could already be biased (i.e., favoring one group of people over another based on their race or socioeconomic status) by representing one specific group and marginalizing the rest (M&#x000FC;ller, <xref ref-type="bibr" rid="B50">2021</xref>). Another fundamental issue to consider is data privacy. People are worried about how their data are used, as data have become easier for big companies to access (Mazurek and Karolina, <xref ref-type="bibr" rid="B41">2019</xref>). It is now much harder to track where all the existing information goes, and this lack of transparency has decreased the public&#x00027;s trust. Many actors, such as industry representatives, governments, academics, and civil society, are working toward building better frameworks and regulations to design, develop, and implement AI efficiently (Cath, <xref ref-type="bibr" rid="B16">2018</xref>). Considering the multidisciplinary aspect of AI, different experts are called upon to provide their knowledge and expertise on the matter (B&#x000E9;lisle-Pipon et al., <xref ref-type="bibr" rid="B10">2022</xref>). Many fields must leave room to adjust their standards of practice. One such field, discussed in this study, is research ethics.</p>
<p>Research ethics boards (REBs; the term REB is used for simplicity and includes RECs, research ethics committees, and IRBs, institutional review boards) were created to ensure that ethical practices are adequately followed during research projects, that participants are protected, and that benefits outweigh the induced harms (Bonnet and B&#x000E9;n&#x000E9;dicte, <xref ref-type="bibr" rid="B12">2009</xref>). To achieve this, they follow existing codes and regulations. For instance, REBs in Canada turn to the <italic>Canadian Tri-Council Policy Statement (TCPS2)</italic> to build their research ethics framework, whereas the US uses the <italic>US Common Rule</italic> as a model (Page and Jeffrey, <xref ref-type="bibr" rid="B55">2017</xref>). Many countries have guidelines and laws that serve as a starting point to set boundaries for AI use; however, ordinances and regulations regarding AI remain limited (O&#x00027;Sullivan et al., <xref ref-type="bibr" rid="B54">2019</xref>). This lack of tools makes it harder for REBs to adjust to the new challenges created by AI, and the gap reflects the need to better understand the current state of knowledge and findings in research ethics regarding AI.</p>
<p>To inform and assist REBs in their challenges with AI, we conducted a scoping review of the literature on REBs&#x00027; current practices and the challenges AI may pose during their evaluations. Specifically, this article aims to identify the issues and good practices that can support REBs&#x00027; mission in research involving AI. To our knowledge, this is the first review on this topic. After gathering and analyzing the relevant articles, we discuss the critical elements of AI research ethics while considering REBs&#x00027; role.</p>
</sec>
<sec id="s2">
<title>2. Methodology</title>
<p>To better understand REBs&#x00027; current practices toward AI in research, we conducted a scoping review of articles retrieved from PubMed, Ovid, and Web of Science. Since the literature behind our research question is still preliminary, a scoping review seemed the better approach to gather the existing and important papers related to our topic (Colquhoun et al., <xref ref-type="bibr" rid="B20">2014</xref>). A scoping review was preferred over a systematic review because the studied field is not yet clearly defined and its literature is still very limited (Munn et al., <xref ref-type="bibr" rid="B51">2018</xref>); a preliminary overview of relevant articles, which showcased this limited literature, confirmed that a scoping review offered the more exploratory approach needed. A scoping review allows us to collect and assess essential information from the emerging literature and gather it in one place to help advance future studies. We focused on two concepts: AI and REB. <xref ref-type="table" rid="T1">Table 1</xref> presents the search queries for each concept, which differ from one search engine to another. We sought to use general terms frequently employed in the literature to define both concepts. After validating the search strategy with a librarian, the resulting articles were imported into Covidence. The exclusion criteria, determining which studies were not eligible for the review, were: articles published before 2016, articles published in a language other than English or French, studies found in books, book chapters, or conferences, and studies that did not contain AI, REB, and research ethics.
The inclusion criteria, determining which studies were eligible for the review, were (as seen in <xref ref-type="table" rid="T2">Table 2</xref>): articles published between 2016 and 2021, articles published in English or French, studies published as a peer-reviewed article, commentary, editorial, review, or discussion paper, and studies containing AI, REB, and research ethics. We chose 2016 as the starting year of the review because it was a year of significant advances in AI that also raised widespread concern about its ethical implications (Mills, <xref ref-type="bibr" rid="B47">2016</xref>; Stone et al., <xref ref-type="bibr" rid="B62">2016</xref>; Greene et al., <xref ref-type="bibr" rid="B35">2019</xref>). Since AI is fast evolving, literature from recent years was used to capture the most emergent and recent results (Nittas et al., <xref ref-type="bibr" rid="B53">2023</xref>; Sukums et al., <xref ref-type="bibr" rid="B63">2023</xref>). <xref ref-type="fig" rid="F1">Figure 1</xref> presents our review flowchart, which follows the PRISMA guidelines (Moher et al., <xref ref-type="bibr" rid="B49">2009</xref>). The initial number of studies subject to review was 657. In the first step, two investigators screened all 657 articles by carefully reviewing their titles and abstracts against the inclusion and exclusion criteria, excluding 589 irrelevant studies and leaving 68. In the next step, two investigators performed a full-text reading of the studies assessed for eligibility. This full-text review excluded 40 studies (21 articles with no &#x0201C;research ethics&#x0201D; or &#x0201C;research ethics committee,&#x0201D; eight papers with no &#x0201C;REB,&#x0201D; &#x0201C;RE,&#x0201D; and &#x0201C;AI,&#x0201D; five articles with no &#x0201C;Artificial Intelligence,&#x0201D; five pieces that were not research papers, and one unavailable full text).
Each article was then analyzed with NVivo according to a set of themes aimed at answering the questions of the current topic (Braun and Victoria, <xref ref-type="bibr" rid="B13">2006</xref>). &#x0201C;REB&#x0201D; is used throughout the article as an umbrella term encompassing all the variations used to label research ethics boards in different countries.</p>
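The screening tallies reported above can be verified with a short sketch (counts are taken from the text; variable names are illustrative, not part of the review protocol):

```python
# Sanity-check of the PRISMA flow described above (counts from the text).
initial = 657                      # records identified across PubMed, Ovid, Web of Science
excluded_title_abstract = 589      # removed during title/abstract screening

full_text_assessed = initial - excluded_title_abstract  # studies read in full

# Breakdown of full-text exclusions reported in the text
full_text_exclusions = {
    "no 'research ethics' or 'research ethics committee'": 21,
    "no 'REB', 'RE', and 'AI'": 8,
    "no 'Artificial Intelligence'": 5,
    "not a research paper": 5,
    "full text unavailable": 1,
}

final_sample = full_text_assessed - sum(full_text_exclusions.values())

print(full_text_assessed)  # 68
print(final_sample)        # 28
```

The itemized exclusions (21 + 8 + 5 + 5 + 1 = 40) are consistent with the reported final sample of 28 studies.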
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Search strategy.</p></caption> 
<table frame="box" rules="all">
<thead>
<tr style="background-color:&#x00023;919498;color:&#x00023;ffffff">
<th valign="top" align="left"><bold>Concepts</bold></th>
<th valign="top" align="left"><bold>Terms</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><bold>AI</bold></td>
<td valign="top" align="left"><bold>PB</bold> <bold>&#x0003D;</bold> (&#x0201C;artificial intelligence&#x0201D; OR &#x0201C;AI&#x0201D; OR &#x0201C;ambient intelligence&#x0201D; OR &#x0201C;Machine Learning&#x0201D; OR &#x0201C;Deep Learning&#x0201D; OR &#x0201C;machine intelligence&#x0201D; OR &#x0201C;Natural Language Processing<sup>&#x0002A;</sup>&#x0201D; OR bot OR robot<sup>&#x0002A;</sup> or &#x0201C;computational intelligence&#x0201D; or &#x0201C;computer reasoning&#x0201D; or &#x0201C;computer vision system<sup>&#x0002A;</sup>&#x0201D;)) OR (&#x0201C;Artificial Intelligence&#x0201D; or &#x0201C;Machine Learning&#x0201D; [MeSH Terms]) <bold>EMB</bold> <bold>&#x0003D;</bold> exp Artificial Intelligence/ or (artificial intelligence or &#x0201C;AI&#x0201D; or ambient intelligence or Machine Learning or Deep Learning or machine intelligence or Natural Language Processing<sup>&#x0002A;</sup> or bot or robot<sup>&#x0002A;</sup> or computational intelligence or computer reasoning or computer vision system<sup>&#x0002A;</sup>).ab,kf,kw,ti. <bold>WoS</bold> <bold>&#x0003D;</bold> (&#x0201C;artificial intelligence&#x0201D; or &#x0201C;AI&#x0201D; or &#x0201C;ambient intelligence&#x0201D; or &#x0201C;Machine Learning&#x0201D; or &#x0201C;Deep Learning&#x0201D; or &#x0201C;machine intelligence&#x0201D; or &#x0201C;Natural Language Processing<sup>&#x0002A;</sup>&#x0201D; or bot or robot<sup>&#x0002A;</sup> or &#x0201C;computational intelligence&#x0201D; or &#x0201C;computer reasoning&#x0201D; or &#x0201C;computer vision system<sup>&#x0002A;</sup>&#x0201D;)</td>
</tr>
<tr>
<td valign="top" align="left"><bold>REB</bold></td>
<td valign="top" align="left"><bold>PB</bold> <bold>&#x0003D;</bold> (&#x0201C;research ethic<sup>&#x0002A;</sup>&#x0201D; or &#x0201C;responsible research&#x0201D; or &#x0201C;REB&#x0201D; or &#x0201C;IRBS&#x0201D; or &#x0201C;Institutional Review Board<sup>&#x0002A;</sup>&#x0201D; or &#x0201C;Ethical review board<sup>&#x0002A;</sup>&#x0201D; or &#x0201C;ERB&#x0201D; or ((&#x0201C;Ethics committee<sup>&#x0002A;</sup>&#x0201D; or &#x0201C;Ethic committee<sup>&#x0002A;</sup>&#x0201D;) adj2 (research<sup>&#x0002A;</sup> or independent)) OR (&#x0201C;Ethics&#x0201D; or &#x0201C;Research Ethics Committee&#x0201D; or Research<sup>&#x0002A;</sup>[MeSH Terms]) <bold>EMB</bold> <bold>&#x0003D;</bold> ethics committees/ or ethics committees, research/ or ethics, research/ or (research ethic<sup>&#x0002A;</sup> or responsible research or &#x0201C;REB&#x0201D; or &#x0201C;IRBS&#x0201D; or Institutional Review Board<sup>&#x0002A;</sup> or Ethical review board<sup>&#x0002A;</sup> or &#x0201C;ERB&#x0201D; or ((Ethics committee<sup>&#x0002A;</sup> or Ethic committee<sup>&#x0002A;</sup>) adj2 (research<sup>&#x0002A;</sup> or independent))).ab,kf,kw,ti. <bold>WoS</bold> <bold>&#x0003D;</bold> (&#x0201C;research ethic<sup>&#x0002A;</sup>&#x0201D; or &#x0201C;responsible research&#x0201D; or &#x0201C;REB&#x0201D; or &#x0201C;IRBS&#x0201D; or &#x0201C;Institutional Review Board<sup>&#x0002A;</sup>&#x0201D; or &#x0201C;Ethical review board<sup>&#x0002A;</sup>&#x0201D; or &#x0201C;ERB&#x0201D; or ((&#x0201C;Ethics committee<sup>&#x0002A;</sup>&#x0201D; or &#x0201C;Ethic committee<sup>&#x0002A;</sup>&#x0201D;) NEAR/2 (research<sup>&#x0002A;</sup> or independent)))</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p>PB, PubMed; EMB, Embase; WoS, Web of Science.</p>
</table-wrap-foot>
</table-wrap>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Selection criteria.</p></caption> 
<table frame="box" rules="all">
<thead>
<tr style="background-color:&#x00023;919498;color:&#x00023;ffffff">
<th/>
<th valign="top" align="left"><bold>Description</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Date</td>
<td valign="top" align="left">2016&#x02013;2021 (5 years)</td>
</tr> <tr>
<td valign="top" align="left">Language</td>
<td valign="top" align="left">English; French</td>
</tr> <tr>
<td valign="top" align="left">Type of publication</td>
<td valign="top" align="left">Peer-reviewed article, a commentary, an editorial, a review, or a discussion paper</td>
</tr>
<tr>
<td valign="top" align="left">Concepts</td>
<td valign="top" align="left">AI, REB, and research ethics</td>
</tr>
</tbody>
</table>
</table-wrap>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>PRISMA Flowchart. AI, Artificial intelligence; REB, Research ethics board; REC, Research ethics committees; IRB, Institutional review boards; RE, Research ethics.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frai-06-1149082-g0001.tif"/>
</fig>
</sec>
<sec id="s3">
<title>3. Results</title>
<p>The following section includes the results based on the thematic coding grid used to create the different sections relevant to our topic (see <xref ref-type="fig" rid="F2">Figure 2</xref>). The results come from our final sample of articles.</p>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Architecture that illustrates the article&#x00027;s results structure starting with the two main domains: <bold>(A)</bold> AI and research ethics and <bold>(B)</bold> research ethics boards.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frai-06-1149082-g0002.tif"/>
</fig>
<sec>
<title>3.1. AI and research ethics</title>
<p>Researchers face several ethical quandaries while navigating research projects. When working with human research participants, they are urged to safeguard those participants&#x00027; protection. However, it is not always simple to balance the common good (i.e., developing solutions for the wider population) and individual interests (i.e., research participants&#x00027; safety) (Ford et al., <xref ref-type="bibr" rid="B29">2020</xref>; Battistuzzi et al., <xref ref-type="bibr" rid="B8">2021</xref>). Researchers are responsible for anticipating and preventing risks of harm to participants while advancing scientific knowledge, which requires maintaining an adequate risk-benefit ratio (Sedenberg et al., <xref ref-type="bibr" rid="B59">2016</xref>; Ford et al., <xref ref-type="bibr" rid="B29">2020</xref>). With AI&#x00027;s fast growth, another set of issues is added to the existing ones: data governance, consent, responsibility, justice, transparency, privacy, safety, reliability, and more (Samuel and Derrick, <xref ref-type="bibr" rid="B57">2020</xref>; Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref>). This section describes views on current guidelines to regulate AI, key principles and ethical approaches, and the main issues. In the current climate, we expect continuity on the following concepts: responsibility, explainability, validity, transparency, informed consent, justice, privacy, data governance, benefit and risk assessment, and safety.</p>
<sec>
<title>3.1.1. Views on current guidelines</title>
<sec>
<title>3.1.1.1. Existent guidelines that can be used to regulate AI</title>
<p>Current normative guidelines do not compensate for the scarcity of AI-specific guidelines (Aymerich-Franch and Fosch-Villaronga, <xref ref-type="bibr" rid="B7">2020</xref>). However, in addition to the ethical standards used as a basis for AI use guidelines, the UN published a first set of guidelines to regulate AI (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). Many projects, like the Human Brain Project (HBP), have taken the initiative to encourage discussions among different parties to anticipate issues that could result from their research (Stahl and Coeckelbergh, <xref ref-type="bibr" rid="B60">2016</xref>; Aicardi et al., <xref ref-type="bibr" rid="B3">2018</xref>, <xref ref-type="bibr" rid="B2">2020</xref>). Researchers and developers can access tools that help orient their reflections on the responsible use of technology (Aymerich-Franch and Fosch-Villaronga, <xref ref-type="bibr" rid="B7">2020</xref>). Furthermore, implementing ethical approval committees (e.g., Human Research Ethics Committees in Australia) that use a soft-governance model, which leans toward ethical regulation and is less restrictive than legal regulation, would help prevent studies or companies from abusing their participants or users (Andreotta et al., <xref ref-type="bibr" rid="B5">2021</xref>). Many are contemplating using digital health ELSI (ethical, legal, and social implications) frameworks to encourage the implementation of ethical standards in AI where laws and regulations are lacking (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>).</p>
<p>Articles mentioned many leading countries in AI research. <xref ref-type="supplementary-material" rid="SM1">Supplementary Table 1</xref> showcases the progress and effort the European Union (EU) and other countries have made regarding AI regulation. The jurisdictions most often mentioned throughout our final sample were Australia, Canada, China, the European Union, the United Kingdom, and the United States. Because this information comes strictly from our selected articles, some details were unavailable. While noticeable progress is being made in AI development and regulation, most countries have shown little indication, if any, of addressing AI research ethics.</p>
</sec>
<sec>
<title>3.1.1.2. Moral status and rights</title>
<p>While guidelines and norms are shifting to fit AI standards, many questions on moral status and rights are raised in adapting to this new reality. Authors argue that we cannot assign moral agency to AI and robots, for multiple reasons: robots do not seem capable of solving problems ethically (Stahl and Coeckelbergh, <xref ref-type="bibr" rid="B60">2016</xref>), AI lacks the ability to explain its generated results, and it shows no willingness to choose (Farisco et al., <xref ref-type="bibr" rid="B28">2020</xref>), all of which might impact decision-making in research ethics.</p>
<p>Rights are attributed to different living entities. For instance, in the EU, the law protects animals as sentient living organisms and unique tangible goods. Their legal status also obliges researchers not to harm animals during research projects, prompting the question of what status and rights we should assign to AIS or robots (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). Indeed, Miller pointed out that having a machine at one&#x00027;s disposal raises questions about human-machine relationships and the hierarchical power they might induce (Miller, <xref ref-type="bibr" rid="B46">2020</xref>).</p>
</sec>
</sec>
<sec>
<title>3.1.2. Key principles and norms of AI systems in research ethics</title>
<p>The lexicon and language used in the literature invoke both classical theories and the contextualization of AI ethics benchmarks within the practices and ethos of research ethics.</p>
<sec>
<title>3.1.2.1. Ethical approaches in terms of AI research ethics</title>
<p>The literature invoked the following classic theories: the Kantian-inspired model, utilitarianism, principlism (autonomy, beneficence, justice, and non-maleficence), and the precautionary principle. <xref ref-type="table" rid="T3">Table 3</xref> illustrates these essential ethical approaches found in our final sample, along with their description in terms of AI research ethics.</p>
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Critical key ethical approaches that were raised in the present scoping review and their description in terms of AI research ethics.</p></caption> 
<table frame="box" rules="all">
<thead>
<tr style="background-color:&#x00023;919498;color:&#x00023;ffffff">
<th valign="top" align="left"><bold>Key ethical approaches</bold></th>
<th valign="top" align="left"><bold>Description in terms of AI research ethics</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left"><bold>Kantian-inspired model</bold></td>
<td valign="top" align="left">The Kantian approach demands that researchers act responsibly during research (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). The same standard should apply to ensure responsible AI. <italic>Ex: AI developers must ensure that their system is adequate and will not cause harm to society. Researchers must use AI systems responsibly during their projects</italic>.</td>
</tr> <tr>
<td valign="top" align="left"><bold>Utilitarianism</bold></td>
<td valign="top" align="left">The utilitarian approach focuses on consequences and the best outcome for the most people. It is invoked in the dilemma of using machine learning algorithms to advance science while maintaining participants&#x00027; privacy (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). <italic>Ex: AI systems should serve the wellbeing of participants and other individuals over their usage for scientific progress</italic>.</td>
</tr> <tr>
<td valign="top" align="left"><bold>Principlism</bold></td>
<td valign="top" align="left">Principlism is an approach that underlines principles such as autonomy, beneficence, non-maleficence, and justice invoked in issues raised while developing and using machine learning (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>).</td>
</tr> <tr>
<td valign="top" align="left">Autonomy</td>
<td valign="top" align="left">Participants&#x00027; autonomy suggests they can consent to their own will when participating in a research project using AI (Grote, <xref ref-type="bibr" rid="B36">2021</xref>). <italic>Ex: Many concerns are raised about the eventuality that AI becomes fully autonomous, which takes away our control over them</italic> (Aicardi et al., <xref ref-type="bibr" rid="B2">2020</xref>). <italic>Some may even say they should be granted moral autonomy</italic> (Farisco et al., <xref ref-type="bibr" rid="B28">2020</xref>). <italic>Although, for now, AI mainly relies on humans, whether the users, employers, or programs which then brings up the notion of responsibility</italic> (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). <italic>While it may not be autonomous, its purpose is to assist humans, which could negatively impact our autonomy</italic> (McCradden et al., <xref ref-type="bibr" rid="B42">2020c</xref>).</td>
</tr> <tr>
<td valign="top" align="left">Beneficence</td>
<td valign="top" align="left">AI is more efficient at specific tasks than humans, bringing better results for those involved (Grote, <xref ref-type="bibr" rid="B36">2021</xref>). <italic>Ex: One of AI&#x00027;s benefits is that it can generate more precise and accurate results</italic> (Ienca and Ignatiadis, <xref ref-type="bibr" rid="B38">2020</xref>). <italic>AI can also search data more efficiently and make predictions</italic> (Andreotta et al., <xref ref-type="bibr" rid="B5">2021</xref>; Grote, <xref ref-type="bibr" rid="B36">2021</xref>). <italic>Furthermore, robots can assist humans by relieving them of specific tasks</italic> (Battistuzzi et al., <xref ref-type="bibr" rid="B8">2021</xref>).</td>
</tr> <tr>
<td valign="top" align="left">Justice</td>
<td valign="top" align="left">AI&#x00027;s use should be done in a way that does not put people at a disadvantage (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>). <italic>Ex: Data bias can result from the under-representation of minority groups which may lead to algorithmic discrimination disadvantaging the groups in question in receiving the proper treatment of care</italic> (Ienca and Ignatiadis, <xref ref-type="bibr" rid="B38">2020</xref>; Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>; Grote, <xref ref-type="bibr" rid="B36">2021</xref>; Li et al., <xref ref-type="bibr" rid="B40">2021</xref>).</td>
</tr> <tr>
<td valign="top" align="left">Non-maleficence</td>
<td valign="top" align="left"><italic>AI must distinguish right from wrong to ensure non-maleficence</italic> (Farisco et al., <xref ref-type="bibr" rid="B28">2020</xref>). <italic>Ex: Robots should not cause harm</italic> (Stahl and Coeckelbergh, <xref ref-type="bibr" rid="B60">2016</xref>).</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Precautionary principle</bold></td>
<td valign="top" align="left">The precautionary principle in AI may serve as a guiding framework to encourage responsible AI research and development, prioritizing the protection of individuals, society, and the environment from potential negative impacts of AI systems (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). <italic>Ex: AI developers should consider societal needs and ensure that potential risks are addressed from the outset of product conception. Governments should put regulations in place to prevent future AI-related harm</italic>.</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec>
<title>3.1.2.2. Responsibility in AI research ethics</title>
<p>Public education and ethical training could help governments spread awareness and sensitize people regarding research ethics in AI (Cath et al., <xref ref-type="bibr" rid="B17">2018</xref>). Accountability for AI regulation and decision-making should not fall strictly into stakeholders&#x00027; hands but should also rest on solid legal grounds (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). Digital mental health apps and other institutions will now be attributed responsibilities that have usually been ascribed to the professionals or researchers using the technology (i.e., decision-making, providing users with enough tools to understand and use products, being able to help when needed, etc.) (Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref>). Scientists and AI developers must not throw caution to the wind regarding the possibility that AI models could be fed biased algorithms or data (Ienca and Ignatiadis, <xref ref-type="bibr" rid="B38">2020</xref>). Clinicians will have to tactfully inform patients of the results generated by machine learning (ML) models while considering their risk of error and bias (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). Attributing responsibility to specific actors remains vague; however, it is necessary to have different groups work together to tackle the problem (Meszaros and Ho, <xref ref-type="bibr" rid="B45">2021</xref>; Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>). Some consider validity, explainability, and open-source AI systems to be among the defining points that lead to responsibility. As these technologies advance and gain interest, the sense of social responsibility also increases. Indeed, every actor must contribute to making sure that these novel technologies are developed and used in an ethical manner (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>; Aicardi et al., <xref ref-type="bibr" rid="B2">2020</xref>).</p>
</sec>
<sec>
<title>3.1.2.3. Explainability and validity</title>
<p>A frequently raised issue with AI systems (AIS) is the explainability of results. Deep learning (DL) is a type of ML whose more extensive algorithms process data with a broader array of possible interpretations (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). This makes it harder to explain how DL and AI models reached a particular conclusion (Ienca and Ignatiadis, <xref ref-type="bibr" rid="B38">2020</xref>; Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). This poses transparency issues that are challenging for participants (Grote, <xref ref-type="bibr" rid="B36">2021</xref>).</p>
<p>Since AI is known for its &#x02018;black-box&#x02019; aspect, where results are difficult to justify, it is difficult to fully validate a model with certainty (Ienca and Ignatiadis, <xref ref-type="bibr" rid="B38">2020</xref>). Closely monitoring research participants could help validate a model and, in theory, yield more accurate results. However, close monitoring could also have the opposite effect by influencing participants&#x00027; behavior depending on whether they mind being monitored, thereby producing less accurate results (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). Furthermore, promoting validity could be more challenging in contexts where journals and funding bodies favor new and innovative studies over ethical research on AI, even if the latter is being promoted (Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>).</p>
</sec>
<sec>
<title>3.1.2.4. Transparency and informed consent</title>
<p>According to the White House Office of Science and Technology Policy (OSTP), transparency would help solve many ethical issues (Cath et al., <xref ref-type="bibr" rid="B17">2018</xref>). Transparency allows research participants to be aware of a study&#x00027;s different outlooks and to comprehend them (Sedenberg et al., <xref ref-type="bibr" rid="B59">2016</xref>; Grote, <xref ref-type="bibr" rid="B36">2021</xref>). The same goes for users of new devices (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). AI models (i.e., products, services, apps, sensor-equipped wearable systems, etc.) produce a great deal of data that does not always come from consenting users (Ienca and Ignatiadis, <xref ref-type="bibr" rid="B38">2020</xref>; Meszaros and Ho, <xref ref-type="bibr" rid="B45">2021</xref>). Furthermore, AI&#x00027;s black box poses a challenge to obtaining informed consent, since the lack of explainability of AI-generated results might not give participants enough information to provide informed consent (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>; Andreotta et al., <xref ref-type="bibr" rid="B5">2021</xref>). Thus, it is essential to make consent forms easy to understand for the targeted audience (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>).</p>
<p>However, the requirement to obtain informed consent could have other, less desirable implications. Some argue that requiring authorization for all data, especially in studies that hold vast datasets, might lead to data bias and a decrease in data quality, because it only entices a specific group of people to give consent, leaving out a significant part of the population (Ford et al., <xref ref-type="bibr" rid="B29">2020</xref>).</p>
</sec>
<sec>
<title>3.1.2.5. Privacy</title>
<p>While the levels of privacy invoked differ from one scholar to another, the concept of privacy remains a fundamental human value (Andreotta et al., <xref ref-type="bibr" rid="B5">2021</xref>). Through AI and robotics, data can be seen as an attractive commodity, which could compromise privacy (Cath et al., <xref ref-type="bibr" rid="B17">2018</xref>). Researchers are responsible for keeping participants unidentifiable while using their data (Ford et al., <xref ref-type="bibr" rid="B29">2020</xref>). However, combining data collected from many sources can increase the risk of identifying people. While pursuing their studies, ML researchers still struggle to comply with privacy guidelines.</p>
<sec>
<title>3.1.2.5.1. Data protection</title>
<p>According to one study, most people do not think data protection is an issue. One possible explanation is that people might not fully grasp the magnitude of its impact (Coeckelbergh et al., <xref ref-type="bibr" rid="B19">2016</xref>). Yet the effects could be very harmful for some people. For instance, data found about a person could decrease their chances of employment or even of getting insurance (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). Rather than focusing on data minimization, data protection should be prioritized so that ML models get the most relevant data, ensuring data quality while maintaining privacy (McCradden et al., <xref ref-type="bibr" rid="B44">2020b</xref>). Another point worth mentioning is that the GDPR allows the reuse of personal data for research purposes, which might allow companies wishing to pursue commercial research to bypass certain ethical requirements (Meszaros and Ho, <xref ref-type="bibr" rid="B45">2021</xref>).</p>
</sec>
<sec>
<title>3.1.2.5.2. Privacy vs. science advancement dilemmas</title>
<p>Some technology-based studies face a dichotomy between safeguarding participants&#x00027; data and making scientific advancements. This does not always come easily since ensuring privacy can compromise data quality, while studies with more accurate data usually lead to riskier privacy settings (Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref>). Indeed, with new data collection methods in public and digital environments, consent and transparency might be overlooked for better research results (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>).</p>
</sec>
</sec>
</sec>
<sec>
<title>3.1.3. Key issues of the current state of AI-specific RE guidelines</title>
<p>Many difficulties have arisen with the rapid evolution of AI. There has been a gap between research ethics and AI research, inconsistent standards regarding AI regulation and guidelines, and a widely noted lack of knowledge and training in these new technologies. Medical researchers are more familiar with research ethics than computer science researchers and technologists (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>; Ford et al., <xref ref-type="bibr" rid="B29">2020</xref>). This shows a disparity in knowledge between different fields.</p>
<p>With new technologies comes the difficulty of assessing them (Aicardi et al., <xref ref-type="bibr" rid="B3">2018</xref>; Aymerich-Franch and Fosch-Villaronga, <xref ref-type="bibr" rid="B7">2020</xref>; Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). Research helps follow AI&#x00027;s progress and ensure it advances responsibly and ethically (Cath et al., <xref ref-type="bibr" rid="B17">2018</xref>). Unfortunately, applied and research ethics are not always in sync (Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref>). AI standards mostly rely on ethical values rather than concrete normative and legal regulations, which have become insufficient (Samuel and Derrick, <xref ref-type="bibr" rid="B57">2020</xref>; Meszaros and Ho, <xref ref-type="bibr" rid="B45">2021</xref>). The societal aspects of AI are discussed more among researchers than the ethics of the research itself (Samuel and Derrick, <xref ref-type="bibr" rid="B57">2020</xref>; Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>).</p>
<p>Many countries have taken the initiative to regulate AI using ethical standards. However, guidelines vary from one region to another. It has become a strenuous task to establish a consensus on strategies, turn principles into laws, and make them practical (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). Nor is it only countries that hold differing points of view; journals do as well. Indeed, the validation required to publish an AI research project could differ from one journal to another (Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>). Even though ethical, legal, and social implications (ELSI) are used to help oversee AI, regulations and AI-specific guidelines remain scarce (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>).</p>
</sec>
<sec>
<title>3.1.4. When research ethics guidelines are applied to AI</title>
<p>While ethics approval is usually emphasized for research projects, some projects are not required to follow ethics guidelines. In the United Kingdom, some research projects do not require ethics approval (i.e., social media data, geolocation data, anonymous secondary health data with an agreement) (Samuel and Derrick, <xref ref-type="bibr" rid="B57">2020</xref>). One study highlighted that most of the papers it gathered that used available social media data lacked ethics approval (Ford et al., <xref ref-type="bibr" rid="B29">2020</xref>). Some technology-based research projects ask participants for consent but skip requesting ethics approval from a committee (Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref>). Some non-clinical research projects are exempt from ethics evaluation (Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>). Nor do tools always undergo robust testing before validation (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>). Of course, ethical evaluation remains essential in multiple other settings: when minors or people lacking the capacity to make an informed decision are involved, when users are recognizable, when researchers seek users&#x00027; data directly (Ford et al., <xref ref-type="bibr" rid="B29">2020</xref>), when clinical data or applications are used (Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>), etc.</p>
</sec>
</sec>
<sec>
<title>3.2. Research ethics board</title>
<p>Historically, REBs have focused on protecting human participants in research (e.g., therapeutic, nursing, psychological, or social research) by complying with the requirements of funding or federal agencies such as the NIH or FDA (Durand, <xref ref-type="bibr" rid="B25">2005</xref>). This approach has continued, and in many countries, REBs are fundamentally essential to ensure that research involving human participants is conducted in compliance with ethics guidelines and national and international regulations.</p>
<sec>
<title>3.2.1. Roles of REB</title>
<p>The primary goal of REBs is to review and oversee research to provide the necessary protection for research participants. REBs consist of groups of experts and stakeholders (clinicians, scientists, community members) who review research protocols with an eye toward ethical concerns. They ensure that protocols comply with regulatory guidelines and can withhold approval until such matters have been addressed. They were also designed to play an anticipatory role, predicting what risks might arise within research and ironing out ethical issues before they appeared (Friesen et al., <xref ref-type="bibr" rid="B30">2021</xref>). Accordingly, REBs aim to assess whether the proposed research project meets specific ethical standards regarding the foreseeable impacts on human subjects. However, REBs are less concerned with the broader consequences of research and its downstream applications. Instead, they focus on the direct effects on human subjects during or after the research process (Prunkl et al., <xref ref-type="bibr" rid="B56">2021</xref>). Within their established jurisdiction, REBs can develop a review process independently. Considering the specific context of AI research, REBs would aim to mitigate the risks of potential harm possibly caused by technology. This could be done by reviewing scientific questions relating to the origin and quality of the data, algorithms, and artificial intelligence; confirming the validation steps conducted to ensure the prediction models work; and requesting further validation if required (Samuel and Derrick, <xref ref-type="bibr" rid="B57">2020</xref>).</p>
</sec>
<sec>
<title>3.2.2. Scope and approaches</title>
<p>AI technologies are rapidly changing health research; these mutations might lead to significant gaps in REB oversight. Some authors who have analyzed these challenges suggest an adaptive scope and approach. To achieve an AI-appropriate research ethics review, it is necessary to clearly define the thresholds and characteristics of cardinal research ethics considerations, including what constitutes a &#x0201C;<italic>human participant</italic>, what is a <italic>treatment</italic>, what is a <italic>benefit</italic>, what is a <italic>risk</italic>, what is considered <italic>publicly available information</italic>, what is considered an <italic>intervention in the public domain</italic>, what is <italic>medical data</italic>, but also what is <italic>AI research&#x0201D;</italic> (Friesen et al., <xref ref-type="bibr" rid="B30">2021</xref>).</p>
<p>There is an urgent need to tailor ethics review to the technology and to its development, evaluation, and use contexts (i.e., digital mental health) (Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref>). Health research involving AI features requires intersectoral and interdisciplinary participatory efforts to develop dynamic, adaptive, and relevant normative guidance. It also requires practice navigating the ethical, legal, and social complexities of patient data collection, sharing, analysis, interpretation, and transfer for decision-making in a natural context (Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref>). These studies also imply multi-stakeholder participation (such as regulatory actors, education, and social media).</p>
<p>This diversity of actors seems to be a key aspect in this case. Still, it requires transparent, inclusive, and transferable normative guidance and norms to ensure that all parties understand each other and meet the normative demands of research ethics. Furthermore, bringing together diverse stakeholders and experts is worthwhile, especially when the impact of research can be significant, difficult to foresee, and unlikely to be understood by any single expert, as with AI-driven medical research (Friesen et al., <xref ref-type="bibr" rid="B30">2021</xref>). To this end, several factors are beneficial in promoting cooperation between academic research and industry: inter-organizational trust, collaboration experience, and the breadth of interaction channels. Partnership strategies like collaborative research, knowledge transfer, and research support may be essential to encourage this in much broader terms than strict technology transfer (Aicardi et al., <xref ref-type="bibr" rid="B2">2020</xref>).</p>
</sec>
<sec>
<title>3.2.3. AI research ethics, practices, and governance oversight</title>
<p>According to the results of our review, REBs must assess the following seven key considerations during AI research ethics review: (1) informed consent, (2) benefit-risk assessment, (3) safety and security, (4) validity and effectiveness, (5) user-centric approach and design, (6) transparency, and (7) privacy and confidentiality. In the literature, some authors have pointed out specific questions and considerations REBs should be aware of. The following <xref ref-type="table" rid="T4">Table 4</xref> reports the main highlights REBs might rely on.</p>
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p>Main highlights for the reviewed body of literature (divided by key salient ethical considerations).</p></caption> 
<table frame="box" rules="all">
<thead>
<tr style="background-color:&#x00023;919498;color:&#x00023;ffffff">
<th valign="top" align="left"><bold>Concepts</bold></th>
<th valign="top" align="left"><bold>Identified issues</bold></th>
<th valign="top" align="left"><bold>Key reviewed articles on this issue</bold></th>
<th valign="top" align="left"><bold>Key insights and best practices for supporting research ethics stakeholders</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Scope and approaches</td>
<td valign="top" align="left">Intersectoral and interdisciplinary participatory efforts are needed to develop dynamic, adaptive, and relevant normative guidance and practices</td>
<td valign="top" align="left">Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref></td>
<td valign="top" align="left">Work up new ways of collaboration for REBs.</td>
</tr> <tr>
<td valign="top" align="left">Diversity and fair representation</td>
<td valign="top" align="left">Concerns regarding inclusion are often found in RCTs.</td>
<td valign="top" align="left">Grote, <xref ref-type="bibr" rid="B36">2021</xref></td>
<td valign="top" align="left">Relatively little data is found for other types of research. Does not seem to consider research projects with retrospective data.</td>
</tr> <tr>
<td valign="top" align="left">Biases toward vulnerable population</td>
<td valign="top" align="left">AI systems are either fed with actual biased data or generate biased results.</td>
<td valign="top" align="left">Cath et al., <xref ref-type="bibr" rid="B17">2018</xref>; Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>; Grote, <xref ref-type="bibr" rid="B36">2021</xref></td>
<td valign="top" align="left">If the algorithms are biased or not representative of the general population, results could exclude minorities and, thus, be harmful.</td>
</tr> <tr>
<td valign="top" align="left">Informed consent</td>
<td valign="top" align="left">Transparency and accessibility of relevant information could help participants better understand a situation, allowing them to make a conscious decision. For informed consent, the focus should be on the impacts and risks that arise from interventions using AI and ML. There is a dilemma between giving participants all the information and giving them only the relevant information they need.</td>
<td valign="top" align="left">Sedenberg et al., <xref ref-type="bibr" rid="B59">2016</xref>; Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>; Grote, <xref ref-type="bibr" rid="B36">2021</xref><break/> Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref><break/> Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref></td>
<td valign="top" align="left">The issue of informed consent raises concerns about the following: the nature of the information that may be disclosed in the consent while the model is still at a preliminary stage of development; what risks can be revealed when the impacts of the technology are unknown; the possibility of harming participants if incomplete or unreliable information is disclosed; the quality of consent and its scope in a complex and rapidly evolving technological field; the need to develop new tools for consent; and the limits on revoking consent compared to other types of research. REBs must make sure that researchers give intelligible information to participants.</td>
</tr> <tr>
<td valign="top" align="left">Benefit risks assessment</td>
<td valign="top" align="left"><italic>One trigger point is to establish whether the involvement of AI in an RCT improves the standard of care. Many factors need to be considered to justify conducting AI RCTs relative to the risks imposed on research participants in studies. Fair subject selection and the equal distribution of risks and benefits across different populations must also be considered, as must determining the risk threshold. Requirements for data monitoring and for managing the risks of the intervention (full assessment, based only on study data, etc.) need to be set. The management of passively collected data (e.g., the content of text messages) by predictive algorithms is still under development</italic>.</td>
<td valign="top" align="left">Grote, <xref ref-type="bibr" rid="B36">2021</xref><break/> Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref></td>
<td valign="top" align="left">REBs should clearly define how AI may reduce the trial burden and improve the benefit-risk ratio in a research project. REBs should ensure that participants belonging to a minority population are not exposed to higher risk. REBs should identify and recommend appropriate measures to mitigate the specific risks embedded in the use of AI in RCTs.</td>
</tr> <tr>
<td valign="top" align="left">Safety and security<break/> (End user-centered)</td>
<td valign="top" align="left">The research project and technologies used should not pose any harm to participants. These issues should be evaluated from users&#x00027; perspectives, and the assessment should consider the real-world context. REBs&#x00027; lack of understanding of AI models makes assessing their impacts on safety difficult. Measures should be established to counteract negative impacts. Anticipate the implications of AI use (human protection, legal acts, etc.).</td>
<td valign="top" align="left">Coeckelbergh et al., <xref ref-type="bibr" rid="B19">2016</xref><break/> Coeckelbergh et al., <xref ref-type="bibr" rid="B19">2016</xref><break/> Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref><break/> Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref></td>
<td valign="top" align="left">Adequate risk mitigation for a new technology implies that the REB has a good knowledge of the technology and its impacts. REBs and researchers should identify the adverse effects of AI systems and the consequences that may harm participants, and identify mechanisms to repair potential harms. The possibility that the harm is physical or moral can be an issue.</td>
</tr> <tr>
<td valign="top" align="left">Transparency</td>
<td valign="top" align="left">This concept will become a challenge with AI&#x00027;s black box, making it difficult to explain each result generated.</td>
<td valign="top" align="left">Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>; Andreotta et al., <xref ref-type="bibr" rid="B5">2021</xref></td>
<td valign="top" align="left">Ensure that the research project is explained in a way that is understandable to participants.</td>
</tr> <tr>
<td valign="top" align="left">Privacy and confidentiality</td>
<td valign="top" align="left">Confusion between governance and confidentiality protection mechanisms. Greater emphasis on governance to the detriment of specific considerations for confidentiality or other ethical issues. Scientific advancement and data quality could impact individuals&#x00027; privacy by collecting data extensively and transferring them.</td>
<td valign="top" align="left">Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref><break/> Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref><break/> Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>; Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref></td>
<td valign="top" align="left">Pay more attention to algorithm and software development, broadening the analysis and ethical evaluation toward considerations of privacy and confidentiality. Question the limits of current anonymization techniques given the use of AI systems. Question the new harms that may result from breaches of privacy and violations of confidentiality. Develop mechanisms to prevent, limit, and, if necessary, repair the damage resulting from these new potential breaches of privacy and confidentiality.</td>
</tr>
<tr>
<td valign="top" align="left">Justice, equity, and fairness</td>
<td valign="top" align="left">Standard of fair representation</td>
<td valign="top" align="left">Grote, <xref ref-type="bibr" rid="B36">2021</xref></td>
<td valign="top" align="left">Results do not mention the distribution of the research benefits of these technologies. To address this issue, REBs should: focus more on these issues to rebalance an approach that is currently more centered on governance; and question AI systems and their potential to reduce inequalities and strengthen health equity. Access to research benefits should be investigated to ensure a return of individual results. Transmitting general results to the community remains a challenge.</td>
</tr>
<tr>
<td valign="top" align="left">Validity and effectiveness</td>
<td valign="top" align="left">Consensus is needed to appreciate the normative implications of AI technologies, covering both technology development and the application of technology in real-time conditions. Understanding the black box remains a challenging aspect.</td>
<td valign="top" align="left">McCradden et al., <xref ref-type="bibr" rid="B42">2020c</xref><break/> Ienca and Ignatiadis, <xref ref-type="bibr" rid="B38">2020</xref></td>
<td valign="top" align="left">REBs do not currently have an effective method to evaluate the validity of results generated by AI. REBs need the right tools to ensure that the expected aims of AI systems are achievable. The development of the system should meet the concrete needs of the populations targeted by the technology. In a real situation, the potential for transforming practice and the care offered must be ensured. Adaptations can be complex; in practice, modifications to the protocol are more difficult given the nature of AI.</td>
</tr>
</tbody>
</table>
</table-wrap>
<sec>
<title>3.2.3.1. Informed consent</title>
<p>Some authors argue that the priority might be to consider whether predictions from a specific machine learning model are appropriate for informing decisions about a particular intervention (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). Others advocate carefully constructing the planned interventions so research participants can understand them (Grote, <xref ref-type="bibr" rid="B36">2021</xref>).</p>
<p>How much information researchers should provide to participants is not evident among stakeholders. So far, research suggests that there is no clear consensus among patients on whether they would want to know this kind of information about themselves (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). Hence, the question remains whether patients want to know if they are at risk, particularly if they cannot be told why, since factors included in machine learning models generally cannot be interpreted as having a causal impact on outcomes (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). Therefore, sharing information from an uninterpretable model may adversely affect a patient&#x00027;s perception of their illness, confuse them, and raise immediate concerns about transparency.</p>
</sec>
<sec>
<title>3.2.3.2. Benefits/risks assessment</title>
<p>The analysis of harms and potential benefits is critical when assessing human research. REBs are centrally concerned with this assessment, aiming to prevent unnecessary risks and promote benefits. Considerations of the potential benefits and harms to patient-participants are necessary for future clinical research, and REBs are optimally positioned to perform this assessment (McCradden et al., <xref ref-type="bibr" rid="B42">2020c</xref>). Additional considerations, such as the benefit/risk ratio or effectiveness, and the systematic process described previously are necessary. Risk assessments could have a considerable impact in research involving mobile devices or robotics, because preventive action and safety measures may be required in the case of imminent risks. Thus, REB risk assessment seems very important (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>).</p>
<p>Approaching AI research ethics through user-centered design can be an interesting avenue for better understanding how REBs can conduct risk/benefit assessments. For researchers, involving users in the design of AI research is likely to promote better research outcomes. This can be achieved by investigating how AI research actually meets users&#x00027; needs and how it may generate intended and unintended impacts on them (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>; Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref>). Indeed, there is insufficient reason to believe that AI research will produce positive benefits unless it is evaluated with a focus on patients and situated in the context of clinical decision-making (McCradden et al., <xref ref-type="bibr" rid="B42">2020c</xref>). Consequently, REBs might also focus on the broader societal impact of this research (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>).</p>
</sec>
<sec>
<title>3.2.3.3. Safety and security</title>
<p>Safety and security are significant concerns for AI and robotics, and their assessment may rely on end-users&#x00027; perspectives. To address the safety issue, it is not sufficient for robotics researchers to declare their robot safe based on literature and experimental tests. It is crucial to find out about the perceptions and opinions of robots&#x00027; end-users and other stakeholders (Coeckelbergh et al., <xref ref-type="bibr" rid="B19">2016</xref>). Testing technology in real-life scenarios is vital for identifying and adequately assessing technology&#x00027;s risks, anticipating unforeseen problems, and clarifying effective monitoring mechanisms (Cath et al., <xref ref-type="bibr" rid="B17">2018</xref>). There is also a potential risk that an AIS misleads the user into performing a legal act.</p>
</sec>
<sec>
<title>3.2.3.4. Validity and effectiveness</title>
<p>Validity is a crucial consideration, and one on which there is consensus, for appreciating the normative implications of AI technologies. To this end, research ethics requires that researchers&#x00027; protocols be explicit about many elements and describe their validation model and performance metrics in a way that allows for assessment of the clinical applicability of the technology under development (McCradden et al., <xref ref-type="bibr" rid="B44">2020b</xref>). In addition, in terms of validity, simulative models have yet to be appropriately compared with standard medical research models (including <italic>in vitro, in vivo</italic>, and clinical models) to ensure they are correctly validated and effective (Ienca and Ignatiadis, <xref ref-type="bibr" rid="B38">2020</xref>). Given the many red flags raised in recent years, AI systems may not work equally well across all sub-populations (racial, ethnic, etc.). Therefore, AI systems must be validated for different subpopulations of patients (McCradden et al., <xref ref-type="bibr" rid="B44">2020b</xref>).</p>
<p>Demonstration of value is essential to ensure the scientific validity of the claims made for technology but also to attest to the proven effectiveness once deployed in a real-world setting and the social utility of a technology (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>). When conducting a trial for the given AI system, the main interest should be to assess its overall reliability, while the interaction with the clinician might be less critical (Grote, <xref ref-type="bibr" rid="B36">2021</xref>).</p>
</sec>
<sec>
<title>3.2.3.5. Transparency</title>
<p>Transparency entails understanding how technology behaves and establishing thresholds for permissible (and impermissible) usages of AI-driven health research. Transparency requires clarifying the reasons and rationales for the technology&#x00027;s design, operation, and impacts (Friesen et al., <xref ref-type="bibr" rid="B30">2021</xref>). Identified risks should be accompanied by detailed measures intended to avoid, reduce, or eliminate the risks. The efficiency of such efforts should be assessed upstream and downstream as part of the quality management process. As far as possible, testing methods, data, and assessment results should be public. Transparent communication is essential to make research participants, as well as future users aware of the technology&#x00027;s logic and functioning (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>).</p>
<p>The implications presented in <xref ref-type="table" rid="T4">Table 4</xref> seem to encourage REBs to adopt a more collaborative approach to gain a better sense of realities in different fields. The analysis also showed that data bias is a flagrant problem whether or not AI is used, and that this discriminatory component should be addressed to avoid AI amplifying the problem. Informed consent is another value that REBs prioritize and that will have to be adapted to AI, because new information might have to be disclosed to participants. Safety and security are always essential to consider; however, additional measures will need to be implemented with AI to ensure that participants are not put in danger. One of the main aspects of AI is data sharing and the risk that it might breach participants&#x00027; privacy; the methods currently in place might not be suited to AI&#x00027;s fast evolution. The questions of justice, equality, and fairness that have not been resolved in our current society will also have to be investigated in the AI era. Finally, the importance of validity was raised numerous times; unfortunately, REBs do not have the right tools to evaluate AI. It will be necessary for AI to meet the population&#x00027;s needs. Furthermore, definitions of specific values and principles that REBs usually respond to will have to be reviewed and adapted to AI.</p>
</sec>
</sec>
<sec>
<title>3.2.4. Limitations and challenges</title>
<p>Our results point to several discrepancies between the critical considerations for AI research ethics and REB review of health research and AI/ML data.</p>
<sec>
<title>3.2.4.1. Consent forms</title>
<p>According to our review, there is a disproportionate focus on consent before other ethical issues. Authors argue that the bulk of what REBs ask about concerns consent, not the AI aspect of the project. This finding suggests that narrowing AI research ethics to consent concerns remains problematic. In some instances, the disproportionate focus on consent, along with the importance REBs place on consent forms and participant information sheets, has shaped how research ethics is defined, e.g., viewed as a proxy for ethics best practice or, in some cases, as an ethics panacea (Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>).</p>
</sec>
<sec>
<title>3.2.4.2. Safety, security, and validity</title>
<p>Authors report a lack of knowledge for safety review. It appears clear that REBs may not have the experience or expertise to conduct a risk assessment to evaluate the probability or magnitude of potential harm. Similarly, the training data used to inform the algorithm development are often not considered to qualify as human subjects research, which &#x02013; even in a regulated environment &#x02013; makes a prospective review for safety potentially unavailable (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>).</p>
<p>On the other hand, REBs lack appropriate processes for assessing whether AI systems are valid, effective, and apposite. The requirement to evaluate the evidence of effectiveness adds to a range of other considerations with which REBs must deal (i.e., the protection of participants and fairness in the distribution of benefits and burdens). Therefore, there is still much to be done to equip REBs to evaluate the effectiveness of AI technologies, interventions, and research (Friesen et al., <xref ref-type="bibr" rid="B30">2021</xref>).</p>
</sec>
<sec>
<title>3.2.4.3. Privacy and confidentiality</title>
<p>Researchers point to a disproportionate focus on data privacy and governance before other ethical issues in medical health research with AI tools. This focus on privacy and data governance warrants further attention, as privacy issues may overshadow other issues. Indeed, it seems problematic and has led to a narrowing of ethics and responsibility debates perpetuated throughout the ethics ecosystem, often at the expense of other ethical issues, such as questions of justice and fairness (Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>). REBs appear to be less concerned about the results themselves. One researcher explained that when reviewing their AI-associated research ethics applications, REBs focus more on questions of data privacy than on other ethical issues, such as those related to the research and the research findings. Others painted a similar picture of how data governance issues were a central focus when discussing their interactions with their REB. According to these stakeholders, REBs focus less on the actual algorithm than on how the data are handled, and the issue remains one of data access rather than of the software (Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>).</p>
</sec>
<sec>
<title>3.2.4.4. Governance, oversight, and process</title>
<p>Lack of expertise appears to be a significant concern in our results. Indeed, even when there is oversight from a research ethics committee, authors observe that REB members often lack the experience or confidence regarding particular issues associated with digital research (Samuel and Derrick, <xref ref-type="bibr" rid="B57">2020</xref>).</p>
<p>Some authors advocate that ML researchers should complement the membership of REBs, since they are better situated to evaluate the specific risks and potential unintended harms linked to the methodology of ML. On the other hand, REBs should be empowered to fulfill their role in protecting the interests of patients and participants and to enable the ethical translation of healthcare ML (McCradden et al., <xref ref-type="bibr" rid="B42">2020c</xref>). However, researchers expressed differing views about REBs&#x00027; expertise. While most acknowledged a lack of AI-specific proficiency, many considered this unproblematic because the ethical issues of their AI research were unexceptional compared to other ethics issues raised by &#x0201C;big data&#x0201D; (Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>).</p>
<p>Limits of process and regulation are another concern faced by REBs, including a lack of consistency in decision-making within and across REBs, a lack of transparency, poor representation of the participants and public they are meant to represent, insufficient training, and a lack of measures to examine their effectiveness (Friesen et al., <xref ref-type="bibr" rid="B30">2021</xref>). There are several opinions on the need for and the effectiveness of REBs, with critics lamenting excessive bureaucracy, lack of reliability, inefficiency, and, importantly, high variance in outcomes (Prunkl et al., <xref ref-type="bibr" rid="B56">2021</xref>). To address the existing gap of knowledge between different fields, training could be used to help rebalance this and ensure sufficient expertise for all research experts to pursue responsible innovation (Stahl and Coeckelbergh, <xref ref-type="bibr" rid="B60">2016</xref>).</p>
<p>Researchers described the lack of standards and regulations for governing AI at the level of societal impact: the way that institutional ethics committees work is still acceptable, but there is a need for another level of thinking that combines everything and does not examine each project in isolation (Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>).</p>
<p>Finally, researchers have acknowledged the lack of ethical guidance, and some REBs report feeling ill-equipped to keep pace with rapidly changing technologies used in research (Ford et al., <xref ref-type="bibr" rid="B29">2020</xref>).</p>
</sec>
</sec>
<sec>
<title>3.2.5. Stakeholder perceptions and engagement</title>
<p>Researchers&#x00027; perspectives on AI research ethics may vary. While some claim that researchers often take action to counteract the adverse outcomes created by their research projects (Stahl and Coeckelbergh, <xref ref-type="bibr" rid="B60">2016</xref>), others contend that researchers do not always notice these outcomes (Aymerich-Franch and Fosch-Villaronga, <xref ref-type="bibr" rid="B7">2020</xref>). When the latter occurs, researchers are pressed to find solutions to deal with those outcomes (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>).</p>
<p>Furthermore, researchers are expected to engage more in AI research ethics. Researchers must demonstrate cooperation with certain institutions (i.e., industries and governments) (Cath et al., <xref ref-type="bibr" rid="B17">2018</xref>). Researchers are responsible for ensuring that their research project is conducted responsibly by considering participants&#x00027; needs (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). Research ethics committees usually comprise researchers from multiple disciplines, who are better equipped to answer broader ethical and societal questions (Aicardi et al., <xref ref-type="bibr" rid="B3">2018</xref>). However, there could be a clash of interests between parties when setting goals for a research project (Battistuzzi et al., <xref ref-type="bibr" rid="B8">2021</xref>).</p>
<p>Much of the time, different stakeholders do not necessarily understand other groups&#x00027; realities. Research is therefore vital to ensure that stakeholders can understand one another and operate within a shared frame of reference. This will help advance AI research ethics (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>).</p>
<p>Responsibility for ensuring a responsible utilization of AI lies with various groups of stakeholders (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). <xref ref-type="fig" rid="F3">Figure 3</xref> portrays some of the groups often mentioned throughout the literature. This figure aims to illustrate the number and variety of stakeholders who need to collaborate to ensure that AI is used in a responsible manner.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Overview of the stakeholders involved in regulation regarding AI in research ethics: the main active stakeholders (dark blue) and the main passive stakeholders (light blue).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frai-06-1149082-g0003.tif"/>
</fig>
<p>Many others, such as the private sector, can be added to the list. Studies have shown that private companies&#x00027; main interest is profit over improving health with the data collected using AI (McCradden et al., <xref ref-type="bibr" rid="B43">2020a</xref>). Another problem with the private sector is that it often does not fall under the regulation of ethical oversight boards, which means that AI systems or robots developed by private companies do not necessarily follow accepted ethical guidelines (Sedenberg et al., <xref ref-type="bibr" rid="B59">2016</xref>). This goes beyond ethical research concerns.</p>
</sec>
<sec>
<title>3.2.6. Key practices and processes for AI research</title>
<p>REBs may face new challenges in the context of research involving AI tools. Authors are calling for specific oversight mechanisms, especially for medical research projects.</p>
<sec>
<title>3.2.6.1. Avoid bias in AI data</title>
<p>While AI tools provide new opportunities to enhance medical health research, there is an emerging consensus among stakeholders regarding bias concerns in AI data, particularly in clinical trials. Since bias can worsen pre-existing disparities, researchers should proactively target a wide range of participants to establish sufficient evidence of an AI system&#x00027;s clinical benefit across different populations. To mitigate selection bias, REBs may require randomization in AI clinical trials. To achieve this, researchers must start by collecting more and better data from social minority groups (Grote, <xref ref-type="bibr" rid="B36">2021</xref>). Bias concerns should also be taken into account in the validation phase, where the performance of the AI system is measured against a benchmark data set. Hence, it is crucial to test AI systems for different subpopulations. Therefore, affirmative action in recruiting research participants for AI RCTs seems ethically permissible (Grote, <xref ref-type="bibr" rid="B36">2021</xref>). However, authors reported that stakeholders might encounter challenges accessing needed data in a context where severe legal constraints are imposed on sharing medical data (Grote, <xref ref-type="bibr" rid="B36">2021</xref>).</p>
</sec>
<sec>
<title>3.2.6.2. Attention to vulnerable populations</title>
<p>Vulnerable populations require heightened protection against the risks they may face in research.</p>
<p>When involving vulnerable populations, such as those with a mental health diagnosis, in AI medical health research, additional precautions should be considered to ensure that those involved in the study are duly protected from harm &#x02013; including stigma and economic and legal implications. In addition, it is essential to consider whether access barriers might exclude some people (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>).</p>
</sec>
<sec>
<title>3.2.6.3. Diversity, inclusion, and fairness</title>
<p>Another issue that needs to be raised when considering critical practices and scope in AI research relates to fair representation, diversity, and inclusion. According to Grote, one should explore concerns about the distribution of research participants and their representativeness for the state, country, or even world region in which the AI system is tested. Here, the author asks whether we should instead aim for parity among different gender, racial, and ethnic groups. He raised several questions to support REBs&#x00027; reflection on diversity, inclusion, and fairness issues: How should the reference classes for the different subpopulations be determined? What conditions must be met for fair subject selection in AI RCTs? And finally, when, if ever, is it morally justifiable to randomize research participants in medical AI trials? (Grote, <xref ref-type="bibr" rid="B36">2021</xref>).</p>
</sec>
<sec>
<title>3.2.6.4. Guidance to assess ethical issues in research involving robotics</title>
<p>The aging population and scarcity of health resources are significant challenges healthcare systems face today. Consequently, people with disabilities, especially elders with cognitive and mental impairments, are the most affected. The evolving field of research with assistive robots may be useful in providing care and assistance to these people. However, robotics research requires specific guidance when participants have physical or cognitive impairments. Indeed, particular challenges relate to informed consent, confidentiality, and participant rights (Battistuzzi et al., <xref ref-type="bibr" rid="B8">2021</xref>). According to some authors, REBs should ask several questions to address these issues: Is the research project expected to enhance the quality of care for the research participants? What is/are the ethical issue/s illustrated in this study? What are the facts? Is any important information not available in the research? Who are the stakeholders? Which course of action best fits with the recommendations and requirements set out in the &#x0201C;Ethical Considerations&#x0201D; section of the study protocol? How can that course of action be implemented in practice? Could the ethical issue/s presented in the case be prevented? If so, how? (Battistuzzi et al., <xref ref-type="bibr" rid="B8">2021</xref>).</p>
<p>Which ethical and social issues may neurorobotics raise, and are mechanisms currently implemented sufficiently to identify and address these? Is the notion that we may analyze, understand and reproduce what makes us human rooted in something other than reason (Aicardi et al., <xref ref-type="bibr" rid="B2">2020</xref>)?</p>
</sec>
<sec>
<title>3.2.6.5. Understanding of the process behind AI/ML data</title>
<p>A good understanding of the process behind AI/ML tools might be of interest to REBs when assessing the risk/benefit ratio of medical research involving AI. However, there seems to be a lack of awareness of how AI researchers obtain their results. Authors argue that it would not be possible to induce perception of the external environment in the neuron culture, or to interpret the signals from the neuron culture as motor commands, without a basic understanding of this neural code (Bentzen, <xref ref-type="bibr" rid="B11">2017</xref>). Indeed, when using digital health technologies, the first step is to ask whether the tools, be they apps or sensors, or AI applied to large data sets, have demonstrated value for outcomes. One should ask whether they are clinically effective, whether they measure what they purport to measure (validity) consistently (reliability), and finally, whether these innovations also improve access for those at the highest risk of health disparities (Nebeker et al., <xref ref-type="bibr" rid="B52">2019</xref>).</p>
<p>Indeed, the ethical issues of AI research raise major questions within the literature. What may seem surprising at first sight is that the body of literature is still relatively small and appears to be in an embryonic state regarding the ethics of the development and use of AI (outside the scope of academic research). The literature is thus more concerned with the broad questions of what constitutes research ethics in AI-specific research and with pointing out the gaps in normative guidelines, procedures, and infrastructures adapted to the oversight of responsible and ethical research in AI. Perhaps unsurprisingly, most of the questions relate to studies within the health sector. This is to be expected given the ascendancy of health in general within the research ethics field (Faden et al., <xref ref-type="bibr" rid="B27">2013</xref>). Thus, most considerations relate to applied health research, the implications for human participants (whether in digital health issues, research protocols, or interactions with different forms of robots), and whether projects should be subject to ethics review.</p>
<p>Interestingly, in AI-specific research ethics, traditional issues of participant protection (including confidentiality, consent, and autonomy in general) and research involving digital technologies intersect and are furthered by the uses of AI. Indeed, as AI requires big data and behaves very distinctly from other technologies, the primary considerations raised by the body of literature studied were predominantly classical AI ethics issues, contextualized and exacerbated within research ethics practices. For instance, one of the most prevalent ethical considerations raised and discussed was privacy and the new challenges regarding the massive amount of data collected and its use. If a breach of confidentiality were to occur, or if data collection were to lead to the discovery of further information, this would raise the possibility of harming individuals (Ford et al., <xref ref-type="bibr" rid="B29">2020</xref>; Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>). In addition, informed consent was widely mentioned and focused on transparency and explainability when the issues were AI-specific. Indeed, AI&#x00027;s black-box problem of explainability was raised many times. This is a challenge because it is not always easy to justify the results generated by AI (Jacobson et al., <xref ref-type="bibr" rid="B39">2020</xref>; Andreotta et al., <xref ref-type="bibr" rid="B5">2021</xref>). This in turn poses a problem for transparency: participants expect to have the information relevant to the trial in order to make an informed and conscious decision regarding their participation. Not having adequate knowledge to share with participants might not align with informed consent.</p>
<p>Furthermore, another principle was brought up many times: responsibility. Responsibility is shared chiefly between the researcher and the participant (Gooding and Kariotis, <xref ref-type="bibr" rid="B33">2021</xref>). Now that AI is added to the equation, it has become harder to determine who exactly should be held accountable for the occurrence of certain events (i.e., data error) and in what context (Meszaros and Ho, <xref ref-type="bibr" rid="B45">2021</xref>; Samuel and Gemma, <xref ref-type="bibr" rid="B58">2021</xref>). While shared responsibility is an idea many endorse and wish to implement, it is not easy to put into practice. Indeed, as seen in <xref ref-type="fig" rid="F3">Figure 3</xref>, many stakeholders (e.g., lawmakers, AI developers, AI users) may participate in responsibility sharing. However, much work will have to be put into finding a fair way to share responsibility between the parties involved.</p>
</sec>
</sec>
</sec>
</sec>
<sec id="s4">
<title>4. Discussion</title>
<p>Our results have implications mainly on three levels, as shown in <xref ref-type="fig" rid="F4">Figure 4</xref>. AI-specific implications for research ethics are addressed first, followed by what this means for the REBs that take on these challenges. Finally, new research avenues are discussed before we close with the limitations.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Line of progression on AI ethics resolution in research.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="frai-06-1149082-g0004.tif"/>
</fig>
<sec>
<title>4.1. AI-specific implications for research ethics</title>
<p>The issues raised by AI are eminently global. It is interesting to see in the articles presented in the scoping review that researchers in different countries are asking questions colored by the jurisdictional, social, and normative context in which the authors work. However, there appears to be heterogeneity in the advancement of AI research ethics thinking; this is particularly evident in the progress of research ethics initiatives within countries (see <xref ref-type="supplementary-material" rid="SM1">Supplementary Table 1</xref>). A striking finding is that very little development has been done regarding AI-specific standards and guidelines to frame and support research ethics worldwide.</p>
<p>At this point, the literature does not discuss the content of norms and their application to AI research. Instead, it makes initial observations about AI&#x00027;s issues and challenges for research ethics. In this sense, it is possible to see that the authors indicate new challenges posed by the emergence of AI in research ethics. AI makes many principles more challenging to assess (it seems quite difficult to use the current guidelines to balance the risks and benefits). One example is that it has become unclear which level of transparency is adequate (Geis et al., <xref ref-type="bibr" rid="B31">2019</xref>). AI validation, on the other hand, is not always performed optimally throughout AI&#x00027;s lifecycle (Vollmer et al., <xref ref-type="bibr" rid="B65">2020</xref>). Accountability remains a continuing issue, since it is still unclear whom to hold accountable, and to what extent, with AI in play (Greatbatch et al., <xref ref-type="bibr" rid="B34">2019</xref>). In addition, AI is also known to amplify certain traditional issues in research ethics. For example, AI blurs the notion of free and informed consent, since the information a patient or participant needs regarding AI is yet to be determined (Gerke and Timo Minssen, <xref ref-type="bibr" rid="B32">2020</xref>). Privacy is getting harder to manage because it has become possible with AI to identify individuals by analyzing all the data available, even after deidentification (Ahuja, <xref ref-type="bibr" rid="B1">2019</xref>). Data bias is another leading example: AI may not only fail to detect bias in the data it is fed but could also generate more biased results (Auger et al., <xref ref-type="bibr" rid="B6">2020</xref>).</p>
<p>Interestingly, the very distinction between the new AI-related issues and the old, amplified ones is still not entirely clear to researchers. For instance, while AI is quickly targeted for generating biased results, the source of the problem could be biased data fed to the AI (Cath et al., <xref ref-type="bibr" rid="B17">2018</xref>; Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>; Grote, <xref ref-type="bibr" rid="B36">2021</xref>). Another issue is the lack of robustness, where it is challenging to rely entirely on AI to always give accurate results (Grote, <xref ref-type="bibr" rid="B36">2021</xref>). However, this issue is also found in human-based decision-making. Thus, the most efficient use of AI could depend on context, with the final decision reserved for humans, limiting AI&#x00027;s role to that of an assistive tool (Ienca and Ignatiadis, <xref ref-type="bibr" rid="B38">2020</xref>). Therefore, drawing a picture of what is new and what is less so is difficult. However, there is no doubt that AI is disrupting the field of research ethics, its processes, practices, and standards. This also points to the fact that no AI-specific research ethics guidelines exist to give a sense of how best to evaluate AI in a way compatible with RE guidance.</p>
<p>Another observation is that research ethics (and a fortiori research ethics committees) are very limited in their approach to AI development and research. This means that research ethics only comes into play at a specific point in developing AI technologies, interventions, and knowledge, i.e., after an AIS is developed and before its implementation in a real context. Thus, research ethics, as it has developed in most countries, focuses on what happens within public organizations and when human participants are involved. This excludes technological developments by industry, which do not require ethical certification. Therefore, the vast majority of AIS outside the health and social services sector, such as those using data found in social media or geolocation, will not be subject to research ethics reviews (Samuel and Derrick, <xref ref-type="bibr" rid="B57">2020</xref>). But even within the health sector, AIS that do not directly interact with patients could largely be excluded from the scope of research ethics and the mandate of REBs. This makes the field of AI research ethics very new and small compared to responsible AI innovation.</p>
</sec>
<sec>
<title>4.2. What this means for REBs</title>
<p>No author seems to indicate that REBs are prepared and equipped to evaluate research projects involving AI as rigorously, confidently, and consistently as more traditional research protocols (i.e., not involving AI). One paper from Sedenberg et al. (<xref ref-type="bibr" rid="B59">2016</xref>) expressly indicates that the current REB model should be replicated in the private sector to help oversee and guide AI development (Sedenberg et al., <xref ref-type="bibr" rid="B59">2016</xref>). Arguably, the call is more about adding an appraising actor to private-sector technology developments than about praising REBs for their mastery and competence in AI research ethics review. Yet, it still conveys a relatively positive perception of the current readiness and relevance of REBs to research ethics. This may also reflect a lack of awareness (from uninformed stakeholders) of the limitations faced by REBs, which on paper can probably be seen as able to evaluate research protocols involving AI as well as other projects. This is, however, disputed or refuted by the rest of the literature studied.</p>
<p>The bulk of the body of literature reviewed was more circumspect about the capacity of REBs. Not that they are not competent, but rather that they do not have the tools to start with: a normative framework relevant to AI research, conceptually rigorous and comprehensive, and performative and appropriate to the mandates, processes, and practices of REBs. Over the last several decades, REBs have primarily relied on somewhat comprehensive and, to some extent, harmonized regulations and frameworks to inform and guide their ethical evaluation. Lacking such frameworks for AI, REBs face new challenges without any tools to support their decisions on AI dilemmas. The authors of our body of literature thus seem to place a higher expectation on all stakeholders to find solutions that address the specificities and challenges of AI in research ethics.</p>
<p>One of the first points is quite simple: determining when research involving AI should be subject to research ethics review. Yet even this simple point is not consensual. Beyond it, serious concerns arise about the current mandate of REBs and their ability to evaluate AI with their current means and frameworks. Not only are they missing clear guidelines for conducting any kind of standard assessment of AI in research ethics, but their roles are also not clearly defined. Indeed, should their role be extended to look not just at research but also at the use of downstream technology? Or does this require another ethics oversight body that would look more at the technology in real life? This raises the question of how a lifecycle evaluative process can best be structured and how a continuum of evaluation can be developed that is adapted to this adaptive technology.</p>
</sec>
<sec>
<title>4.3. New research avenues</title>
<p>Given the heterogeneity of norms and regulations regarding AI across countries, there would be value in initiating an international comparative analysis. The aim would be to investigate how REBs have adapted their practices in evaluating AI-based research projects with little input and support from existing norms. Such an analysis could raise many questions (e.g., are there key issues that are impossible to universalize?).</p>
<sec>
<title>4.3.1. The scope and approach of ethics review by REBs must be revisited in light of the specificities of research using AI</title>
<p>The primary considerations discussed above raise new challenges for the scope and approaches of REB practices when reviewing research with AI. Furthermore, applications developed within a research framework often rely on population-based systems, leading REBs to question whether their assessment should remain centered on a systematic individual approach or extend to societal considerations and their underlying foundations.</p>
<p>However, AI research is still emerging, which underlines the difficulty of settling such a debate. Finally, one may wonder about the weight of current AI guidelines within the process of ethical evaluation by REBs. Should this reflection be limited to REBs alone? Or should it include other actors, such as scientists or civil society?</p>
<p>AI ethics is not limited to research. Although less discussed, AI ethics raises many existential questions. Dynamics such as the patient-physician relationship will have to adapt to a new reality (Chassang et al., <xref ref-type="bibr" rid="B18">2021</xref>). With human tasks being delegated to AI, notions of personhood (Aymerich-Franch and Fosch-Villaronga, <xref ref-type="bibr" rid="B7">2020</xref>), autonomy (Aicardi et al., <xref ref-type="bibr" rid="B2">2020</xref>), and human status in our society (Farisco et al., <xref ref-type="bibr" rid="B28">2020</xref>) are threatened. This leads to the deeper question of what it means to be human. Robots used in therapies to care for patients (e.g., autistic children) could induce attachment issues and other psychological impacts (Coeckelbergh et al., <xref ref-type="bibr" rid="B19">2016</xref>). This points to another issue, overreliance on AI, a problem similar to that raised by existing technological tools such as cell phones (Holte and Ferraro, <xref ref-type="bibr" rid="B37">2021</xref>).</p>
</sec>
<sec>
<title>4.3.2. Updating and adapting processes in ethics committees</title>
<p>AI ethics is still an emerging field. REBs ensure the application of ethical frameworks, laws, and regulations. Our results suggest that AI research involves complex issues emerging from new research strategies and methodologies drawing on fields such as computer science, mathematics, and digital technology. REBs&#x00027; concerns thus center on recognizing and assessing the ethical issues that arise from these studies and on adapting to the rapid changes in this emerging field.</p>
<p>In research ethics, respect for a person&#x00027;s dignity is essential. In several normative frameworks, such as the TCPS in Canada, it encompasses respect for persons, concern for wellbeing, and justice. In AI research, REBs might need to reassess the notion of consent or the participant&#x00027;s place in the study. As with all research, REBs must ensure informed consent. However, there does not seem to be a clear consensus on the standard for obtaining informed consent in AI research. For example, REBs should consider how AI&#x00027;s interpretability is addressed in a research consent form, so that results are conveyed in a transparent and intelligible manner.</p>
<p>Another issue for REBs to consider is the role of participants in AI research. Indeed, active participant involvement is not always necessary in AI research to complete the data collection needed to meet the research objectives. This is often the case when data are collected from connected digital devices or by querying databases. One consequence is an amplified dematerialization of research participation, alongside an easier circulation of data.</p>
<p>Furthermore, AI research and the use of these new technologies call on REBs to be aware of what this implies for research participants, particularly with respect to the continuous consent process, the management of withdrawal, and the duration of participation in the research.</p>
<p>While protecting the individual participant takes center stage in REB evaluations, research with AI may focus more on using data obtained from databases held by governments, private bodies, institutions, or academics. In this context, should concerns for societal wellbeing prevail over the wellbeing of the individual? There does not appear to be a clear consensus on which principles should be invoked to address this concern.</p>
</sec>
</sec>
<sec>
<title>4.4. Limitations</title>
<p>The focal point of AI evaluation was often privacy protection and data governance rather than AI ethics as such. While data protection and governance are critically important issues, it is equally important to investigate AI-specific concerns that should not be left out, such as AI validity, explainability, and transparency. In addition, the FAIR principles and the ethics of care, which are starting to become standard approaches in the field, were not invoked in the articles to inform AI ethics in research. This may be because the reviewed literature addresses AI ethics far less than research ethics in general.</p>
<p>Another limitation worth outlining is that our final sample mainly reflected the realities and issues found in healthcare, even though our scoping review was open to all fields using AI. This could be due to the fact that AI is becoming more prominent in healthcare (Davenport and Kalakota, <xref ref-type="bibr" rid="B22">2019</xref>). The field is also closely linked to the development and presence of research ethics boards (Edwards et al., <xref ref-type="bibr" rid="B26">2007</xref>). The predominance of healthcare in our sample could also be attributed to research ethics largely stemming from multiple medical research incidents throughout history (Aita and Richer, <xref ref-type="bibr" rid="B4">2005</xref>).</p>
<p>Furthermore, across the studied articles, few if any of the countries mentioned were non-affluent. This raises concerns about widening disparities between developed and developing countries. It is therefore vital to acknowledge the asymmetry of legislative and societal norms between countries in order to better serve their needs and avoid colonizing practices.</p>
<p>Finally, this topic lacks maturity. This study primarily shows that REBs cannot yet find adequate guidance in the literature. Indeed, findings regarding recommendations and practices to adopt in research using AI are scarce, and even fewer specifically aim to equip REBs. Reported suggestions tend to concern behaviors that governments or researchers should adopt rather than the criteria REBs should follow during their assessments. Therefore, this study does not lead to findings directly applicable to REB practice and should not be used as a tool by REBs.</p>
</sec>
</sec>
<sec id="s5">
<title>5. Conclusion</title>
<p>Every field has its own ethical challenges and needs, and the results presented in this article reflect this reality. Indeed, we have navigated through some general issues of AI ethics before investigating those specific to AI research. This allowed us to discern what research ethics boards focus on during their evaluations and the limits imposed on them when evaluating the ethics of AI in research. While AI is a promising field to explore and invest in, many caveats force us to develop a better understanding of these systems. With AI&#x00027;s development, many societal challenges will come our way, whether current ongoing issues, new AI-specific ones, or those still unknown to us. Ethical reflection is moving forward, while the adaptation of normative guidelines to AI&#x00027;s reality is still lagging. This affects REBs and most stakeholders involved with AI. Nevertheless, many suggestions and recommendations were provided throughout the literature, which could allow us to build a framework with a clear set of practices that could be implemented for real-world use.</p>
</sec>
<sec sec-type="author-contributions" id="s6">
<title>Author contributions</title>
<p>SBG: data collection, data curation, writing&#x02014;original draft, and writing&#x02014;review and editing. PG: conceptualization, methodology, data collection, writing&#x02014;original draft, writing&#x02014;review and editing, supervision, and project administration. JCBP: conceptualization, methodology, data collection, writing&#x02014;review and editing, supervision, project administration, and funding acquisition. All authors contributed to the article and approved the submitted version.</p>
</sec>
</body>
<back>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s7">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<sec sec-type="supplementary-material" id="s8">
<title>Supplementary material</title>
<p>The Supplementary Material for this article can be found online at: <ext-link ext-link-type="uri" xlink:href="https://www.frontiersin.org/articles/10.3389/frai.2023.1149082/full#supplementary-material">https://www.frontiersin.org/articles/10.3389/frai.2023.1149082/full#supplementary-material</ext-link></p>
<supplementary-material xlink:href="Table_1.docx" id="SM1" mimetype="application/vnd.openxmlformats-officedocument.wordprocessingml.document" xmlns:xlink="http://www.w3.org/1999/xlink"/>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahuja</surname> <given-names>A. S.</given-names></name></person-group> (<year>2019</year>). <article-title>The impact of artificial intelligence in medicine on the future role of the physician</article-title>. <source>PeerJ</source> <volume>7</volume>, <fpage>e7702</fpage>. <pub-id pub-id-type="doi">10.7717/peerj.7702</pub-id><pub-id pub-id-type="pmid">31592346</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aicardi</surname> <given-names>C.</given-names></name> <name><surname>Akintoye</surname> <given-names>S.</given-names></name> <name><surname>Fothergill</surname> <given-names>B. T.</given-names></name> <name><surname>Guerrero</surname> <given-names>M.</given-names></name> <name><surname>Klinker</surname> <given-names>G.</given-names></name> <name><surname>Knight</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Ethical and social aspects of neurorobotics</article-title>. <source>Sci. Eng. Ethics</source> <volume>26</volume>, <fpage>2533</fpage>&#x02013;<lpage>2546</lpage>. <pub-id pub-id-type="doi">10.1007/s11948-020-00248-8</pub-id><pub-id pub-id-type="pmid">32700245</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aicardi</surname> <given-names>C.</given-names></name> <name><surname>Fothergill</surname> <given-names>B. T.</given-names></name> <name><surname>Rainey</surname> <given-names>S.</given-names></name> <name><surname>Stahl</surname> <given-names>B. C.</given-names></name> <name><surname>Harris</surname> <given-names>E.</given-names></name></person-group> (<year>2018</year>). <article-title>Accompanying technology development in the human brain project: from foresight to ethics management</article-title>. <source>Futures</source> <volume>102</volume>, <fpage>114</fpage>&#x02013;<lpage>124</lpage>. <pub-id pub-id-type="doi">10.1016/j.futures.2018.01.005</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aita</surname> <given-names>M.</given-names></name> <name><surname>Richer</surname> <given-names>M.-C.</given-names></name></person-group> (<year>2005</year>). <article-title>Essentials of research ethics for healthcare professionals</article-title>. <source>Nurs. Health Sci.</source> <volume>7</volume>, <fpage>119</fpage>&#x02013;<lpage>125</lpage>. <pub-id pub-id-type="doi">10.1111/j.1442-2018.2005.00216.x</pub-id><pub-id pub-id-type="pmid">15877688</pub-id></citation></ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Andreotta</surname> <given-names>A. J.</given-names></name> <name><surname>Kirkham</surname> <given-names>N.</given-names></name> <name><surname>Rizzi</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>AI, big data, and the future of consent</article-title>. <source>AI Soc.</source> <volume>17</volume>, <fpage>1</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.1007/s00146-021-01262-5</pub-id><pub-id pub-id-type="pmid">34483498</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Auger</surname> <given-names>S. D.</given-names></name> <name><surname>Jacobs</surname> <given-names>B. M.</given-names></name> <name><surname>Dobson</surname> <given-names>R.</given-names></name> <name><surname>Marshall</surname> <given-names>C. R.</given-names></name> <name><surname>Noyce</surname> <given-names>A. J.</given-names></name></person-group> (<year>2020</year>). <article-title>Big data, machine learning and artificial intelligence: a neurologist&#x00027;s guide</article-title>. <source>Pract Neurol</source> <volume>21</volume>, <fpage>4</fpage>&#x02013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1136/practneurol-2020-002688</pub-id><pub-id pub-id-type="pmid">32994368</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Aymerich-Franch</surname> <given-names>L.</given-names></name> <name><surname>Fosch-Villaronga</surname> <given-names>E.</given-names></name></person-group> (<year>2020</year>). <article-title>A self-guiding tool to conduct research with embodiment technologies responsibly</article-title>. <source>Front. Robotic. AI</source> <volume>7</volume>, <fpage>22</fpage>. <pub-id pub-id-type="doi">10.3389/frobt.2020.00022</pub-id><pub-id pub-id-type="pmid">33501191</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Battistuzzi</surname> <given-names>L.</given-names></name> <name><surname>Papadopoulos</surname> <given-names>C.</given-names></name> <name><surname>Hill</surname> <given-names>T.</given-names></name> <name><surname>Castro</surname> <given-names>N.</given-names></name> <name><surname>Bruno</surname> <given-names>B.</given-names></name> <name><surname>Sgorbissa</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Socially assistive robots, older adults and research ethics: the case for case-based ethics training</article-title>. <source>Int. J. Soc. Robotics</source> <volume>13</volume>, <fpage>647</fpage>&#x02013;<lpage>659</lpage>. <pub-id pub-id-type="doi">10.1007/s12369-020-00652-x</pub-id></citation>
</ref>
<ref id="B9">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>B&#x000E9;lisle-Pipon</surname> <given-names>J. -C.</given-names></name> <name><surname>Couture</surname> <given-names>V.</given-names></name> <name><surname>Roy</surname> <given-names>M. -C.</given-names></name> <name><surname>Ganache</surname> <given-names>I.</given-names></name> <name><surname>Goetghebeur</surname> <given-names>M.</given-names></name> <name><surname>Cohen</surname> <given-names>I. G.</given-names></name></person-group> (<year>2021</year>). <article-title>What makes artificial intelligence exceptional in health technology assessment?</article-title>. <source>Front. Artif. Intell.</source> <volume>4</volume>, <fpage>736697</fpage>. <pub-id pub-id-type="doi">10.3389/frai.2021.736697</pub-id><pub-id pub-id-type="pmid">34796318</pub-id></citation></ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>B&#x000E9;lisle-Pipon</surname> <given-names>J. C.</given-names></name> <name><surname>Monteferrante</surname> <given-names>E.</given-names></name> <name><surname>Roy</surname> <given-names>M. C.</given-names></name> <name><surname>Couture</surname> <given-names>V.</given-names></name></person-group> (<year>2022</year>). <article-title>Artificial intelligence ethics has a black box problem</article-title>. <source>AI Soc</source>. <pub-id pub-id-type="doi">10.1007/s00146-021-01380-0</pub-id></citation>
</ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bentzen</surname> <given-names>M. M.</given-names></name></person-group> (<year>2017</year>). <article-title>Black boxes on wheels: research challenges and ethical problems in MEA-based robotics</article-title>. <source>Ethics Inf. Technol.</source> <volume>19</volume>, <fpage>19</fpage>&#x02013;<lpage>28</lpage>. <pub-id pub-id-type="doi">10.1007/s10676-016-9415-z</pub-id></citation>
</ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bonnet</surname> <given-names>F.</given-names></name> <name><surname>Robert</surname> <given-names>B.</given-names></name></person-group> (<year>2009</year>). <article-title>La r&#x000E9;gulation &#x000E9;thique de la recherche aux &#x000C9;tats-Unis: histoire, &#x000E9;tat des lieux et enjeux</article-title>. <source>Gen&#x000E8;ses</source> <volume>2</volume>, <fpage>87</fpage>&#x02013;<lpage>108</lpage>. <pub-id pub-id-type="doi">10.3917/gen.075.0087</pub-id></citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Braun</surname> <given-names>V.</given-names></name> <name><surname>Clarke</surname> <given-names>V.</given-names></name></person-group> (<year>2006</year>). <article-title>Using thematic analysis in psychology</article-title>. <source>Qualitative Res. Psychol.</source> <volume>3</volume>, <fpage>77</fpage>&#x02013;<lpage>101</lpage>. <pub-id pub-id-type="doi">10.1191/1478088706qp063oa</pub-id></citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brynjolfsson</surname> <given-names>E.</given-names></name> <name><surname>McAfee</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Artificial intelligence, for real</article-title>. <source>Harvard Bus. Rev.</source> <volume>1</volume>, <fpage>1</fpage>&#x02013;<lpage>31</lpage>.</citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Calmet</surname> <given-names>J.</given-names></name> <name><surname>Campbell</surname> <given-names>J. A.</given-names></name></person-group> (<year>1997</year>). <article-title>A perspective on symbolic mathematical computing and artificial intelligence</article-title>. <source>Annal. Mathematics Artif. Int.</source> <volume>19</volume>, <fpage>261</fpage>&#x02013;<lpage>277</lpage>. <pub-id pub-id-type="doi">10.1023/A:1018920108903</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cath</surname> <given-names>C.</given-names></name></person-group> (<year>2018</year>). <source>Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges</source>. <publisher-loc>London</publisher-loc>: <publisher-name>The Royal Society Publishing</publisher-name>.<pub-id pub-id-type="pmid">30322996</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cath</surname> <given-names>C.</given-names></name> <name><surname>Wachter</surname> <given-names>S.</given-names></name> <name><surname>Mittelstadt</surname> <given-names>B.</given-names></name> <name><surname>Taddeo</surname> <given-names>M.</given-names></name> <name><surname>Floridi</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>Artificial intelligence and the &#x02018;good society&#x00027;: the US, EU, and UK approach</article-title>. <source>Sci. Eng. Ethics</source> <volume>24</volume>, <fpage>505</fpage>&#x02013;<lpage>528</lpage>. <pub-id pub-id-type="doi">10.1007/s11948-017-9901-7</pub-id><pub-id pub-id-type="pmid">28353045</pub-id></citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chassang</surname> <given-names>G.</given-names></name> <name><surname>Thomsen</surname> <given-names>M.</given-names></name> <name><surname>Rumeau</surname> <given-names>P.</given-names></name> <name><surname>Sedes</surname> <given-names>F.</given-names></name> <name><surname>Delfin</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>An interdisciplinary conceptual study of artificial intelligence (AI) for helping benefit-risk assessment practices</article-title>. <source>AI Commun.</source> <volume>34</volume>, <fpage>121</fpage>&#x02013;<lpage>146</lpage>. <pub-id pub-id-type="doi">10.3233/AIC-201523</pub-id></citation>
</ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Coeckelbergh</surname> <given-names>M.</given-names></name> <name><surname>Pop</surname> <given-names>C.</given-names></name> <name><surname>Simut</surname> <given-names>R.</given-names></name> <name><surname>Peca</surname> <given-names>A.</given-names></name> <name><surname>Pintea</surname> <given-names>S.</given-names></name> <name><surname>David</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>A survey of expectations about the role of robots in robot-assisted therapy for children with ASD: ethical acceptability, trust, sociability, appearance, and attachment</article-title>. <source>Sci. Eng. Ethics</source> <volume>22</volume>, <fpage>47</fpage>&#x02013;<lpage>65</lpage>. <pub-id pub-id-type="doi">10.1007/s11948-015-9649-x</pub-id><pub-id pub-id-type="pmid">25894654</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Colquhoun</surname> <given-names>H. L.</given-names></name> <name><surname>Levac</surname> <given-names>D.</given-names></name> <name><surname>O&#x00027;Brien</surname> <given-names>K. K.</given-names></name> <name><surname>Straus</surname> <given-names>S.</given-names></name> <name><surname>Tricco</surname> <given-names>A. C.</given-names></name> <name><surname>Perrier</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>Scoping reviews: time for clarity in definition, methods, and reporting</article-title>. <source>J. Clin. Epidemiol.</source> <volume>67</volume>, <fpage>1291</fpage>&#x02013;<lpage>1294</lpage>. <pub-id pub-id-type="doi">10.1016/j.jclinepi.2014.03.013</pub-id><pub-id pub-id-type="pmid">25034198</pub-id></citation></ref>
<ref id="B21">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Copeland</surname> <given-names>B. J.</given-names></name></person-group> (<year>2022</year>). <source>Artificial Intelligence</source>. <publisher-loc>Chicago, IL</publisher-loc>: <publisher-name>Encyclopedia Britannica</publisher-name>.</citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davenport</surname> <given-names>T.</given-names></name> <name><surname>Kalakota</surname> <given-names>R.</given-names></name></person-group> (<year>2019</year>). <article-title>The potential for artificial intelligence in healthcare</article-title>. <source>Future Healthc. J.</source> <volume>6</volume>, <fpage>94</fpage>&#x02013;<lpage>98</lpage>. <pub-id pub-id-type="doi">10.7861/futurehosp.6-2-94</pub-id><pub-id pub-id-type="pmid">31363513</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Davenport</surname> <given-names>T. H.</given-names></name> <name><surname>Ronanki</surname> <given-names>R.</given-names></name></person-group> (<year>2018</year>). <article-title>Artificial intelligence for the real world</article-title>. <source>Harvard Bus. Rev.</source> <volume>96</volume>, <fpage>108</fpage>&#x02013;<lpage>116</lpage>.</citation>
</ref>
<ref id="B24">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Dignum</surname> <given-names>V.</given-names></name></person-group> (<year>2018</year>). <source>Ethics in Artificial Intelligence: Introduction to the Special Issue.</source> <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer</publisher-name>. <pub-id pub-id-type="doi">10.1007/s10676-018-9450-z</pub-id><pub-id pub-id-type="pmid">37359797</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Durand</surname> <given-names>G.</given-names></name></person-group> (<year>2005</year>). <source>Introduction G&#x000E9;n&#x000E9;rale &#x000E0; La Bio&#x000E9;thique, Histoire Concepts et Outils</source>. <publisher-loc>Vatican</publisher-loc>: <publisher-name>Fides</publisher-name>.</citation>
</ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Edwards</surname> <given-names>S. J. L.</given-names></name> <name><surname>Stone</surname> <given-names>T.</given-names></name> <name><surname>Swift</surname> <given-names>T.</given-names></name></person-group> (<year>2007</year>). <article-title>Differences between research ethics committees</article-title>. <source>Int. J. Technol. Assess. Health Care</source> <volume>23</volume>, <fpage>17</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.1017/S0266462307051525</pub-id><pub-id pub-id-type="pmid">17234012</pub-id></citation></ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Faden</surname> <given-names>R. R.</given-names></name> <name><surname>Kass</surname> <given-names>N. E.</given-names></name> <name><surname>Goodman</surname> <given-names>S. N.</given-names></name> <name><surname>Pronovost</surname> <given-names>P.</given-names></name> <name><surname>Tunis</surname> <given-names>S.</given-names></name> <name><surname>Beauchamp</surname> <given-names>T. L.</given-names></name></person-group> (<year>2013</year>). <article-title>An ethics framework for a learning health care system: a departure from traditional research ethics and clinical ethics</article-title>. <source>Hastings Center Rep.</source> <volume>43</volume>, <fpage>S16</fpage>&#x02013;<lpage>27</lpage>. <pub-id pub-id-type="doi">10.1002/hast.134</pub-id><pub-id pub-id-type="pmid">23315888</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Farisco</surname> <given-names>M.</given-names></name> <name><surname>Evers</surname> <given-names>K.</given-names></name> <name><surname>Salles</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Towards establishing criteria for the ethical analysis of artificial intelligence</article-title>. <source>Sci. Eng. Ethics</source> <volume>26</volume>, <fpage>2413</fpage>&#x02013;<lpage>2425</lpage>. <pub-id pub-id-type="doi">10.1007/s11948-020-00238-w</pub-id><pub-id pub-id-type="pmid">32638285</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ford</surname> <given-names>E.</given-names></name> <name><surname>Shepherd</surname> <given-names>S.</given-names></name> <name><surname>Jones</surname> <given-names>K.</given-names></name> <name><surname>Hassan</surname> <given-names>L.</given-names></name></person-group> (<year>2020</year>). <article-title>Toward an ethical framework for the text mining of social media for health research: a systematic review</article-title>. <source>Front. Digital Health</source> <volume>2</volume>, <fpage>592237</fpage>. <pub-id pub-id-type="doi">10.3389/fdgth.2020.592237</pub-id><pub-id pub-id-type="pmid">34713062</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Friesen</surname> <given-names>P.</given-names></name> <name><surname>Douglas-Jones</surname> <given-names>R.</given-names></name> <name><surname>Marks</surname> <given-names>M.</given-names></name> <name><surname>Pierce</surname> <given-names>R.</given-names></name> <name><surname>Fletcher</surname> <given-names>K.</given-names></name> <name><surname>Mishra</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Governing AI-driven health research: are IRBs up to the task?</article-title> <source>Ethics Hum. Res.</source> <volume>43</volume>, <fpage>35</fpage>&#x02013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1002/eahr.500085</pub-id><pub-id pub-id-type="pmid">33683015</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Geis</surname> <given-names>J. R.</given-names></name> <name><surname>Brady</surname> <given-names>A. P.</given-names></name> <name><surname>Wu</surname> <given-names>C. C.</given-names></name> <name><surname>Spencer</surname> <given-names>J.</given-names></name> <name><surname>Ranschaert</surname> <given-names>E.</given-names></name> <name><surname>Jaremko</surname> <given-names>J. L.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Ethics of artificial intelligence in radiology: summary of the joint european and north American multisociety statement</article-title>. <source>Radiology</source> <volume>293</volume>, <fpage>436</fpage>&#x02013;<lpage>440</lpage>. <pub-id pub-id-type="doi">10.1148/radiol.2019191586</pub-id><pub-id pub-id-type="pmid">31585825</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gerke</surname> <given-names>S.</given-names></name> <name><surname>Minssen</surname> <given-names>T.</given-names></name> <name><surname>Cohen</surname> <given-names>G.</given-names></name></person-group> (<year>2020</year>). <source>Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare. Artificial Intelligence in Healthcare.</source> <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>Elsevier</publisher-name>, <fpage>295</fpage>&#x02013;<lpage>336</lpage>.<pub-id pub-id-type="pmid">32245804</pub-id></citation></ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gooding</surname> <given-names>P.</given-names></name> <name><surname>Kariotis</surname> <given-names>T.</given-names></name></person-group> (<year>2021</year>). <article-title>Ethics and law in research on algorithmic and data-driven technology in mental health care: scoping review</article-title>. <source>JMIR Ment. Health</source> <volume>8</volume>, <fpage>e24668</fpage>. <pub-id pub-id-type="doi">10.2196/24668</pub-id><pub-id pub-id-type="pmid">34110297</pub-id></citation></ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Greatbatch</surname> <given-names>O.</given-names></name> <name><surname>Garrett</surname> <given-names>A.</given-names></name> <name><surname>Snape</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>The impact of artificial intelligence on the current and future practice of clinical cancer genomics</article-title>. <source>Genet. Res.</source> <volume>101</volume>, <fpage>e9</fpage>. <pub-id pub-id-type="doi">10.1017/S0016672319000089</pub-id><pub-id pub-id-type="pmid">31668155</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="confproc"><person-group person-group-type="author"><name><surname>Greene</surname> <given-names>D.</given-names></name> <name><surname>Hoffmann</surname> <given-names>A. L.</given-names></name> <name><surname>Stark</surname> <given-names>L.</given-names></name></person-group> (<year>2019</year>). <article-title>Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning</article-title>, in <source>Proceedings of the 52nd Hawaii International Conference on System Sciences</source>. <pub-id pub-id-type="doi">10.24251/HICSS.2019.258</pub-id></citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Grote</surname> <given-names>T.</given-names></name></person-group> (<year>2021</year>). <article-title>Randomised controlled trials in medical AI: ethical considerations</article-title>. <source>J. Med. Ethics</source> <volume>48</volume>, <fpage>899</fpage>&#x02013;<lpage>906</lpage>. <pub-id pub-id-type="doi">10.1136/medethics-2020-107166</pub-id><pub-id pub-id-type="pmid">33990429</pub-id></citation></ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Holte</surname> <given-names>A. J.</given-names></name> <name><surname>Ferraro</surname> <given-names>F. R.</given-names></name></person-group> (<year>2021</year>). <article-title>Tethered to texting: reliance on texting and emotional attachment to cell phones</article-title>. <source>Curr. Psychol.</source> <volume>40</volume>, <fpage>1</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1007/s12144-018-0037-y</pub-id></citation>
</ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ienca</surname> <given-names>M.</given-names></name> <name><surname>Ignatiadis</surname> <given-names>K.</given-names></name></person-group> (<year>2020</year>). <article-title>Artificial intelligence in clinical neuroscience: methodological and ethical challenges</article-title>. <source>AJOB Neurosci.</source> <volume>11</volume>, <fpage>77</fpage>&#x02013;<lpage>87</lpage>. <pub-id pub-id-type="doi">10.1080/21507740.2020.1740352</pub-id><pub-id pub-id-type="pmid">32228387</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jacobson</surname> <given-names>N. C.</given-names></name> <name><surname>Bentley</surname> <given-names>K. H.</given-names></name> <name><surname>Walton</surname> <given-names>A.</given-names></name> <name><surname>Wang</surname> <given-names>S. B.</given-names></name> <name><surname>Fortgang</surname> <given-names>R. G.</given-names></name> <name><surname>Millner</surname> <given-names>A. J.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Ethical dilemmas posed by mobile health and machine learning in psychiatry research</article-title>. <source>Bull World Health Organ.</source> <volume>98</volume>, <fpage>270</fpage>&#x02013;<lpage>276</lpage>. <pub-id pub-id-type="doi">10.2471/BLT.19.237107</pub-id><pub-id pub-id-type="pmid">32284651</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>M. D.</given-names></name> <name><surname>Chang</surname> <given-names>K.</given-names></name> <name><surname>Mei</surname> <given-names>X.</given-names></name> <name><surname>Bernheim</surname> <given-names>A.</given-names></name> <name><surname>Chung</surname> <given-names>M.</given-names></name> <name><surname>Steinberger</surname> <given-names>S.</given-names></name> <name><surname>Little</surname> <given-names>B. P.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Radiology implementation considerations for artificial intelligence (AI) applied to COVID-19, from the AJR special series on AI applications</article-title>. <source>AJR</source> <volume>291</volume>, <fpage>15</fpage>&#x02013;<lpage>23</lpage>. <pub-id pub-id-type="doi">10.2214/AJR.21.26717</pub-id><pub-id pub-id-type="pmid">34612681</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mazurek</surname> <given-names>G.</given-names></name> <name><surname>Ma&#x00142;agocka</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>Perception of privacy and data protection in the context of the development of artificial intelligence</article-title>. <source>J. Manage. Anal.</source> <volume>6</volume>, <fpage>344</fpage>&#x02013;<lpage>364</lpage>. <pub-id pub-id-type="doi">10.1080/23270012.2019.1671243</pub-id></citation>
</ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCradden</surname> <given-names>M. D.</given-names></name> <name><surname>Anderson</surname> <given-names>J. A.</given-names></name> <name><surname>Zlotnik Shaul</surname> <given-names>R.</given-names></name></person-group> (<year>2020c</year>). <article-title>Accountability in the machine learning pipeline: the critical role of research ethics oversight</article-title>. <source>Am. J. Bioeth.</source> <volume>20</volume>, <fpage>40</fpage>&#x02013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1080/15265161.2020.1820111</pub-id><pub-id pub-id-type="pmid">33103980</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCradden</surname> <given-names>M. D.</given-names></name> <name><surname>Baba</surname> <given-names>A.</given-names></name> <name><surname>Saha</surname> <given-names>A.</given-names></name> <name><surname>Ahmad</surname> <given-names>S.</given-names></name> <name><surname>Boparai</surname> <given-names>K.</given-names></name> <name><surname>Fadaiefard</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2020a</year>). <article-title>Ethical concerns around use of artificial intelligence in health care research from the perspective of patients with meningioma, caregivers and health care providers: a qualitative study</article-title>. <source>CMAJ Open</source> <volume>8</volume>, <fpage>E90</fpage>&#x02013;<lpage>E95</lpage>. <pub-id pub-id-type="doi">10.9778/cmajo.20190151</pub-id><pub-id pub-id-type="pmid">32071143</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McCradden</surname> <given-names>M. D.</given-names></name> <name><surname>Stephenson</surname> <given-names>E. A.</given-names></name> <name><surname>Anderson</surname> <given-names>J. A.</given-names></name></person-group> (<year>2020b</year>). <article-title>Clinical research underlies ethical integration of healthcare artificial intelligence</article-title>. <source>Nat. Med.</source> <volume>26</volume>, <fpage>1325</fpage>&#x02013;<lpage>1326</lpage>. <pub-id pub-id-type="doi">10.1038/s41591-020-1035-9</pub-id><pub-id pub-id-type="pmid">32908273</pub-id></citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meszaros</surname> <given-names>J.</given-names></name> <name><surname>Ho</surname> <given-names>C. H.</given-names></name></person-group> (<year>2021</year>). <article-title>AI research and data protection: can the same rules apply for commercial and academic research under the GDPR?</article-title> <source>Comput. Law Security Rev.</source> <volume>41</volume>, <fpage>105532</fpage>. <pub-id pub-id-type="doi">10.1016/j.clsr.2021.105532</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Miller</surname> <given-names>L. F.</given-names></name></person-group> (<year>2020</year>). <article-title>Responsible research for the construction of maximally humanlike automata: the paradox of unattainable informed consent</article-title>. <source>Ethics Inf. Technol.</source> <volume>22</volume>, <fpage>297</fpage>&#x02013;<lpage>305</lpage>. <pub-id pub-id-type="doi">10.1007/s10676-017-9427-3</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mills</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <source>Artificial Intelligence in Law: The State of Play 2016</source>. <publisher-loc>Eagan, MN</publisher-loc>: <publisher-name>Thomson Reuters Legal Executive Institute</publisher-name>.</citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mintz</surname> <given-names>Y.</given-names></name> <name><surname>Brodie</surname> <given-names>R.</given-names></name></person-group> (<year>2019</year>). <article-title>Introduction to artificial intelligence in medicine</article-title>. <source>Minim Invasive Ther. Allied Technol.</source> <volume>28</volume>, <fpage>73</fpage>&#x02013;<lpage>81</lpage>. <pub-id pub-id-type="doi">10.1080/13645706.2019.1575882</pub-id><pub-id pub-id-type="pmid">30810430</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Moher</surname> <given-names>D.</given-names></name> <name><surname>Liberati</surname> <given-names>A.</given-names></name> <name><surname>Tetzlaff</surname> <given-names>J.</given-names></name> <name><surname>Altman</surname> <given-names>D. G.</given-names></name></person-group> (<year>2009</year>). <article-title>Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement</article-title>. <source>Ann. Intern. Med.</source> <volume>151</volume>, <fpage>264</fpage>&#x02013;<lpage>269</lpage>. <pub-id pub-id-type="doi">10.7326/0003-4819-151-4-200908180-00135</pub-id><pub-id pub-id-type="pmid">20171303</pub-id></citation></ref>
<ref id="B50">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>M&#x000FC;ller</surname> <given-names>V. C.</given-names></name></person-group> (<year>2021</year>). <source>Ethics of Artificial Intelligence and Robotics. The Stanford Encyclopedia of Philosophy</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/">https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/</ext-link></citation>
</ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Munn</surname> <given-names>Z.</given-names></name> <name><surname>Peters</surname> <given-names>M. D. J.</given-names></name> <name><surname>Stern</surname> <given-names>C.</given-names></name> <name><surname>Tufanaru</surname> <given-names>C.</given-names></name> <name><surname>McArthur</surname> <given-names>A.</given-names></name> <name><surname>Aromataris</surname> <given-names>E.</given-names></name></person-group> (<year>2018</year>). <article-title>Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach</article-title>. <source>BMC Med. Res. Methodol.</source> <volume>18</volume>, <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1186/s12874-018-0611-x</pub-id><pub-id pub-id-type="pmid">30453902</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nebeker</surname> <given-names>C.</given-names></name> <name><surname>Torous</surname> <given-names>J.</given-names></name> <name><surname>Bartlett Ellis</surname> <given-names>R. J.</given-names></name></person-group> (<year>2019</year>). <article-title>Building the case for actionable ethics in digital health research supported by artificial intelligence</article-title>. <source>BMC Med.</source> <volume>17</volume>, <fpage>137</fpage>. <pub-id pub-id-type="doi">10.1186/s12916-019-1377-7</pub-id><pub-id pub-id-type="pmid">31311535</pub-id></citation></ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nittas</surname> <given-names>V.</given-names></name> <name><surname>Daniore</surname> <given-names>P.</given-names></name> <name><surname>Landers</surname> <given-names>C.</given-names></name> <name><surname>Gille</surname> <given-names>F.</given-names></name> <name><surname>Amann</surname> <given-names>J.</given-names></name> <name><surname>Hubbs</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2023</year>). <article-title>Beyond high hopes: a scoping review of the 2019&#x02013;2021 scientific discourse on machine learning in medical imaging</article-title>. <source>PLOS Digital Health</source> <volume>2</volume>, <fpage>e0000189</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pdig.0000189</pub-id><pub-id pub-id-type="pmid">36812620</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>O&#x00027;Sullivan</surname> <given-names>S.</given-names></name> <name><surname>Nevejans</surname> <given-names>N.</given-names></name> <name><surname>Allen</surname> <given-names>C.</given-names></name> <name><surname>Blyth</surname> <given-names>A.</given-names></name> <name><surname>Leonard</surname> <given-names>S.</given-names></name> <name><surname>Pagallo</surname> <given-names>U.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery</article-title>. <source>Int. J. Med. Robot</source> <volume>15</volume>, <fpage>e1968</fpage>. <pub-id pub-id-type="doi">10.1002/rcs.1968</pub-id><pub-id pub-id-type="pmid">30397993</pub-id></citation></ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Page</surname> <given-names>S. A.</given-names></name> <name><surname>Nyeboer</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <article-title>Improving the process of research ethics review</article-title>. <source>Res. Integ. Peer Rev.</source> <volume>2</volume>, <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1186/s41073-017-0038-7</pub-id><pub-id pub-id-type="pmid">29451537</pub-id></citation></ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Prunkl</surname> <given-names>C. E. A.</given-names></name> <name><surname>Ashurst</surname> <given-names>C.</given-names></name> <name><surname>Anderljung</surname> <given-names>M.</given-names></name> <name><surname>Webb</surname> <given-names>H.</given-names></name> <name><surname>Leike</surname> <given-names>J.</given-names></name> <name><surname>Dafoe</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Institutionalizing ethics in AI through broader impact requirements</article-title>. <source>Nat. Mach. Intell.</source> <volume>3</volume>, <fpage>104</fpage>&#x02013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1038/s42256-021-00298-y</pub-id></citation>
</ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Samuel</surname> <given-names>G.</given-names></name> <name><surname>Derrick</surname> <given-names>G.</given-names></name></person-group> (<year>2020</year>). <article-title>Defining ethical standards for the application of digital tools to population health research</article-title>. <source>Bull World Health Organ.</source> <volume>98</volume>, <fpage>239</fpage>&#x02013;<lpage>244</lpage>. <pub-id pub-id-type="doi">10.2471/BLT.19.237370</pub-id><pub-id pub-id-type="pmid">32284646</pub-id></citation></ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Samuel</surname> <given-names>G.</given-names></name> <name><surname>Derrick</surname> <given-names>G.</given-names></name></person-group> (<year>2021</year>). <article-title>Boundaries between research ethics and ethical research use in artificial intelligence health research</article-title>. <source>J. Emp. Res. Hum. Res. Ethics</source> <volume>16</volume>, <fpage>325</fpage>&#x02013;<lpage>337</lpage>. <pub-id pub-id-type="doi">10.1177/15562646211002744</pub-id><pub-id pub-id-type="pmid">33733915</pub-id></citation></ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sedenberg</surname> <given-names>E.</given-names></name> <name><surname>Chuang</surname> <given-names>J.</given-names></name> <name><surname>Mulligan</surname> <given-names>D.</given-names></name></person-group> (<year>2016</year>). <article-title>Designing commercial therapeutic robots for privacy preserving systems and ethical research practices within the home</article-title>. <source>Int. J. Soc. Robotics</source> <volume>8</volume>, <fpage>575</fpage>&#x02013;<lpage>587</lpage>. <pub-id pub-id-type="doi">10.1007/s12369-016-0362-y</pub-id></citation>
</ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stahl</surname> <given-names>B. C.</given-names></name> <name><surname>Coeckelbergh</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Ethics of healthcare robotics: towards responsible research and innovation</article-title>. <source>Robot. Auton. Syst.</source> <volume>86</volume>, <fpage>152</fpage>&#x02013;<lpage>161</lpage>. <pub-id pub-id-type="doi">10.1016/j.robot.2016.08.018</pub-id></citation>
</ref>
<ref id="B61">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Stark</surname> <given-names>L.</given-names></name> <name><surname>Pylyshyn</surname> <given-names>Z.</given-names></name></person-group> (<year>2020</year>). <source>Intelligence Artificielle (IA) Au Canada</source>. <publisher-loc>Ottawa</publisher-loc>: <publisher-name>Encyclop&#x000E9;die Canadienne</publisher-name>.</citation>
</ref>
<ref id="B62">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Stone</surname> <given-names>P.</given-names></name> <name><surname>Brooks</surname> <given-names>R.</given-names></name> <name><surname>Brynjolfsson</surname> <given-names>E.</given-names></name> <name><surname>Calo</surname> <given-names>R.</given-names></name> <name><surname>Etzioni</surname> <given-names>O.</given-names></name> <etal/></person-group> (<year>2016</year>). <source>One Hundred Year Study on Artificial Intelligence (AI100)</source>. <publisher-loc>Stanford, CA</publisher-loc>: <publisher-name>Stanford University</publisher-name>.</citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sukums</surname> <given-names>F.</given-names></name> <name><surname>Deogratias Mzurikwao</surname> <given-names>D. S.</given-names></name> <name><surname>Rebecca Chaula</surname> <given-names>J. M.</given-names></name> <name><surname>Twaha Kabika</surname> <given-names>J. K.</given-names></name> <name><surname>Bernard Ngowi</surname> <given-names>J. N.</given-names></name> <name><surname>Andrea</surname> <given-names>S. W.</given-names></name> <etal/></person-group>. (<year>2023</year>). <article-title>The use of artificial intelligence-based innovations in the health sector in Tanzania: a scoping review</article-title>. <source>Health Policy Technol.</source> <volume>5</volume>, <fpage>100728</fpage>. <pub-id pub-id-type="doi">10.1016/j.hlpt.2023.100728</pub-id></citation>
</ref>
<ref id="B64">
<citation citation-type="web"><person-group person-group-type="author"><collab>The Royal Society and The Alan Turing Institute</collab></person-group> (<year>2019</year>). <source>The AI Revolution in Scientific Research</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://royalsociety.org/-/media/policy/projects/ai-and-society/AI-revolution-in-science.pdf?la=en-GB&#x00026;hash=5240F21B56364A00053538A0BC29FF5F">https://royalsociety.org/-/media/policy/projects/ai-and-society/AI-revolution-in-science.pdf?la=en-GB&#x00026;hash=5240F21B56364A00053538A0BC29FF5F</ext-link></citation>
</ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vollmer</surname> <given-names>S.</given-names></name> <name><surname>Mateen</surname> <given-names>B. A.</given-names></name> <name><surname>Bohner</surname> <given-names>G.</given-names></name> <name><surname>Kir&#x000E1;ly</surname> <given-names>F. J.</given-names></name> <name><surname>Ghani</surname> <given-names>R.</given-names></name> <name><surname>Jonsson</surname> <given-names>P.</given-names></name> <etal/></person-group> (<year>2020</year>). <article-title>Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness</article-title>. <source>BMJ</source> <volume>368</volume>, <fpage>l6927</fpage>. <pub-id pub-id-type="doi">10.1136/bmj.l6927</pub-id><pub-id pub-id-type="pmid">32238345</pub-id></citation></ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>Y.</given-names></name> <name><surname>Liu</surname> <given-names>X.</given-names></name> <name><surname>Cao</surname> <given-names>X.</given-names></name> <name><surname>Huang</surname> <given-names>C.</given-names></name> <name><surname>Liu</surname> <given-names>E.</given-names></name> <name><surname>Qian</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Artificial intelligence: a powerful paradigm for scientific research</article-title>. <source>The Innovation</source> <volume>2</volume>, <fpage>100179</fpage>. <pub-id pub-id-type="doi">10.1016/j.xinn.2021.100179</pub-id><pub-id pub-id-type="pmid">34877560</pub-id></citation></ref>
</ref-list> 
</back>
</article> 