<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Big Data</journal-id>
<journal-title>Frontiers in Big Data</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Big Data</abbrev-journal-title>
<issn pub-type="epub">2624-909X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fdata.2020.00025</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Big Data</subject>
<subj-group>
<subject>Mini Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Considerations for a More Ethical Approach to Data in AI: On Data Representation and Infrastructure</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Baird</surname> <given-names>Alice</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/727887/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Schuller</surname> <given-names>Bj&#x000F6;rn</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/419411/overview"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg</institution>, <addr-line>Augsburg</addr-line>, <country>Germany</country></aff>
<aff id="aff2"><sup>2</sup><institution>Group on Language, Audio &#x00026; Music, Imperial College London</institution>, <addr-line>London</addr-line>, <country>United Kingdom</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Fabrizio Riguzzi, University of Ferrara, Italy</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Stefania Costantini, University of L&#x00027;Aquila, Italy; Radu Prodan, Alpen-Adria-Universit&#x000E4;t Klagenfurt, Austria</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Alice Baird <email>alicebaird&#x00040;ieee.org</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Machine Learning and Artificial Intelligence, a section of the journal Frontiers in Big Data</p></fn></author-notes>
<pub-date pub-type="epub">
<day>02</day>
<month>09</month>
<year>2020</year>
</pub-date>
<pub-date pub-type="collection">
<year>2020</year>
</pub-date>
<volume>3</volume>
<elocation-id>25</elocation-id>
<history>
<date date-type="received">
<day>16</day>
<month>01</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>02</day>
<month>07</month>
<year>2020</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2020 Baird and Schuller.</copyright-statement>
<copyright-year>2020</copyright-year>
<copyright-holder>Baird and Schuller</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<abstract><p>Data shapes the development of Artificial Intelligence (AI) as we currently know it, and for many years centralized networking infrastructures have dominated both the sourcing and subsequent use of such data. Research suggests that centralized approaches result in poor representation, and as AI is now integrated more into daily life, there is a need for efforts to improve on this. The AI research community has begun to explore managing data infrastructures more democratically, finding that decentralized networking allows for more transparency, which can alleviate core ethical concerns such as selection-bias. With this in mind, herein, we present a mini-survey framed around data representation and data infrastructures in AI. We outline four key considerations (<italic>auditing, benchmarking, confidence and trust, explainability and interpretability</italic>) as they pertain to data-driven AI, and propose that reflection on them, along with improved interdisciplinary discussion, may aid the mitigation of data-based ethical concerns in AI, and ultimately improve individual wellbeing when interacting with AI.</p></abstract>
<kwd-group>
<kwd>artificial intelligence</kwd>
<kwd>machine learning</kwd>
<kwd>ethical AI</kwd>
<kwd>decentralization</kwd>
<kwd>selection-bias</kwd>
</kwd-group>
<counts>
<fig-count count="2"/>
<table-count count="1"/>
<equation-count count="0"/>
<ref-count count="131"/>
<page-count count="11"/>
<word-count count="9326"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Artificial intelligence (AI) in its current form relies heavily on large quantities of data (Yavuz, <xref ref-type="bibr" rid="B125">2019</xref>), and data-driven Deep Neural Networks (DNNs) have prompted fast-paced development of AI (Greene, <xref ref-type="bibr" rid="B42">2020</xref>). Currently, the research community is under great strain to keep up with the potential ethical concerns which arise as a result of this (Naughton, <xref ref-type="bibr" rid="B80">2019</xref>). Within the AI community, such ethical concerns can require quite some disentanglement (Allen et al., <xref ref-type="bibr" rid="B2">2006</xref>), and it is only recently that AI-based research groups have begun to provide public manifestos concerning the ethics of AI, e.g., Google&#x00027;s DeepMind, and the Partnership on AI.<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref></p>
<p>The <italic>Ethics of AI</italic> (Boddington, <xref ref-type="bibr" rid="B15">2017</xref>) is now an essential topic for researchers both internal and external to core machine learning, and differs from <italic>Machine Ethics</italic> (Baum et al., <xref ref-type="bibr" rid="B9">2018</xref>). The latter refers to giving conscious, ethics-based decision-making power to machines. The <italic>Ethics of AI</italic>, although somewhat informing <italic>Machine Ethics</italic>, refers more broadly to decisions made by researchers, and covers issues of diversity and representation, e.g., avoiding discrimination (Zliobaite, <xref ref-type="bibr" rid="B131">2015</xref>) or inherent latent biases (van Otterlo, <xref ref-type="bibr" rid="B114">2018</xref>). Herein, our discussion focuses on topics relating to the <italic>Ethics of AI</italic> unless otherwise stated.</p>
<p>Recent research shows promise for learning from smaller quantities (&#x0201C;merely a few minutes&#x0201D;) of data (Chen et al., <xref ref-type="bibr" rid="B20">2018</xref>). However, machine learning algorithms developed for AI commonly require substantial quantities of data (Schneider, <xref ref-type="bibr" rid="B97">2020</xref>). In this regard, <italic>Big Data</italic> ethics for AI algorithms are an expanding discussion point (Berendt et al., <xref ref-type="bibr" rid="B13">2015</xref>; Mittelstadt and Floridi, <xref ref-type="bibr" rid="B74">2016</xref>). Crowdsourcing (i.e., data gathered from large numbers of paid or unpaid individuals via the internet) is one approach to collect such quantities of data. However, ethical concerns, including worker exploitation (Schlagwein et al., <xref ref-type="bibr" rid="B96">2019</xref>), may have implications for the validity of the data. Additionally, researchers utilize <italic>in-the-wild</italic> internet sources, e.g., YouTube (Abu-El-Haija et al., <xref ref-type="bibr" rid="B1">2016</xref>) or Twitter (Beach, <xref ref-type="bibr" rid="B10">2019</xref>), and apply unsupervised labeling methods (Jan, <xref ref-type="bibr" rid="B50">2020</xref>). However, in Parikh et al. (<xref ref-type="bibr" rid="B84">2019</xref>), the authors describe how approaches for automated collection and labeling can result in the propagation of historical and social biases (Osoba and Welser IV, <xref ref-type="bibr" rid="B82">2017</xref>). In the health domain, such bias could have serious consequences, leading to misdiagnosis or incorrect treatment plans (Mehrabi et al., <xref ref-type="bibr" rid="B71">2019</xref>).</p>
<p>One method to avoid bias in AI is the acquisition of diverse data sources (Demchenko et al., <xref ref-type="bibr" rid="B27">2013</xref>), with <italic>Veracity</italic> (i.e., habitual truthfulness) being one of the <italic>5 Vs</italic> (i.e., Velocity, Volume, Value, Variety, and Veracity) which define truly Big Data (Khan et al., <xref ref-type="bibr" rid="B58">2019</xref>). However, big data is commonly stored in <italic>centralized</italic> infrastructures which limit transparency, and democratic, decentralized (i.e., peer-to-peer blockchain-based) approaches are becoming prevalent (Luo et al., <xref ref-type="bibr" rid="B67">2019</xref>).</p>
<p>Centralized data storage can be efficient and beneficial to the &#x0201C;central&#x0201D; body to which the infrastructure belongs. However, it is precisely this factor, amongst others (e.g., proprietary modeling of underrepresented data), that is problematic (Ferrer et al., <xref ref-type="bibr" rid="B34">2019</xref>).</p>
<p>Furthermore, centralized platforms limit the access and knowledge that data providers receive. The General Data Protection Regulation (GDPR) was established within the European Union (The-European-Commission, <xref ref-type="bibr" rid="B110">2019</xref>) partly to tackle this. GDPR is a set of regulations whose core goal is to protect the data of individuals utilized by third parties. In its current form, GDPR promotes a centralized approach, supporting what are known as <italic>commercial governance platforms</italic>. These platforms restrict employee access based on a data provider&#x00027;s request, but primarily function as a centralized repository. In essence, GDPR meant that companies needed to re-ask for data-consent more transparently. However, the &#x0201C;terms of agreement&#x0201D; certificate remains the basis, and 90% of users are known to ignore its detail (Deloitte, <xref ref-type="bibr" rid="B26">2016</xref>).</p>
<p>As a counter approach to the centralized storage of data, for some time researchers have proposed the need for <italic>decentralized</italic> networking (cf. <xref ref-type="fig" rid="F1">Figure 1</xref>), in which individual data is more easily protected (i.e., there is no &#x0201C;single point&#x0201D; of failure). In this infrastructure, individuals have more agency concerning the use of their data (Kahani and Beadle, <xref ref-type="bibr" rid="B55">1997</xref>). Primarily, individuals choose to access parts of a network rather than its entirety. On a large scale, this paradigm would reduce the known biases of centralized networks, as targeted collection, for example, would be less accessible to companies, and sources of the data more complex to identify. In this way, various encryption algorithms, including homomorphic encryption (a method which allows for data processing while encrypted) or data masking, are being integrated within decentralized networks, allowing for identity preservation (Setia et al., <xref ref-type="bibr" rid="B100">2019</xref>). Federated Learning (FL) (Hu et al., <xref ref-type="bibr" rid="B46">2019</xref>) is one approach which can be applied to decentralized networks to improve privacy (Marnau, <xref ref-type="bibr" rid="B68">2019</xref>). In FL, weights are passed from the host device and updated locally, so raw data never leaves a device (Yang et al., <xref ref-type="bibr" rid="B124">2019</xref>).</p>
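<p>To make the FL principle concrete, the following minimal sketch (in Python with NumPy; the model, function names, and simulated data are our own hypothetical choices, not a reference implementation) shows federated averaging: each device performs a local update on its private data, and a coordinator only ever aggregates weights.</p>
<preformat>
import numpy as np

def local_update(weights, data, labels, lr=0.01):
    """One local training step on a device; raw data never leaves it.
    For illustration: a single gradient step of linear regression."""
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(weight_list):
    """The coordinator sees only model weights, never raw data."""
    return np.mean(weight_list, axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros(5)
for _ in range(10):            # communication rounds
    updates = []
    for _ in range(3):         # three simulated devices, each with private data
        X = rng.normal(size=(20, 5))
        y = X @ np.ones(5) + rng.normal(scale=0.1, size=20)
        updates.append(local_update(global_w.copy(), X, y))
    global_w = federated_average(updates)
</preformat>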
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>A simplified overview of a typical centralized (left) and decentralized (right) network infrastructure. In the right figure, individuals choose the modality to share (as indicated by the circle, square, and triangle icons), and users in the network have agency in how their data is used. In the left figure, the AI is essentially a black-box, and users make all modalities of data available to all components of the AI infrastructure.</p></caption>
<graphic xlink:href="fdata-03-00025-g0001.tif"/>
</fig>
<p>With these topics in mind, in this contribution we aim to outline core ethical considerations which relate to data and the ethics of AI. Our focus remains on the ethics of data representation and data infrastructure, particularly <italic>selection-bias</italic> and <italic>decentralization</italic>. We chose these topics due to their common pairing in the literature: <italic>selection-bias</italic> is a regular talking-point in machine learning, and <italic>decentralization</italic> is a networking infrastructure which may help to observe such bias more transparently (Swan, <xref ref-type="bibr" rid="B109">2015</xref>; Montes and Goertzel, <xref ref-type="bibr" rid="B76">2019</xref>).</p>
<p>Our contribution is structured as follows: first, we briefly define key terminology used throughout the manuscript in section 2, followed by a brief background and overview of the core themes as they pertain to AI in section 3. We then introduce our ethical data considerations in section 4, providing specific definitions and general ethical concerns. Following this, in section 5, we connect these ethical considerations more closely with data representation and infrastructure, and in turn outline technical approaches which help reduce the aforementioned ethical concerns. We discuss future directions in section 6, and finally offer concluding remarks in section 7.</p>
</sec>
<sec id="s2">
<title>2. Terminology</title>
<p>There are a variety of core terms used throughout this manuscript which may have a dual meaning in the machine learning community. For this reason, we first define here the three core terms, <italic>ethics, bias</italic>, and <italic>decentralization</italic>, as used within our discussion.</p>
<p>As mentioned previously, we focus on the <italic>Ethics of AI</italic> rather than <italic>Machine Ethics</italic>. However, further to this, we use the term <italic>ethics</italic> based on guidelines within applied ethics, particularly in relation to machine understanding. In D&#x000F6;ring et al. (<xref ref-type="bibr" rid="B31">2011</xref>), the principles of <italic>beneficence, non-maleficence, autonomy</italic>, and <italic>justice</italic> are set out as fundamental considerations for those working in AI. Although this is particular to emotionally aware systems, we consider that such principles are relevant across AI research. Of particular relevance to this contribution is autonomy, i.e., a duty for systems to avoid interference, and to respect an individual&#x00027;s capacity for decision-making. This principle impacts upon both <italic>data representation</italic> and <italic>infrastructure</italic> choices (e.g., centralized or decentralized).</p>
<p>We consistently refer to the term <italic>bias</italic> throughout our contribution. The term was first introduced to machine learning by Mitchell (<xref ref-type="bibr" rid="B73">1980</xref>); unless otherwise stated, we discuss statistical biases, which may include absolute or relative biases. To be more specific, we focus closely on data in this contribution, and therefore dominantly refer to <italic>selection-bias</italic>. <italic>Selection-bias</italic> stems in part from prejudice-based biases (Stark, <xref ref-type="bibr" rid="B107">2015</xref>). However, <italic>selection-bias</italic> falls within statistical biases, as it is a consequence of conscious (hence prejudiced) or unconscious data selection. <italic>Selection-bias</italic> is particularly relevant to AI given that real randomization (or diverse representation) of data is not always possible.</p>
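<p>To make the statistical nature of <italic>selection-bias</italic> concrete, consider the following minimal sketch (Python with NumPy; the population and inclusion rule are purely illustrative assumptions): a non-random inclusion rule, whether conscious or not, skews the estimated statistic away from the true population value.</p>
<preformat>
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(loc=50.0, scale=10.0, size=100_000)

# Simple random sampling: an (approximately) unbiased estimate.
random_sample = rng.choice(population, size=1_000, replace=False)

# Selection-bias: a non-random inclusion rule that quietly favors
# high-valued individuals (e.g., an easier-to-reach demographic).
p_include = 1.0 / (1.0 + np.exp(-(population - 55.0) / 5.0))
biased_sample = population[rng.random(population.size) &#x0003C; p_include]

print(f"population mean: {population.mean():.2f}")
print(f"random sample:   {random_sample.mean():.2f}")  # close to 50
print(f"biased sample:   {biased_sample.mean():.2f}")  # shifted upward
</preformat>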
<p>As a critical aspect of our contribution, relating to the mitigation of bias through a more ethical approach to data infrastructure, we consistently refer to <italic>decentralized</italic> AI. A broad definition of <italic>decentralization</italic> is the distribution of power away from central authorities. In the context of AI, when discussing <italic>decentralization</italic>, we refer to decentralized architectures which allow for this type of distribution in regards to data sourcing, management, and analysis. We do touch on literature relating to blockchain, which is a well-known decentralized approach. However, the term is utilized here more generally and is not exclusive to blockchain.</p>
</sec>
<sec id="s3">
<title>3. Background: Bias and Decentralization in AI</title>
<p>Funding and global research efforts in the field of AI have increased in the last decade, particularly in the areas of health, transportation, and communication (Mou, <xref ref-type="bibr" rid="B79">2019</xref>). Along with this increase has come a rise in ethical demands related to Big Data (Herschel and Miori, <xref ref-type="bibr" rid="B45">2017</xref>). Although <italic>true</italic> Big Data is said to need <italic>Veracity</italic>, the reality is sometimes different, with large-scale data often showing particular biases toward clustered demographics (Price and Ball, <xref ref-type="bibr" rid="B85">2014</xref>). As a result, terms such as <italic>Machine Learning Fairness</italic>&#x02014;promoted initially by Google Inc.<xref ref-type="fn" rid="fn0002"><sup>2</sup></xref>&#x02014;are now regularly referred to in an endeavor to build <italic>trust</italic> and show ethical sensitivity (Mehrabi et al., <xref ref-type="bibr" rid="B71">2019</xref>). In this regard, IBM released their AI Explainability 360 Toolkit<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> in which the overarching goal appears to be improving <italic>trust</italic> in AI, through more deeply researching machine learning biases as they pertain to the research areas of fairness, robustness, and <italic>explainability</italic>.</p>
<p>Three common forms of bias are discussed concerning AI, i.e., interaction-bias, latent-bias, and <italic>selection-bias</italic>. <italic>Selection-bias</italic> occurs when the data used within a paradigm is selected with bias, leading to misrepresentation rather than generalization. In particular, researchers repeatedly find bias in regards to gender (Gao and Ai, <xref ref-type="bibr" rid="B38">2009</xref>). Wang et al. (<xref ref-type="bibr" rid="B117">2019a</xref>) found, for example, that models tend to have a bias toward a particular gender even when a dataset is balanced&#x02014;which could point to lower-level architecture-based biases (Koene, <xref ref-type="bibr" rid="B59">2017</xref>). <italic>Selection-bias</italic> is essential to combat when referring to models developed for human interaction: through data-based decision making, a bias can propagate through system architectures, leading to lower accuracy on a generalized population. Lack of generalization is particularly problematic for domains, such as health, where it may result in a breach of patient safety (Challen et al., <xref ref-type="bibr" rid="B18">2019</xref>).</p>
<p>Furthermore, the evaluation of <italic>fairness</italic> in machine learning is another prominent topic, highlighted as a machine learning consideration in Hutchinson and Mitchell (<xref ref-type="bibr" rid="B48">2019</xref>). Additionally, researchers propose <italic>fairness metrics</italic> for evaluating the bias which is inherent to a model (Friedler et al., <xref ref-type="bibr" rid="B35">2019</xref>), including Disparate Impact and the Demographic Parity Constraint (DPC). The DPC groups underprivileged classes and compares them to privileged classes as a single group. Similarly, there are novel architectures which mitigate bias through prioritization of minority samples, whose authors suggest an improvement in <italic>generalized fairness</italic> (Lohia et al., <xref ref-type="bibr" rid="B66">2019</xref>).</p>
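<p>As an illustration, Disparate Impact can be computed directly from model outcomes. The following sketch (Python; the decisions and group labels are hypothetical) compares the favorable-outcome rate of an underprivileged group against that of a privileged group; a common rule of thumb treats a ratio below roughly 0.8 as a potential flag for discrimination.</p>
<preformat>
import numpy as np

def disparate_impact(y_pred, unprivileged):
    """Ratio of favorable-outcome rates:
    P(y_pred = 1 | unprivileged) / P(y_pred = 1 | privileged).
    `unprivileged` is a boolean mask marking the underprivileged group."""
    rate_unpriv = y_pred[unprivileged].mean()
    rate_priv = y_pred[~unprivileged].mean()
    return rate_unpriv / rate_priv

# Hypothetical binary decisions (1 = favorable) for eight individuals.
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 1])
unprivileged = np.array([True, True, True, True,
                         False, False, False, False])

print(f"disparate impact: {disparate_impact(y_pred, unprivileged):.2f}")
# Here: 0.50 / 0.75 = 0.67, below the common 0.8 threshold.
</preformat>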
<p>A core contributing factor to bias in AI is the management of data. Current AI networking is based on centralized infrastructure (cf. <xref ref-type="fig" rid="F1">Figure 1</xref>), where individuals present a unified data source to a central server. This centralized approach not only limits privacy but also creates a homogeneous representation, which is less characteristic of the individuals interacting (Sueur et al., <xref ref-type="bibr" rid="B108">2012</xref>).</p>
<p><italic>Decentralization</italic> in AI was initially coined as a term to describe &#x0201C;autonomous agents in a multi-agent world&#x0201D; (M&#x000FC;ller, <xref ref-type="bibr" rid="B72">1990</xref>), and researchers have proposed <italic>decentralization</italic> for large AI architectures, e.g., integrating machine learning with a peer-to-peer style blockchain approach (Zheng et al., <xref ref-type="bibr" rid="B130">2018</xref>) to improve <italic>fairness</italic> and reduce <italic>bias</italic> (Barclay et al., <xref ref-type="bibr" rid="B8">2018</xref>). In this architecture, collaborative incentives are offered to the network users, and approaches allow for improved identity-representation, as well as more control in regards to data-usage, resulting in more freedom and higher privacy. Furthermore, a decentralized network may inherently be more ethical, as more individuals are interacting with and refining the network with agency (Montes and Goertzel, <xref ref-type="bibr" rid="B76">2019</xref>).</p>
<p>For individuals interfacing with AI, privacy is a concern (Montes and Goertzel, <xref ref-type="bibr" rid="B76">2019</xref>). Improving privacy is a core advantage of decentralized data approaches (Daneshgar et al., <xref ref-type="bibr" rid="B25">2019</xref>). In a centralized approach, anonymization processes exist (e.g., those enforced by GDPR), although it is unclear how consistently they are applied. To this end, the explicit identity of a participant may not be included in the data source, yet unique aspects of their character (e.g., how they pronounce a particular word) can still be easily identified (Regan and Jesse, <xref ref-type="bibr" rid="B89">2019</xref>).</p>
<p>There are multiple organizations and corporations which focus on the benefits of <italic>decentralization</italic>, including Effect.AI and SingularityNET.<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref> Such organizations promote benefits including &#x0201C;diverse ecosystems&#x0201D; and &#x0201C;knowledge sharing.&#x0201D; The Decentralized AI Alliance<xref ref-type="fn" rid="fn0005"><sup>5</sup></xref> is another organization which integrates AI and blockchain, promoting collaborative problem-solving. In general, the term <italic>decentralization</italic> comes not only from technical network logistics but also from philosophical &#x0201C;transhuman&#x0201D; ideologies (Smith, <xref ref-type="bibr" rid="B104">2019</xref>). In regards to the latter, <italic>decentralization</italic> promotes the improvement of human wellbeing through democratic interfacing with technology (Goertzel, <xref ref-type="bibr" rid="B39">2007</xref>). This democratic view is one aspect of <italic>decentralization</italic> that aids in the reduction of AI bias (Singh, <xref ref-type="bibr" rid="B103">2018</xref>).</p>
<p>Similarly, there are organizations which focus primarily on the challenge of bias in AI from many viewpoints, including race, gender, age, and disability<xref ref-type="fn" rid="fn0006"><sup>6</sup></xref>, most of which implement responsible research and innovation (RRI). When applying RRI to the AI community, the aim is to encourage researchers to anticipate and analyse the potential risks of their network, and to ensure that the development of AI is socially acceptable, needed, and sustainable (Stahl and Wright, <xref ref-type="bibr" rid="B105">2018</xref>). Biases are an essential aspect of AI RRI (Fussel, <xref ref-type="bibr" rid="B36">2017</xref>), as poor identity-representation has dire consequences for real-world models (Zliobaite, <xref ref-type="bibr" rid="B131">2015</xref>).</p>
</sec>
<sec id="s4">
<title>4. Methodology: Ethical Data Considerations</title>
<p>There is an array of concerns relating to the ethics of AI, including joblessness, inequality, security, and prejudice (Hagendorff, <xref ref-type="bibr" rid="B44">2019</xref>). With this in mind, academic and industry-based research groups are providing tools to tackle these ethical concerns (cf. <xref ref-type="table" rid="T1">Table 1</xref>), mainly based on four key areas. In this section, we introduce and conceptually discuss these four ethical considerations&#x02014;<italic>auditing, benchmarking, confidence and trust</italic>, and <italic>explainability and interpretability</italic>&#x02014;chosen due to their prominence within the AI community. Moreover, each of these four aspects has a pivotal impact on data representation, and an inherent relation to data infrastructures. An overview of a typical machine learning workflow, with these four considerations highlighted according to their position in time, is given in <xref ref-type="fig" rid="F2">Figure 2</xref>. To this end, herein, we first define our four considerations more concretely, followed by a description of the specific ethical concerns ([&#x000B1;]) which relate to them.</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Brief overview of prominent ethical AI tools which have been made available by both academic and industry research groups.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th valign="top" align="left"><bold>Tool</bold></th>
<th valign="top" align="center"><bold>A</bold></th>
<th valign="top" align="center"><bold>B</bold></th>
<th valign="top" align="center"><bold>E &#x00026; I</bold></th>
<th valign="top" align="center"><bold>C &#x00026; T</bold></th>
<th valign="top" align="left"><bold>Description</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Gender Shades (Buolamwini and Gebru, <xref ref-type="bibr" rid="B17">2018</xref>)</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="left">An <italic>intersectional</italic> approach to inclusive product testing for AI, relating specifically to gender and race bias.</td>
</tr> <tr style="border-top: thin solid #000000;">
<td valign="top" align="left">What-If Tool (Google, <xref ref-type="bibr" rid="B41">2020</xref>)</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="left">Allows users to analyse their machine learning model through the use of an interactive visual interface.</td>
</tr> <tr style="border-top: thin solid #000000;">
<td valign="top" align="left">IBM: AI Explainability 360 Toolkit (Arya et al., <xref ref-type="bibr" rid="B4">2020</xref>)</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="left">Contains state-of-the-art algorithms that allow for improved interpretability and explainability of machine learning models.</td>
</tr> <tr style="border-top: thin solid #000000;">
<td valign="top" align="left">IBM: AI Fairness 360 Open Source Toolkit (Bellamy et al., <xref ref-type="bibr" rid="B12">2019</xref>)</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">X</td>
<td valign="top" align="left">Provides a series of metrics for datasets and models to test for biases explicitly, including a clear explanations for those metrics.</td>
</tr> <tr style="border-top: thin solid #000000;">
<td valign="top" align="left">LIME (Ribeiro et al., <xref ref-type="bibr" rid="B90">2016</xref>)</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">X</td>
<td valign="top" align="left">A general eXplainable-AI toolkit which allows users to reason better for why a model makes certain predictions.</td>
</tr> <tr style="border-top: thin solid #000000;">
<td valign="top" align="left">openAI: baseline, Gym, Microscope (Brockman et al., <xref ref-type="bibr" rid="B16">2016</xref>)</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="left">Provides reproducible reinforcement learning algorithms with benchmarked performances based on published results. As well as visualization methods for observing significant layers and neuron activations.</td>
</tr> <tr style="border-top: thin solid #000000;">
<td valign="top" align="left">Procgen: Benchmark (Cobbe et al., <xref ref-type="bibr" rid="B21">2019</xref>)</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="left">Procedurally-generated environments which provide a benchmark for the speed of a reinforcement learning algorithms generalization.</td>
</tr> <tr style="border-top: thin solid #000000;">
<td valign="top" align="left">PwC: Responsible AI Toolkit (Waterhouse Cooper, <xref ref-type="bibr" rid="B122">2019</xref>)</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">X</td>
<td valign="top" align="left">A collection of customizable frameworks to harness AI in an ethical and responsible manner.</td>
</tr> <tr style="border-top: thin solid #000000;">
<td valign="top" align="left">Pymetrics: Audit AI (Trindel et al., <xref ref-type="bibr" rid="B113">2019</xref>)</td>
<td valign="top" align="center">X</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="center">&#x02013;</td>
<td valign="top" align="left">Contains tools to measure and mitigate the effects of discriminatory patterns, designed specifically for socially sensitive decision processes.</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<p><italic>We highlight their target ethical consideration, namely (A)uditing, (B)enchmarking, (E)xplainability and (I)nterpretability, (C)onfidence and (T)rust</italic>.</p>
</table-wrap-foot>
</table-wrap>
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>An overview of a machine learning workflow: (1) data collection and pre-processing, (2) development of machine learning models, (3) evaluation of model outcomes (i.e., performance), (4) integration of the developed AI in a real-world scenario. We place the four considerations introduced in section 4 across time. Positions of individual considerations are not static; we define their placement over time based primarily on their relationship to one another.</p></caption>
<graphic xlink:href="fdata-03-00025-g0002.tif"/>
</fig>
<sec>
<title>4.1. Auditing</title>
<p>In the context of AI data, <italic>auditing</italic> is not dissimilar to that in research domains such as economics: an auditor regularly checks aspects of the system, including the validity of the data itself. For example, Fern&#x000E1;ndez and Fern&#x000E1;ndez (<xref ref-type="bibr" rid="B33">2019</xref>) propose an AI-based recruiting system in which the candidate&#x00027;s data is validated by a manual (i.e., human) auditor. In <xref ref-type="fig" rid="F2">Figure 2</xref> we have assigned <italic>auditing</italic> to every aspect of the AI workflow, although it is commonly only integrated during the earlier development stages.</p>
<p>[&#x000B1;] <italic>Auditing</italic> is integral as acquisition scales up to <italic>Big Data</italic>. The process of managing what Schembera and Dur&#x000E1;n (<xref ref-type="bibr" rid="B95">2020</xref>) describe as &#x0201C;tangible data&#x0201D; can be extremely time-consuming and costly for those involved, and human or machine error can propagate, resulting in biases or leading to mostly unusable data (L&#x00027;heureux et al., <xref ref-type="bibr" rid="B63">2017</xref>). On the other side is the <italic>auditing</italic> of &#x0201C;dark data.&#x0201D; This data type is estimated to make up 90% (Johnson, <xref ref-type="bibr" rid="B52">2015</xref>) of all stored data, and is largely unknown to the user. The literature currently focuses on <italic>auditing</italic> tangible data; as yet, there is less attention on dark data (Trajanov et al., <xref ref-type="bibr" rid="B112">2018</xref>).</p>
</sec>
<sec>
<title>4.2. Benchmarking</title>
<p>In machine learning, <italic>benchmarking</italic> is the process of evaluating novel approaches against well-established approaches or databases for the same task. To this end, it often comes at a later stage during the AI workflow (cf. <xref ref-type="fig" rid="F2">Figure 2</xref>). In the computer vision domain, this has been particularly successful in pushing forward developments (Westphal et al., <xref ref-type="bibr" rid="B123">2019</xref>), with data sets such as MNIST (LeCun and Cortes, <xref ref-type="bibr" rid="B61">2010</xref>) or CIFAR-10 (Krizhevsky et al., <xref ref-type="bibr" rid="B60">2009</xref>) continuously benchmarked against in both academic and industry settings. Pre-trained networks are another <italic>benchmarking</italic> tool. Networks pre-trained on large datasets such as ImageNet (Simon et al., <xref ref-type="bibr" rid="B102">2016</xref>) are well-known and consistently applied, given the quantity of data and promising results (Wang et al., <xref ref-type="bibr" rid="B121">2019d</xref>).</p>
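<p>A typical benchmarking workflow against such a dataset might look as follows (a minimal sketch using scikit-learn&#x00027;s bundled digits data as a stand-in for MNIST; the model choice is illustrative, and the reported accuracy is only meaningful relative to published baselines on an identical split and metric).</p>
<preformat>
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# An MNIST-style benchmark in miniature: a fixed split and a fixed
# metric, so results are directly comparable across methods.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"benchmark accuracy: {acc:.3f}")
</preformat>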
<p>[&#x000B1;] Multimodal analysis is becoming more ubiquitous in machine learning (Stappen et al., <xref ref-type="bibr" rid="B106">2020</xref>), due to well-known and longstanding advantages (Johnston et al., <xref ref-type="bibr" rid="B53">1997</xref>). When datasets are multimodal, accurately <italic>benchmarking</italic> improvements becomes complex (Liu et al., <xref ref-type="bibr" rid="B64">2017</xref>), and aspects such as modality mismatches are common (Zhang and Hua, <xref ref-type="bibr" rid="B128">2015</xref>). Additionally, given the rapid developments in machine learning approaches, outdated methods may be held as benchmarks for longer than is scientifically meaningful.</p>
</sec>
<sec>
<title>4.3. Confidence and Trust</title>
<p>In AI data, the terms <italic>confidence and trust</italic> are applied to ensure reliability, i.e., having <italic>confidence</italic> in the data results in deeper <italic>trust</italic> (Arnold et al., <xref ref-type="bibr" rid="B3">2019</xref>). In this context, <italic>trust</italic> is a qualitative term, and although <italic>confidence</italic> can also take on qualitative interpretations relating to enhanced moral understanding (Blass, <xref ref-type="bibr" rid="B14">2018</xref>), the term <italic>confidence</italic> typically refers to a quantifiable measure on which to base <italic>trust</italic> (Zhang et al., <xref ref-type="bibr" rid="B129">2001</xref>; Keren et al., <xref ref-type="bibr" rid="B57">2018</xref>).</p>
<p>[&#x000B1;] Not providing an overall <italic>confidence</italic> for resulting predictions can pose a substantial risk to the user (Ikuta et al., <xref ref-type="bibr" rid="B49">2003</xref>); i.e., if a trained network has an inherent bias, a <italic>confidence</italic> measure can improve the transparency of this. Furthermore, to increase <italic>trust</italic> in AI, developers are attempting to replicate human-like characteristics, e.g., how robots walk (Nikolova et al., <xref ref-type="bibr" rid="B81">2018</xref>). Adequately reproducing such characteristics requires substantial data sources from refined demographics. This concern falls primarily into <italic>Machine Ethics</italic>, with both the need for binary gender identifications (Baird et al., <xref ref-type="bibr" rid="B7">2017</xref>) and the societal effect of doing so having been challenged (J&#x000F8;rgensen et al., <xref ref-type="bibr" rid="B54">2018</xref>).</p>
</sec>
<sec>
<title>4.4. Explainability and Interpretability</title>
<p>Often referred to as XAI (eXplainable AI), and arguably at the core of the ethical debate in the field of AI, are <italic>explainability</italic> and <italic>interpretability</italic>. Both terms denote the need to understand algorithms&#x00027; decision making (Molnar, <xref ref-type="bibr" rid="B75">2019</xref>; Tjoa and Guan, <xref ref-type="bibr" rid="B111">2019</xref>). However, a distinction can be made: <italic>interpretability</italic> covers methods for better understanding a machine learning architecture or data source (i.e., the <italic>how</italic>), while <italic>explainability</italic> covers methods for understanding <italic>why</italic> particular decisions were made.</p>
<p>[&#x000B1;] A surge in machine learning research has come from international challenges (Schuller et al., <xref ref-type="bibr" rid="B98">2013</xref>; Ringeval et al., <xref ref-type="bibr" rid="B91">2019</xref>)&#x02014;driving improvements in accuracy across multiple machine learning domains (Meer et al., <xref ref-type="bibr" rid="B70">2000</xref>). However, this fast-paced environment often leaves less time for interpreting how particular features may have explicitly impacted a result, or for explaining a model&#x00027;s decision-making process. Without this, the meaning of any result is harder to substantiate (Vellido et al., <xref ref-type="bibr" rid="B115">2012</xref>).</p>
</sec>
</sec>
<sec id="s5">
<title>5. Discussion: Representation and Infrastructure</title>
<p>Having defined our four key considerations more concretely, we now discuss them more closely with the representation (w.r.t. bias) and infrastructure of AI data in mind. Where meaningful, we highlight technical approaches which are implemented to reduce the aforementioned ethical concerns.</p>
<sec>
<title>5.1. Auditing</title>
<p>There are many methods being developed to make collecting and annotating data automatically possible, including <italic>data mining</italic> of web-based images (Zafar et al., <xref ref-type="bibr" rid="B126">2019</xref>), and <italic>active learning</italic> (AL) for semi-automatic labeling (Wang et al., <xref ref-type="bibr" rid="B119">2019c</xref>). For data tagging by autonomous agents, some have raised concerns that making agents responsible for this may lead to incorrect tagging caused by an initial human error, a concern which becomes more problematic given the now large quantities of child viewers, who may be <italic>suggested</italic> inappropriate content (Papadamou et al., <xref ref-type="bibr" rid="B83">2019</xref>). Further to this, when annotating data, one ethical issue which can propagate <italic>selection-bias</italic> is a poor balance of manual vs. automatic annotations; in other words, if automatic annotation procedures learn false aspects early on, these may then be replicated (Rothwell et al., <xref ref-type="bibr" rid="B92">2015</xref>). In an AL paradigm (Ayache and Qu&#x000E9;not, <xref ref-type="bibr" rid="B5">2008</xref>), an <italic>oracle</italic> (i.e., an expert auditor) is kept in the loop, and where the AL model is uncertain at a particular level of <italic>confidence</italic>, the oracle must provide the label (Settles et al., <xref ref-type="bibr" rid="B101">2008</xref>). In the case of specialist domains, such as bird sound classification, having such an expert is crucial, as variances in the audio signal can be quite slight (Qian et al., <xref ref-type="bibr" rid="B86">2017</xref>).</p>
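<p>A minimal sketch of one such oracle-in-the-loop labeling round is given below (Python; the classifier, pool, and confidence threshold are hypothetical choices): predictions above a confidence threshold are accepted automatically, while uncertain samples are routed to the human oracle, so that expert time is spent only where the model is unsure.</p>
<preformat>
import numpy as np

def active_labeling_round(model, X_pool, oracle, threshold=0.9):
    """Label an unlabeled pool: accept confident model predictions,
    and route uncertain samples to the human expert (the oracle).
    `model` is any classifier exposing predict_proba (e.g., scikit-learn);
    `oracle` is a callable returning the expert's label for one sample."""
    proba = model.predict_proba(X_pool)
    confidence = proba.max(axis=1)
    labels = np.empty(len(X_pool), dtype=int)
    for i, conf in enumerate(confidence):
        if conf >= threshold:
            labels[i] = proba[i].argmax()   # trust the model
        else:
            labels[i] = oracle(X_pool[i])   # ask the expert
    return labels
</preformat>
<p>In practice, the newly oracle-labeled samples would be added to the training set and the model retrained, shrinking the uncertain region over successive rounds.</p>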
<p>Within a larger <italic>decentralized</italic> network, utilizing auditors allows for a democratic style of data management. Blockchain AI networks, for example, run in a peer-to-peer (P2P) fashion, meaning that no changes can be made to the system without the agreement of all others in the network. In a P2P network, there is an incentive for individual participation in the <italic>auditing</italic> process (e.g., an improved overall experience) (Dinh and Thai, <xref ref-type="bibr" rid="B30">2018</xref>). However, the realization of <italic>auditing</italic> in AI does lead to some technical challenges in regards to public verification of sensitive data (Diakopoulos and Friedler, <xref ref-type="bibr" rid="B29">2017</xref>), as well as meaning that AI only partially reduces the human time-cost. Nevertheless, the need for <italic>auditing</italic> in AI has been highlighted consistently in the literature as a bias-mitigating approach (Saleiro et al., <xref ref-type="bibr" rid="B93">2018</xref>).</p>
</sec>
<sec>
<title>5.2. Benchmarking</title>
<p>It has been noted in many domains of research that <italic>benchmarking</italic>, and therefore generalizing, against a well-established organization may result in the continued propagation of poor standards concerning historical biases (Denrell, <xref ref-type="bibr" rid="B28">2005</xref>). Survey-based evaluations of state-of-the-art modalities and baseline results are one resource to help mitigate this issue (Liu et al., <xref ref-type="bibr" rid="B65">2011</xref>; Cummins et al., <xref ref-type="bibr" rid="B23">2018</xref>). However, benchmarks should be updated constantly, in both the techniques for acquisition and the methods for setting baselines. Although there is no rule of thumb in this case, it is generally accepted in machine learning that <italic>benchmarking</italic> against resources that are no longer considered state-of-the-art will not bring valid results. Furthermore, in the realm of human data, and specifically within the European Union, there is often a limited time for which data can be stored (The-European-Commission, <xref ref-type="bibr" rid="B110">2019</xref>). In this way, not only will benchmarked data sets become outdated in terms of techniques, but it may also be unethical to utilize such data, as reproducibility may not be possible.</p>
<p>Of note, a considerable contribution to ethics-based <italic>benchmarking</italic> is the aforementioned open-source IBM AI Explainability 360 Toolkit, of which one aspect is the Adversarial Robustness 360 Toolbox. This toolbox provides state-of-the-art paradigms for adversarial attacks (i.e., subtle alterations to data), and allows researchers to benchmark their approaches in a controlled environment, allowing for easier <italic>interpretation</italic> of possible network issues.</p>
</sec>
<sec>
<title>5.3. Confidence and Trust</title>
<p>Given the general fear that members of the public have of AI&#x02014;mostly attributed to false depictions in movies and literature&#x02014;improving <italic>confidence and trust</italic> in AI is now at the forefront for many corporations. To this end, researchers and corporations continually introduce state-of-the-art aids for tackling famous AI problems, such as the IBM AI Fairness 360 Toolkit. As well as this, to improve <italic>trust</italic>, groups such as &#x0201C;IBM Building Trust in AI&#x0201D;<xref ref-type="fn" rid="fn0007"><sup>7</sup></xref> make this their specific focus. In this particular group, developing human-like aspects is given priority, as research has shown that humans <italic>trust</italic> the general capability of more human-like representations over purely mechanical ones (Charalambous et al., <xref ref-type="bibr" rid="B19">2016</xref>). However, the well-known uncanny valley (which refers to familiarity and likeability concerning human-likeness) suggests that data-driven representations requiring <italic>trust</italic> should be very near human-like (Mori et al., <xref ref-type="bibr" rid="B78">2012</xref>), and acting on this may result in biased binary representations, which may be problematic in terms of identity politics (J&#x000F8;rgensen et al., <xref ref-type="bibr" rid="B54">2018</xref>).</p>
<p>Another effort in improving <italic>trust</italic> comes from blockchain. Blockchain is a specific <italic>decentralized</italic> approach known as a distributed digital ledger, in which transactions can only be altered with the specific agreement of subsequent (connected) blocks (Zheng et al., <xref ref-type="bibr" rid="B130">2018</xref>). Blockchain is said to offer deeper <italic>trust</italic> for a user within a network, due to the specific need for collaboration (Mathews et al., <xref ref-type="bibr" rid="B69">2017</xref>). This approach offers further accountability, as decisions or alterations are agreed upon by those within the network. More specifically, <italic>trust</italic> is established through what are known as consensus algorithms (Lee, <xref ref-type="bibr" rid="B62">2002</xref>).</p>
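<p>The tamper-evidence underlying this trust can be illustrated with a minimal hash-chain sketch (Python; a real ledger adds consensus, signatures, and networking on top of this, so the sketch shows the linkage only): because each block stores the hash of its predecessor, altering any past transaction invalidates every subsequent block unless the rest of the network agrees to recompute the chain.</p>
<preformat>
import hashlib
import json

def block_hash(block):
    """Deterministic hash over a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    """Any retroactive edit breaks the linkage for all later blocks."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, ["alice pays bob 5"])
append_block(chain, ["bob pays carol 2"])
print(verify(chain))                                # True
chain[0]["transactions"] = ["alice pays bob 500"]   # tamper with history
print(verify(chain))                                # False
</preformat>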
<p>As mentioned, one quantifiable measure on which to build <italic>trust</italic> is the <italic>confidence measure</italic>, sometimes referred to as an <italic>uncertainty measure</italic>, e.g., as applied in a semi-automated labeling paradigm. A <italic>confidence</italic> measure evaluates the accuracy of a model&#x00027;s predictions against a ground truth or set of weights, and provides a metric of <italic>confidence</italic> in the resulting prediction (Jha et al., <xref ref-type="bibr" rid="B51">2019</xref>). Herein, we follow this definition of <italic>confidence</italic> as a measure, i.e., of how accurate the current system prediction is, as a means of understanding any risk (Duncan, <xref ref-type="bibr" rid="B32">2015</xref>). This definition allows researchers to have a margin of error, which can be crucial in the health domain to avoid false-positives (Bechar et al., <xref ref-type="bibr" rid="B11">2017</xref>).</p>
<p>Given the &#x0201C;black-box&#x0201D; nature of deep learning, there have been numerous approaches to quantifying <italic>confidence</italic> (Kendall and Cipolla, <xref ref-type="bibr" rid="B56">2016</xref>; Keren et al., <xref ref-type="bibr" rid="B57">2018</xref>). One popular procedure for measuring <italic>confidence</italic> is <italic>Monte Carlo dropout</italic>. In this approach, several forward passes are made, each time &#x0201C;dropping&#x0201D; a portion of the network, and <italic>confidence</italic> or uncertainty is calculated from the variance of the resulting predictions (Gal and Ghahramani, <xref ref-type="bibr" rid="B37">2016</xref>).</p>
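<p>A brief sketch of this procedure follows (PyTorch-style Python; the toy network and number of passes are illustrative assumptions): dropout is deliberately kept active at inference time, and the spread of predictions across stochastic forward passes serves as the uncertainty estimate.</p>
<preformat>
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_passes=50):
    """Monte Carlo dropout: keep dropout active at test time and use the
    variance across stochastic forward passes as an uncertainty estimate."""
    model.train()  # enables dropout layers (unlike model.eval())
    preds = torch.stack([model(x) for _ in range(n_passes)])
    return preds.mean(dim=0), preds.var(dim=0)

# Hypothetical model: any network containing nn.Dropout layers.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.5),
    torch.nn.Linear(32, 1))

x = torch.randn(4, 10)
mean, variance = mc_dropout_predict(model, x)
# High variance flags inputs the network is uncertain about.
</preformat>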
<p>As an additional note, <italic>data-reliability</italic> is a term often referred to in regards to both <italic>confidence and trust</italic>. Typically, this is the process of statistically representing the significance of any findings from the database in a well-established scientific fashion, particularly considering the context of the domain it is targeted toward (Morgan and Waring, <xref ref-type="bibr" rid="B77">2004</xref>). Statistical tests such as the <italic>p</italic>-value, which is used across research domains including machine learning, remain controversial. A <italic>p</italic>-value states the strength (significance) of the evidence provided, and suffers from the &#x0201C;dancing <italic>p</italic>-value&#x0201D; phenomenon (Cumming, <xref ref-type="bibr" rid="B22">2013</xref>). This phenomenon shows that, in a more real-world setting, the <italic>p</italic>-value can range (within the same experimental settings) from &#x0003C;0.001 to 0.5, i.e., from very significant to not significant at all. Given this limitation, a researcher may present a biased experiment in an endeavor to report a significant result. This limitation of the <italic>p</italic>-value, amongst those of other statistical tests, has gained criticism in recent years, due to extensive misuse by the machine learning community (Vidgen and Yasseri, <xref ref-type="bibr" rid="B116">2016</xref>).</p>
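<p>The phenomenon is straightforward to reproduce, as in the short sketch below (Python with SciPy; the effect size and sample size are illustrative assumptions): repeating an identical experiment with fresh samples yields <italic>p</italic>-values that swing between highly significant and not significant.</p>
<preformat>
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Identical experimental setting, repeated: two groups with a small,
# genuine difference in means (effect size 0.5, n = 32 per group).
for run in range(10):
    a = rng.normal(loc=0.0, size=32)
    b = rng.normal(loc=0.5, size=32)
    _, p = stats.ttest_ind(a, b)
    print(f"run {run}: p = {p:.4f}")
# Across runs, p typically swings from well below 0.01 to above 0.1,
# despite nothing about the experiment changing.
</preformat>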
</sec>
<sec>
<title>5.4. Explainability and Interpretability</title>
<p>Researchers continue to work toward more accurately understanding the decisions made by deep networks (Husz&#x000E1;r, <xref ref-type="bibr" rid="B47">2015</xref>; Rai, <xref ref-type="bibr" rid="B88">2020</xref>). Machine learning models must be interpretable and offer a clear use-case. At the core of this, data itself in such systems should also be explainable, i.e., data acquisition should be designed with plausible goals. Machine learning is a pattern recognition task, and due to this, visualization of data is one way to help detail both the <italic>interpretability</italic> and <italic>explainability</italic> of a system, by (1) better understanding the feature space, and (2) better understanding possible choices. In regards to bias in AI, visualization of data-points allows any class dominance to be observed more easily. Clustering is a particular pre-processing step applied in <italic>Big Data</italic>-based deep learning (Samek et al., <xref ref-type="bibr" rid="B94">2017</xref>). Popular algorithms which apply this type of visualization include <italic>t</italic>-distributed stochastic neighbor embedding (<italic>t</italic>-SNE) (Zeiler and Fergus, <xref ref-type="bibr" rid="B127">2014</xref>) and Laplacian Eigenmaps (Sch&#x000FC;tt et al., <xref ref-type="bibr" rid="B99">2019</xref>). More recently, there has been a surge in approaches for visualizing attention over data points (Guo et al., <xref ref-type="bibr" rid="B43">2019</xref>). These approaches are particularly promising as they visually show the areas of activation which are learnt most consistently for each class by a network (Wang et al., <xref ref-type="bibr" rid="B118">2019b</xref>), therefore highlighting areas of bias more easily, and improving communication to those outside the field.</p>
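<p>As an example of this kind of inspection, the short sketch below (Python with scikit-learn and Matplotlib; the dataset and coloring are illustrative) projects a feature space into two dimensions with <italic>t</italic>-SNE, so that class dominance, or clusters tied to a sensitive attribute, can be observed directly.</p>
<preformat>
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

# Project the 64-dimensional feature space onto 2-D for inspection.
embedding = TSNE(n_components=2, random_state=0).fit_transform(X)

# Color by class (or by a sensitive attribute, such as gender) to make
# any dominance or clustering of particular groups visible at a glance.
plt.scatter(embedding[:, 0], embedding[:, 1], c=y, s=5, cmap="tab10")
plt.colorbar(label="class")
plt.show()
</preformat>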
<p>To this end, <italic>decentralization</italic> with integrated blockchain is one approach which has been noted as improving <italic>interpretability</italic>, mainly as data is often publicly accessible (Dinh and Thai, <xref ref-type="bibr" rid="B30">2018</xref>). For example, where bias begins to form, the diversity of modalities and the ease of identification mean that individual blocks can be excluded entirely from a network to achieve a more accurate representation (Dai et al., <xref ref-type="bibr" rid="B24">2019</xref>).</p>
</sec>
</sec>
<sec id="s6">
<title>6. Future Directions</title>
<p>Due in part to the ethics-based commitments by some of the larger AI companies, we see from this review that there is momentum toward a more ethical AI future. However, <bold>interdisciplinarity</bold> in AI research is one aspect which requires more attention. To the best of the authors&#x00027; knowledge, most public forums (particularly those based on a centralized infrastructure) come from a mono-domain viewpoint (e.g., engineering). Incorporating multiple disciplines in the discussion appears to be more prominent with those promoting <italic>decentralized</italic> AI.</p>
<p>Interdisciplinarity will not only improve implementation of the four ethical considerations described herein, but has also been shown to be a necessary step toward the next AI phase of Artificial General Intelligence (AGI), proposed by the decentralized community (Goertzel and Pennachin, <xref ref-type="bibr" rid="B40">2007</xref>). Interdisciplinarity is of particular value as infrastructures developed in this way more easily tackle ethical concerns relating to: (i) integration, (ii) <italic>selection-bias</italic>, and (iii) <italic>trust</italic>.</p>
<p>Seamless <bold>integration</bold> of AI is necessary for its success and adoption by the general public. Aspects including cultural and environmental impact need to be considered, and various experts should provide knowledge on the target area. For example, the synthesized voice of bus announcements not representing the community to which it speaks may have a negative impact on that community, and a closer analysis of the voice that best represents the community would be more ethically considerate. In this way, working alongside linguists and sociologists may aid development.</p>
<p>Similarly, from our literature overview, we observe that knowledge of <bold>selection-bias</bold> often requires contributions from experts with non-technical backgrounds, and an approach for facilitating discussion between fields of research would be a valuable next step. For example, within the machine learning community, techniques such as <italic>few-shot learning</italic> have received more attention in recent years (Wang and Yao, <xref ref-type="bibr" rid="B120">2019</xref>); however, perceptual-based biases pose difficulties for such approaches (Azad et al., <xref ref-type="bibr" rid="B6">2020</xref>), and discussion with experts of the targeted domains may help in understanding the bias at an earlier stage. Despite this, communication between fields speaking different &#x0201C;languages&#x0201D; (e.g., anthropology and engineering) is a challenge in itself, which should be addressed by the community. Furthermore, due to historical stereotypes, AI continues to lack <bold>trust</bold> from the general user. Users without an understanding of the vocabulary of the field may not be able to grasp the concept of such networks. Through better collaboration with various academic researchers, communicating AI to the general public may also see an improvement, which in turn will help to build trust and improve the wellbeing of the user during AI interaction.</p>
</sec>
<sec sec-type="conclusions" id="s7">
<title>7. Conclusion</title>
<p>The themes of data representation and infrastructure, as they pertain to <italic>selection-bias</italic> and <italic>decentralization</italic> in AI algorithms, have been discussed throughout this contribution. Within these discussion points, we have highlighted four key considerations&#x02014;<italic>auditing, benchmarking, confidence and trust</italic>, and <italic>explainability and interpretability</italic>&#x02014;to be taken into account when handling AI data more ethically.</p>
<p>From our observations, we conclude that, for all four considerations, issues which may stem from multimodal approaches should be treated cautiously. In other words, relating to <italic>auditing</italic>, there should be standards monitored for each modality, as this carries through into the ability to <italic>benchmark</italic> accurately. In the same way, although the literature may dispute this, <italic>confidence and trust</italic> come from diverse representations of human data, which in turn are more <italic>explainable</italic> to the general public due to their inherently human-like attributes.</p>
<p>With this in mind, we see that efforts are being made toward fully audited, benchmarkable, confident, trustworthy, explainable, and interpretable machine learning approaches. However, standardization for the inclusion of all of these aspects is still needed. Furthermore, with the inclusion of multiple members who take equal responsibility, <italic>decentralization</italic> may enable the ethical aspects highlighted herein. We see that through social media (which is in some sense a decentralized network for communication) group morality is developed: opinions of a political nature, for example, are highlighted, and any prejudice or general wrongdoing is often shunned, which can have an enormous impact on business (Radzik et al., <xref ref-type="bibr" rid="B87">2020</xref>). In this way, a more transparent and open platform makes masking potential network biases a challenge.</p>
</sec>
<sec id="s8">
<title>Author Contributions</title>
<p>AB: literature analysis, manuscript preparation, editing, and drafting manuscript. BS: drafting manuscript and manuscript editing. All authors revised, developed, read, and approved the final manuscript.</p>
</sec>
<sec id="s9">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abu-El-Haija</surname> <given-names>S.</given-names></name> <name><surname>Kothari</surname> <given-names>N.</given-names></name> <name><surname>Lee</surname> <given-names>J.</given-names></name> <name><surname>Natsev</surname> <given-names>A. P.</given-names></name> <name><surname>Toderici</surname> <given-names>G.</given-names></name> <name><surname>Varadarajan</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Youtube-8m: a large-scale video classification benchmark</article-title>. <source>arXiv</source> 1609.08675.</citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Allen</surname> <given-names>C.</given-names></name> <name><surname>Wallach</surname> <given-names>W.</given-names></name> <name><surname>Smit</surname> <given-names>I.</given-names></name></person-group> (<year>2006</year>). <article-title>Why machine ethics?</article-title> <source>IEEE Intell. Syst</source>. <volume>21</volume>, <fpage>12</fpage>&#x02013;<lpage>17</lpage>. <pub-id pub-id-type="doi">10.1109/MIS.2006.83</pub-id></citation></ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Arnold</surname> <given-names>M.</given-names></name> <name><surname>Bellamy</surname> <given-names>R. K. E.</given-names></name> <name><surname>Hind</surname> <given-names>M.</given-names></name> <name><surname>Houde</surname> <given-names>S.</given-names></name> <name><surname>Mehta</surname> <given-names>S.</given-names></name> <name><surname>Mojsilovic</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Factsheets: increasing trust in ai services through supplier&#x00027;s declarations of conformity</article-title>. <source>IBM J. Res. Dev</source>. <volume>63</volume>, <fpage>6:1</fpage>&#x02013;<lpage>6:13</lpage>. <pub-id pub-id-type="doi">10.1147/JRD.2019.2942288</pub-id></citation></ref>
<ref id="B4">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Arya</surname> <given-names>V.</given-names></name> <name><surname>Bellamy</surname> <given-names>R. K.</given-names></name> <name><surname>Chen</surname> <given-names>P.-Y.</given-names></name> <name><surname>Dhurandhar</surname> <given-names>A.</given-names></name> <name><surname>Hind</surname> <given-names>M.</given-names></name> <name><surname>Hoffman</surname> <given-names>S. C.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>AI explainability 360: hands-on tutorial</article-title>, in <source>Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency</source> (<publisher-loc>Barcelona</publisher-loc>), <fpage>696</fpage>.</citation></ref>
<ref id="B5">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ayache</surname> <given-names>S.</given-names></name> <name><surname>Qu&#x000E9;not</surname> <given-names>G.</given-names></name></person-group> (<year>2008</year>). <article-title>Video corpus annotation using active learning</article-title>, in <source>European Conference on Information Retrieval</source> (<publisher-loc>Glasgow</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>187</fpage>&#x02013;<lpage>198</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-540-78646-7_19</pub-id></citation></ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Azad</surname> <given-names>R.</given-names></name> <name><surname>Fayjie</surname> <given-names>A. R.</given-names></name> <name><surname>Kauffman</surname> <given-names>C.</given-names></name> <name><surname>Ayed</surname> <given-names>I. B.</given-names></name> <name><surname>Pedersoli</surname> <given-names>M.</given-names></name> <name><surname>Dolz</surname> <given-names>J.</given-names></name></person-group> (<year>2020</year>). <article-title>On the texture bias for few-shot cnn segmentation</article-title>. <source>arXiv</source> 2003.04052.</citation></ref>
<ref id="B7">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Baird</surname> <given-names>A.</given-names></name> <name><surname>J&#x000F8;rgensen</surname> <given-names>S. H.</given-names></name> <name><surname>Parada-Cabaleiro</surname> <given-names>E.</given-names></name> <name><surname>Hantke</surname> <given-names>S.</given-names></name> <name><surname>Cummins</surname> <given-names>N.</given-names></name> <name><surname>Schuller</surname> <given-names>B.</given-names></name></person-group> (<year>2017</year>). <article-title>Perception of paralinguistic traits in synthesized voices</article-title>, in <source>Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences</source> (<publisher-loc>London</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>17</fpage>. <pub-id pub-id-type="doi">10.1145/3123514.3123528</pub-id></citation></ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Barclay</surname> <given-names>I.</given-names></name> <name><surname>Preece</surname> <given-names>A.</given-names></name> <name><surname>Taylor</surname> <given-names>I.</given-names></name></person-group> (<year>2018</year>). <article-title>Defining the collective intelligence supply chain</article-title>. <source>arXiv</source> 1809.09444.</citation></ref>
<ref id="B9">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Baum</surname> <given-names>K.</given-names></name> <name><surname>Hermanns</surname> <given-names>H.</given-names></name> <name><surname>Speith</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>From machine ethics to machine explainability and back</article-title>, in <source>Proceedings of International Symposium on Artificial Intelligence and Mathematics</source> (<publisher-loc>Fort Lauderdale, FL</publisher-loc>: <publisher-name>FL</publisher-name>).</citation></ref>
<ref id="B10">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Beach</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <source>Threat Detection on Twitter Using Corpus Linguistics</source>. <publisher-loc>Burlington, NH</publisher-loc>: <publisher-name>University of Vermont Libraries</publisher-name>.</citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bechar</surname> <given-names>M. E. A.</given-names></name> <name><surname>Settouti</surname> <given-names>N.</given-names></name> <name><surname>Chikh</surname> <given-names>M. A.</given-names></name> <name><surname>Adel</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Reinforced confidence in self-training for a semi-supervised medical data classification</article-title>. <source>Int. J. Appl. Pattern Recogn</source>. <volume>4</volume>, <fpage>107</fpage>&#x02013;<lpage>127</lpage>. <pub-id pub-id-type="doi">10.1504/IJAPR.2017.085323</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bellamy</surname> <given-names>R. K.</given-names></name> <name><surname>Dey</surname> <given-names>K.</given-names></name> <name><surname>Hind</surname> <given-names>M.</given-names></name> <name><surname>Hoffman</surname> <given-names>S. C.</given-names></name> <name><surname>Houde</surname> <given-names>S.</given-names></name> <name><surname>Kannan</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Ai fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias</article-title>. <source>IBM J. Res. Dev</source>. <volume>63</volume>, <fpage>4</fpage>&#x02013;<lpage>1</lpage>. <pub-id pub-id-type="doi">10.1147/JRD.2019.2942287</pub-id></citation></ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berendt</surname> <given-names>B.</given-names></name> <name><surname>B&#x000FC;chler</surname> <given-names>M.</given-names></name> <name><surname>Rockwell</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>Is it research or is it spying? Thinking-through ethics in big data AI and other knowledge sciences</article-title>. <source>Kunstl. Intell</source>. <volume>29</volume>, <fpage>223</fpage>&#x02013;<lpage>232</lpage>. <pub-id pub-id-type="doi">10.1007/s13218-015-0355-2</pub-id><pub-id pub-id-type="pmid">25308931</pub-id></citation></ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blass</surname> <given-names>J. A.</given-names></name></person-group> (<year>2018</year>). <article-title>You, me, or us: balancing individuals&#x00027; and societies&#x00027; moral needs and desires in autonomous systems</article-title>. <source>AI Matters</source> <volume>3</volume>, <fpage>44</fpage>&#x02013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1145/3175502.3175512</pub-id></citation></ref>
<ref id="B15">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Boddington</surname> <given-names>P.</given-names></name></person-group> (<year>2017</year>). <source>Towards a Code of Ethics for Artificial Intelligence</source>. <publisher-loc>Cham</publisher-loc>: <publisher-name>Springer International Publishing</publisher-name>.</citation></ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Brockman</surname> <given-names>G.</given-names></name> <name><surname>Cheung</surname> <given-names>V.</given-names></name> <name><surname>Pettersson</surname> <given-names>L.</given-names></name> <name><surname>Schneider</surname> <given-names>J.</given-names></name> <name><surname>Schulman</surname> <given-names>J.</given-names></name> <name><surname>Tang</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Openai gym</article-title>. <source>CoRR</source> abs/1606.01540.</citation></ref>
<ref id="B17">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Buolamwini</surname> <given-names>J.</given-names></name> <name><surname>Gebru</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>Gender shades: intersectional accuracy disparities in commercial gender classification</article-title>, in <source>Conference on Fairness, Accountability and Transparency</source> (<publisher-loc>New York, NY</publisher-loc>), <fpage>77</fpage>&#x02013;<lpage>91</lpage>.</citation></ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Challen</surname> <given-names>R.</given-names></name> <name><surname>Denny</surname> <given-names>J.</given-names></name> <name><surname>Pitt</surname> <given-names>M.</given-names></name> <name><surname>Gompels</surname> <given-names>L.</given-names></name> <name><surname>Edwards</surname> <given-names>T.</given-names></name> <name><surname>Tsaneva-Atanasova</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>Artificial intelligence, bias and clinical safety</article-title>. <source>BMJ Qual. Saf</source>. <volume>28</volume>, <fpage>231</fpage>&#x02013;<lpage>237</lpage>. <pub-id pub-id-type="doi">10.1136/bmjqs-2018-008370</pub-id><pub-id pub-id-type="pmid">30636200</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Charalambous</surname> <given-names>G.</given-names></name> <name><surname>Fletcher</surname> <given-names>S.</given-names></name> <name><surname>Webb</surname> <given-names>P.</given-names></name></person-group> (<year>2016</year>). <article-title>The development of a scale to evaluate trust in industrial human-robot collaboration</article-title>. <source>Int. J. Soc. Robot</source>. <volume>8</volume>, <fpage>193</fpage>&#x02013;<lpage>209</lpage>. <pub-id pub-id-type="doi">10.1007/s12369-015-0333-8</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>Y.</given-names></name> <name><surname>Assael</surname> <given-names>Y.</given-names></name> <name><surname>Shillingford</surname> <given-names>B.</given-names></name> <name><surname>Budden</surname> <given-names>D.</given-names></name> <name><surname>Reed</surname> <given-names>S.</given-names></name> <name><surname>Zen</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Sample efficient adaptive text-to-speech</article-title>. <source>arXiv</source> 1809.10460.</citation></ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cobbe</surname> <given-names>K.</given-names></name> <name><surname>Hesse</surname> <given-names>C.</given-names></name> <name><surname>Hilton</surname> <given-names>J.</given-names></name> <name><surname>Schulman</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Leveraging procedural generation to benchmark reinforcement learning</article-title>. <source>arXiv</source> 1912.01588.</citation></ref>
<ref id="B22">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Cumming</surname> <given-names>G.</given-names></name></person-group> (<year>2013</year>). <source>Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis</source>. <publisher-loc>Abingdon</publisher-loc>: <publisher-name>Routledge</publisher-name>. <pub-id pub-id-type="pmid">24220629</pub-id></citation></ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cummins</surname> <given-names>N.</given-names></name> <name><surname>Baird</surname> <given-names>A.</given-names></name> <name><surname>Schuller</surname> <given-names>B.</given-names></name></person-group> (<year>2018</year>). <article-title>Speech analysis for health: current state-of-the-art and the increasing impact of deep learning</article-title>. <source>Methods</source> <volume>151</volume>, <fpage>41</fpage>&#x02013;<lpage>54</lpage>. <pub-id pub-id-type="doi">10.1016/j.ymeth.2018.07.007</pub-id><pub-id pub-id-type="pmid">30099083</pub-id></citation></ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dai</surname> <given-names>H.-N.</given-names></name> <name><surname>Zheng</surname> <given-names>Z.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name></person-group> (<year>2019</year>). <article-title>Blockchain for internet of things: a survey</article-title>. <source>IEEE Internet Things J</source>. <volume>6</volume>, <fpage>8076</fpage>&#x02013;<lpage>8094</lpage>. <pub-id pub-id-type="doi">10.1109/JIOT.2019.2920987</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Daneshgar</surname> <given-names>F.</given-names></name> <name><surname>Sianaki</surname> <given-names>O. A.</given-names></name> <name><surname>Guruwacharya</surname> <given-names>P.</given-names></name></person-group> (<year>2019</year>). <article-title>Blockchain: a research framework for data security and privacy</article-title>, in <source>Workshops of the International Conference on Advanced Information Networking and Applications</source> (<publisher-loc>Caserta</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>966</fpage>&#x02013;<lpage>974</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-15035-8_95</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Deloitte</surname> <given-names>L.</given-names></name></person-group> (<year>2016</year>). <source>Global Mobile Consumer Survey 2016</source>. <publisher-loc>London</publisher-loc>: <publisher-name>Deloitte, UK Cut</publisher-name>.</citation></ref>
<ref id="B27">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Demchenko</surname> <given-names>Y.</given-names></name> <name><surname>Grosso</surname> <given-names>P.</given-names></name> <name><surname>De Laat</surname> <given-names>C.</given-names></name> <name><surname>Membrey</surname> <given-names>P.</given-names></name></person-group> (<year>2013</year>). <article-title>Addressing big data issues in scientific data infrastructure</article-title>, in <source>2013 International Conference on Collaboration Technologies and Systems (CTS)</source> (<publisher-loc>San Diego, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>48</fpage>&#x02013;<lpage>55</lpage>. <pub-id pub-id-type="doi">10.1109/CTS.2013.6567203</pub-id></citation></ref>
<ref id="B28">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Denrell</surname> <given-names>J.</given-names></name></person-group> (<year>2005</year>). <article-title>Selection bias and the perils of benchmarking</article-title>. <source>Harvard Bus. Rev</source>. <volume>83</volume>, <fpage>114</fpage>&#x02013;<lpage>119</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://hbr.org/2005/04/selection-bias-and-the-perils-of-benchmarking">https://hbr.org/2005/04/selection-bias-and-the-perils-of-benchmarking</ext-link>. <pub-id pub-id-type="pmid">15807044</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Diakopoulos</surname> <given-names>N.</given-names></name> <name><surname>Friedler</surname> <given-names>S.</given-names></name></person-group> (<year>2017</year>). <source>How to Hold Algorithms Accountable. MIT Technology Review</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://bit.ly/2f8Iple">http://bit.ly/2f8Iple</ext-link></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dinh</surname> <given-names>T. N.</given-names></name> <name><surname>Thai</surname> <given-names>M. T.</given-names></name></person-group> (<year>2018</year>). <article-title>AI and blockchain: a disruptive integration</article-title>. <source>Computer</source> <volume>51</volume>, <fpage>48</fpage>&#x02013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1109/MC.2018.3620971</pub-id></citation></ref>
<ref id="B31">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>D&#x000F6;ring</surname> <given-names>S.</given-names></name> <name><surname>Goldie</surname> <given-names>P.</given-names></name> <name><surname>McGuinness</surname> <given-names>S.</given-names></name></person-group> (<year>2011</year>). <article-title>Principalism: a method for the ethics of emotion-oriented machines</article-title>, in <source>Emotion-Oriented Systems: The Humaine Handbook</source>, eds <person-group person-group-type="editor"><name><surname>Cowie</surname> <given-names>R.</given-names></name> <name><surname>Pelachaud</surname> <given-names>C.</given-names></name> <name><surname>Petta</surname> <given-names> P.</given-names></name></person-group> (<publisher-loc>Berlin; Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>713</fpage>&#x02013;<lpage>724</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-642-15184-2_38</pub-id></citation></ref>
<ref id="B32">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Duncan</surname> <given-names>B.</given-names></name></person-group> (<year>2015</year>). <source>Importance of Confidence Intervals. Insights Association</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://bit.ly/2pgT4kM">http://bit.ly/2pgT4kM</ext-link></citation></ref>
<ref id="B33">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Fern&#x000E1;ndez</surname> <given-names>C.</given-names></name> <name><surname>Fern&#x000E1;ndez</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Ethical and legal implications of ai recruiting software</article-title>. <source>ERCIM News</source> <volume>116</volume>, <fpage>22</fpage>&#x02013;<lpage>23</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://ercim-news.ercim.eu/en116/special/ethical-and-legal-implications-of-ai-recruiting-software">https://ercim-news.ercim.eu/en116/special/ethical-and-legal-implications-of-ai-recruiting-software</ext-link>.</citation></ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ferrer</surname> <given-names>A. J.</given-names></name> <name><surname>Marqu&#x000E8;s</surname> <given-names>J. M.</given-names></name> <name><surname>Jorba</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Towards the decentralised cloud: survey on approaches and challenges for mobile, <italic>ad hoc</italic>, and edge computing</article-title>. <source>ACM Comput. Surv</source>. <volume>51</volume>:<fpage>111</fpage>. <pub-id pub-id-type="doi">10.1145/3243929</pub-id></citation></ref>
<ref id="B35">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Friedler</surname> <given-names>S. A.</given-names></name> <name><surname>Scheidegger</surname> <given-names>C.</given-names></name> <name><surname>Venkatasubramanian</surname> <given-names>S.</given-names></name> <name><surname>Choudhary</surname> <given-names>S.</given-names></name> <name><surname>Hamilton</surname> <given-names>E. P.</given-names></name> <name><surname>Roth</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>A comparative study of fairness-enhancing interventions in machine learning</article-title>, in <source>Proceedings of the Conference on Fairness, Accountability, and Transparency</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>329</fpage>&#x02013;<lpage>338</lpage>. <pub-id pub-id-type="doi">10.1145/3287560.3287589</pub-id></citation></ref>
<ref id="B36">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Fussel</surname> <given-names>S.</given-names></name></person-group> (<year>2017</year>). <source>AI Professor Details Real-World Dangers of Algorithm Bias. Gizmodo</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://bit.ly/2GDoudz">http://bit.ly/2GDoudz</ext-link></citation></ref>
<ref id="B37">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gal</surname> <given-names>Y.</given-names></name> <name><surname>Ghahramani</surname> <given-names>Z.</given-names></name></person-group> (<year>2016</year>). <article-title>Dropout as a bayesian approximation: representing model uncertainty in deep learning</article-title>, in <source>International Conference on Machine Learning</source> (<publisher-loc>New York, NY</publisher-loc>), <fpage>1050</fpage>&#x02013;<lpage>1059</lpage>.</citation></ref>
<ref id="B38">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Gao</surname> <given-names>W.</given-names></name> <name><surname>Ai</surname> <given-names>H.</given-names></name></person-group> (<year>2009</year>). <article-title>Face gender classification on consumer images in a multiethnic environment</article-title>, in <source>Proceedings of International Conference on Advances in Biometrics</source> (<publisher-loc>Alghero</publisher-loc>), <fpage>169</fpage>&#x02013;<lpage>178</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-642-01793-3_18</pub-id></citation></ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Goertzel</surname> <given-names>B.</given-names></name></person-group> (<year>2007</year>). <article-title>Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to ray Kurzweil&#x00027;s the singularity is near, and McDermott&#x00027;s critique of Kurzweil</article-title>. <source>Artif. Intell</source>. <volume>171</volume>, <fpage>1161</fpage>&#x02013;<lpage>1173</lpage>. <pub-id pub-id-type="doi">10.1016/j.artint.2007.10.011</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Goertzel</surname> <given-names>B.</given-names></name> <name><surname>Pennachin</surname> <given-names>C.</given-names></name></person-group> (<year>2007</year>). <source>Artificial General Intelligence</source>. Vol. 2. <publisher-loc>Berlin; Heidelberg</publisher-loc>: <publisher-name>Springer</publisher-name>. <pub-id pub-id-type="pmid">16610958</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="web"><person-group person-group-type="author"><collab>Google</collab></person-group> (<year>2020</year>). <source>What If Tool</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://pair-code.github.io/what-if-tool/">https://pair-code.github.io/what-if-tool/</ext-link></citation></ref>
<ref id="B42">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Greene</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <source>2010&#x02013;2019: The Rise of Deep Learning. The Next Web</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://thenextweb.com/artificial-intelligence/2020/01/02/2010-2019-the-rise-of-deep-learning/">https://thenextweb.com/artificial-intelligence/2020/01/02/2010-2019-the-rise-of-deep-learning/</ext-link></citation></ref>
<ref id="B43">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Guo</surname> <given-names>H.</given-names></name> <name><surname>Zheng</surname> <given-names>K.</given-names></name> <name><surname>Fan</surname> <given-names>X.</given-names></name> <name><surname>Yu</surname> <given-names>H.</given-names></name> <name><surname>Wang</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Visual attention consistency under image transforms for multi-label image classification</article-title>, in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Long Beach, CA</publisher-loc>), <fpage>729</fpage>&#x02013;<lpage>739</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2019.00082</pub-id></citation></ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hagendorff</surname> <given-names>T.</given-names></name></person-group> (<year>2019</year>). <article-title>The ethics of AI ethics-an evaluation of guidelines</article-title>. <source>arXiv</source> 1903.03425.</citation></ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Herschel</surname> <given-names>R.</given-names></name> <name><surname>Miori</surname> <given-names>V. M.</given-names></name></person-group> (<year>2017</year>). <article-title>Ethics &#x00026; big data</article-title>. <source>Technol. Soc</source>. <volume>49</volume>, <fpage>31</fpage>&#x02013;<lpage>36</lpage>. <pub-id pub-id-type="doi">10.1016/j.techsoc.2017.03.003</pub-id></citation></ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hu</surname> <given-names>C.</given-names></name> <name><surname>Jiang</surname> <given-names>J.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name></person-group> (<year>2019</year>). <article-title>Decentralized federated learning: a segmented gossip approach</article-title>. <source>arXiv</source> 1908.07782.</citation></ref>
<ref id="B47">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Husz&#x000E1;r</surname> <given-names>F.</given-names></name></person-group> (<year>2015</year>). <source>Accuracy vs Explainability of Machine Learning Models. inFERENCe</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://bit.ly/2GAfW7c">http://bit.ly/2GAfW7c</ext-link></citation></ref>
<ref id="B48">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Hutchinson</surname> <given-names>B.</given-names></name> <name><surname>Mitchell</surname> <given-names>M.</given-names></name></person-group> (<year>2019</year>). <article-title>50 years of test (un) fairness: lessons for machine learning</article-title>, in <source>Proceedings of the Conference on Fairness, Accountability, and Transparency</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>49</fpage>&#x02013;<lpage>58</lpage>. <pub-id pub-id-type="doi">10.1145/3287560.3287600</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ikuta</surname> <given-names>K.</given-names></name> <name><surname>Ishii</surname> <given-names>H.</given-names></name> <name><surname>Nokata</surname> <given-names>M.</given-names></name></person-group> (<year>2003</year>). <article-title>Safety evaluation method of design and control for human-care robots</article-title>. <source>Int. J. Robot. Res</source>. <volume>22</volume>, <fpage>281</fpage>&#x02013;<lpage>297</lpage>. <pub-id pub-id-type="doi">10.1177/0278364903022005001</pub-id></citation></ref>
<ref id="B50">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jan</surname> <given-names>T. G.</given-names></name></person-group> (<year>2020</year>). <article-title>Clustering of tweets: a novel approach to label the unlabelled tweets</article-title>, in <source>Proceedings of ICRIC 2019</source> (<publisher-loc>Jammu</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>671</fpage>&#x02013;<lpage>685</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-29407-6_48</pub-id></citation></ref>
<ref id="B51">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Jha</surname> <given-names>S.</given-names></name> <name><surname>Raj</surname> <given-names>S.</given-names></name> <name><surname>Fernandes</surname> <given-names>S.</given-names></name> <name><surname>Jha</surname> <given-names>S. K.</given-names></name> <name><surname>Jha</surname> <given-names>S.</given-names></name> <name><surname>Jalaian</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Attribution-based confidence metric for deep neural networks</article-title>, in <source>Advances in Neural Information Processing Systems</source> (<publisher-loc>Vancouver</publisher-loc>), <fpage>11826</fpage>&#x02013;<lpage>11837</lpage>.</citation></ref>
<ref id="B52">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>H.</given-names></name></person-group> (<year>2015</year>). <source>Digging Up Dark Data: What Puts IBM at the Forefront of Insight Economy. Silicon Angle</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://siliconangle.com/2015/10/30/ibm-is-at-the-forefront-of-insight-economy-ibminsight/">https://siliconangle.com/2015/10/30/ibm-is-at-the-forefront-of-insight-economy-ibminsight/</ext-link></citation></ref>
<ref id="B53">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Johnston</surname> <given-names>M.</given-names></name> <name><surname>Cohen</surname> <given-names>P. R.</given-names></name> <name><surname>McGee</surname> <given-names>D.</given-names></name> <name><surname>Oviatt</surname> <given-names>S. L.</given-names></name> <name><surname>Pittman</surname> <given-names>J. A.</given-names></name> <name><surname>Smith</surname> <given-names>I.</given-names></name></person-group> (<year>1997</year>). <article-title>Unification-based multimodal integration</article-title>, in <source>Proceedings of the Eighth Conference on European Chapter of the Association for Computational Linguistics</source> (<publisher-loc>Madrid</publisher-loc>: <publisher-name>Association for Computational Linguistics</publisher-name>), <fpage>281</fpage>&#x02013;<lpage>288</lpage>. <pub-id pub-id-type="doi">10.3115/979617.979653</pub-id></citation></ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>J&#x000F8;rgensen</surname> <given-names>S. H.</given-names></name> <name><surname>Baird</surname> <given-names>A. E.</given-names></name> <name><surname>Juutilainen</surname> <given-names>F. T.</given-names></name> <name><surname>Pelt</surname> <given-names>M.</given-names></name> <name><surname>H&#x000F8;jholdt</surname> <given-names>N. C.</given-names></name></person-group> (<year>2018</year>). <article-title>[multi&#x00027;vocal]: reflections on engaging everyday people in the development of a collective non-binary synthesized voice</article-title>. <source>ScienceOpen Res</source>. <pub-id pub-id-type="doi">10.14236/ewic/EVAC18.41</pub-id></citation></ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kahani</surname> <given-names>M.</given-names></name> <name><surname>Beadle</surname> <given-names>H.</given-names></name></person-group> (<year>1997</year>). <article-title>Decentralised approaches for network management</article-title>. <source>ACM SIGCOMM Comput. Commun. Rev</source>. <volume>27</volume>, <fpage>36</fpage>&#x02013;<lpage>47</lpage>. <pub-id pub-id-type="doi">10.1145/263932.263940</pub-id></citation></ref>
<ref id="B56">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Kendall</surname> <given-names>A.</given-names></name> <name><surname>Cipolla</surname> <given-names>R.</given-names></name></person-group> (<year>2016</year>). <article-title>Modelling uncertainty in deep learning for camera relocalization</article-title>, in <source>2016 IEEE International Conference on Robotics and Automation (ICRA)</source> (<publisher-loc>Stockholm</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>4762</fpage>&#x02013;<lpage>4769</lpage>. <pub-id pub-id-type="doi">10.1109/ICRA.2016.7487679</pub-id></citation></ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Keren</surname> <given-names>G.</given-names></name> <name><surname>Cummins</surname> <given-names>N.</given-names></name> <name><surname>Schuller</surname> <given-names>B.</given-names></name></person-group> (<year>2018</year>). <article-title>Calibrated prediction intervals for neural network regressors</article-title>. <source>IEEE Access</source> <volume>6</volume>, <fpage>54033</fpage>&#x02013;<lpage>54041</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2018.2871713</pub-id></citation></ref>
<ref id="B58">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Khan</surname> <given-names>N.</given-names></name> <name><surname>Naim</surname> <given-names>A.</given-names></name> <name><surname>Hussain</surname> <given-names>M. R.</given-names></name> <name><surname>Naveed</surname> <given-names>Q. N.</given-names></name> <name><surname>Ahmad</surname> <given-names>N.</given-names></name> <name><surname>Qamar</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>The 51 v&#x00027;s of big data: survey, technologies, characteristics, opportunities, issues and challenges</article-title>, in <source>Proceedings of the International Conference on Omni-Layer Intelligent Systems</source> (<publisher-loc>Crete</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>19</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1145/3312614.3312623</pub-id></citation></ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Koene</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Algorithmic bias: addressing growing concerns [leading edge]</article-title>. <source>IEEE Technol. Soc. Mag</source>. <volume>36</volume>, <fpage>31</fpage>&#x02013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1109/MTS.2017.2697080</pub-id></citation></ref>
<ref id="B60">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Krizhevsky</surname> <given-names>A.</given-names></name> <name><surname>Nair</surname> <given-names>V.</given-names></name> <name><surname>Hinton</surname> <given-names>G.</given-names></name></person-group> (<year>2009</year>). <source>CIFAR-10</source>. <publisher-loc>Toronto</publisher-loc>: <publisher-name>Canadian Institute for Advanced Research</publisher-name>.</citation></ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>LeCun</surname> <given-names>Y.</given-names></name> <name><surname>Cortes</surname> <given-names>C.</given-names></name></person-group> (<year>2010</year>). <source>MNIST Handwritten Digit Database</source>.</citation></ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lee</surname> <given-names>H.-S.</given-names></name></person-group> (<year>2002</year>). <article-title>Optimal consensus of fuzzy opinions under group decision making environment</article-title>. <source>Fuzzy Sets Syst</source>. <volume>132</volume>, <fpage>303</fpage>&#x02013;<lpage>315</lpage>. <pub-id pub-id-type="doi">10.1016/S0165-0114(02)00056-8</pub-id></citation></ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>L&#x00027;heureux</surname> <given-names>A.</given-names></name> <name><surname>Grolinger</surname> <given-names>K.</given-names></name> <name><surname>Elyamany</surname> <given-names>H. F.</given-names></name> <name><surname>Capretz</surname> <given-names>M. A.</given-names></name></person-group> (<year>2017</year>). <article-title>Machine learning with big data: challenges and approaches</article-title>. <source>IEEE Access</source> <volume>5</volume>, <fpage>7776</fpage>&#x02013;<lpage>7797</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2017.2696365</pub-id></citation></ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>A.</given-names></name> <name><surname>Xu</surname> <given-names>N.</given-names></name> <name><surname>Nie</surname> <given-names>W.</given-names></name> <name><surname>Su</surname> <given-names>Y.</given-names></name> <name><surname>Wong</surname> <given-names>Y.</given-names></name> <name><surname>Kankanhalli</surname> <given-names>M. S.</given-names></name></person-group> (<year>2017</year>). <article-title>Benchmarking a multimodal and multiview and interactive dataset for human action recognition</article-title>. <source>IEEE Trans. Cybern</source>. <volume>47</volume>, <fpage>1781</fpage>&#x02013;<lpage>1794</lpage>. <pub-id pub-id-type="doi">10.1109/TCYB.2016.2582918</pub-id></citation></ref>
<ref id="B65">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>H.</given-names></name> <name><surname>Feris</surname> <given-names>R. S.</given-names></name> <name><surname>Sun</surname> <given-names>M.</given-names></name></person-group> (<year>2011</year>). <source>Benchmarking Datasets for Human Activity Recognition</source>. <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>.</citation></ref>
<ref id="B66">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Lohia</surname> <given-names>P. K.</given-names></name> <name><surname>Ramamurthy</surname> <given-names>K. N.</given-names></name> <name><surname>Bhide</surname> <given-names>M.</given-names></name> <name><surname>Saha</surname> <given-names>D.</given-names></name> <name><surname>Varshney</surname> <given-names>K. R.</given-names></name> <name><surname>Puri</surname> <given-names>R.</given-names></name></person-group> (<year>2019</year>). <article-title>Bias mitigation post-processing for individual and group fairness</article-title>, in <source>ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</source> (<publisher-loc>Brighton</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>2847</fpage>&#x02013;<lpage>2851</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP.2019.8682620</pub-id></citation></ref>
<ref id="B67">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Luo</surname> <given-names>Y.</given-names></name> <name><surname>Jin</surname> <given-names>H.</given-names></name> <name><surname>Li</surname> <given-names>P.</given-names></name></person-group> (<year>2019</year>). <article-title>A blockchain future for secure clinical data sharing: a position paper</article-title>, in <source>Proceedings of the ACM International Workshop on Security in Software Defined Networks</source> &#x00026; <italic>Network Function Virtualization</italic> (<publisher-loc>Richardson, TX</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>23</fpage>&#x02013;<lpage>27</lpage>. <pub-id pub-id-type="doi">10.1145/3309194.3309198</pub-id></citation></ref>
<ref id="B68">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Marnau</surname> <given-names>N.</given-names></name></person-group> (<year>2019</year>). <source>Comments on the &#x0201C;Draft Ethics Guidelines for Trustworthy AI&#x0201D; by the High-Level Expert Group on Artificial Intelligence</source>. <publisher-loc>Westminster</publisher-loc>: <publisher-name>European Commission</publisher-name>.</citation></ref>
<ref id="B69">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mathews</surname> <given-names>M.</given-names></name> <name><surname>Robles</surname> <given-names>D.</given-names></name> <name><surname>Bowe</surname> <given-names>B.</given-names></name></person-group> (<year>2017</year>). <source>Bim&#x0002B; Blockchain: A Solution to the Trust Problem in Collaboration</source>? <publisher-loc>Dublin</publisher-loc>: <publisher-name>Dublin Institute of Technology</publisher-name>.</citation></ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Meer</surname> <given-names>P.</given-names></name> <name><surname>Stewart</surname> <given-names>C. V.</given-names></name> <name><surname>Tyler</surname> <given-names>D. E.</given-names></name></person-group> (<year>2000</year>). <article-title>Robust computer vision: an interdisciplinary challenge</article-title>. <source>Comput. Vision Image Understand</source>. <volume>78</volume>, <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1006/cviu.1999.0833</pub-id></citation></ref>
<ref id="B71">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mehrabi</surname> <given-names>N.</given-names></name> <name><surname>Morstatter</surname> <given-names>F.</given-names></name> <name><surname>Saxena</surname> <given-names>N.</given-names></name> <name><surname>Lerman</surname> <given-names>K.</given-names></name> <name><surname>Galstyan</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>A survey on bias and fairness in machine learning</article-title>. <source>arXiv</source> 1908.09635.</citation></ref>
<ref id="B72">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Miiller</surname> <given-names>Y.</given-names></name></person-group> (<year>1990</year>). <article-title>Decentralized artificial intelligence</article-title>, in <source>Decentralised AI</source> (<publisher-loc>Amsterdam</publisher-loc>), <fpage>3</fpage>&#x02013;<lpage>13</lpage>.</citation></ref>
<ref id="B73">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Mitchell</surname> <given-names>T. M.</given-names></name></person-group> (<year>1980</year>). <source>The Need for Biases in Learning Generalizations</source>. <publisher-loc>New Brunswick, NJ</publisher-loc>: <publisher-name>Department of Computer Science, Laboratory for Computer Science Research</publisher-name>. <pub-id pub-id-type="pmid">29611913</pub-id></citation></ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mittelstadt</surname> <given-names>B. D.</given-names></name> <name><surname>Floridi</surname> <given-names>L.</given-names></name></person-group> (<year>2016</year>). <article-title>The ethics of big data: current and foreseeable issues in biomedical contexts</article-title>. <source>Sci. Eng. Ethics</source> <volume>22</volume>, <fpage>303</fpage>&#x02013;<lpage>341</lpage>. <pub-id pub-id-type="doi">10.1007/s11948-015-9652-2</pub-id><pub-id pub-id-type="pmid">26002496</pub-id></citation></ref>
<ref id="B75">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Molnar</surname> <given-names>C.</given-names></name></person-group> (<year>2019</year>). <source>Interpretable Machine Learning</source>. <publisher-loc>Munich</publisher-loc>: <publisher-name>Lulu.com</publisher-name>. <pub-id pub-id-type="pmid">28858338</pub-id></citation></ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Montes</surname> <given-names>G. A.</given-names></name> <name><surname>Goertzel</surname> <given-names>B.</given-names></name></person-group> (<year>2019</year>). <article-title>Distributed, decentralized, and democratized artificial intelligence</article-title>. <source>Technol. Forecast. Soc. Change</source> <volume>141</volume>, <fpage>354</fpage>&#x02013;<lpage>358</lpage>. <pub-id pub-id-type="doi">10.1016/j.techfore.2018.11.010</pub-id></citation></ref>
<ref id="B77">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Morgan</surname> <given-names>S.</given-names></name> <name><surname>Waring</surname> <given-names>C.</given-names></name></person-group> (<year>2004</year>). <source>Guidance on Testing Data Reliability</source>. <publisher-loc>Austin, TX</publisher-loc>. Available online at: <ext-link ext-link-type="uri" xlink:href="http://bit.ly/2kjNgX4">http://bit.ly/2kjNgX4</ext-link></citation></ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mori</surname> <given-names>M.</given-names></name> <name><surname>MacDorman</surname> <given-names>K. F.</given-names></name> <name><surname>Kageki</surname> <given-names>N.</given-names></name></person-group> (<year>2012</year>). <article-title>The uncanny valley [from the field]</article-title>. <source>IEEE Robot. Autom. Mag</source>. <volume>19</volume>, <fpage>98</fpage>&#x02013;<lpage>100</lpage>. <pub-id pub-id-type="doi">10.1109/MRA.2012.2192811</pub-id></citation></ref>
<ref id="B79">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Mou</surname> <given-names>X.</given-names></name></person-group> (<year>2019</year>). <source>Artificial Intelligence: Investment Trends and Selected Industry Uses. IFC</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://bit.ly/3af5z6V">https://bit.ly/3af5z6V</ext-link></citation></ref>
<ref id="B80">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Naughton</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <source>AI Is Making Literary Leaps&#x02013;Now We Need the Rules to Catch Up. The Guardian</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.theguardian.com/commentisfree/2019/nov/02/ai-artificial-intelligence-language-openai-cpt2-release/">https://www.theguardian.com/commentisfree/2019/nov/02/ai-artificial-intelligence-language-openai-cpt2-release/</ext-link></citation></ref>
<ref id="B81">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Nikolova</surname> <given-names>G.</given-names></name> <name><surname>Kotev</surname> <given-names>V.</given-names></name> <name><surname>Dantchev</surname> <given-names>D.</given-names></name> <name><surname>Kiriazov</surname> <given-names>P.</given-names></name></person-group> (<year>2018</year>). <article-title>Basic inertial characteristics of human body by walking</article-title>, in <source>Proceedings of The 15th International Symposium on Computer Methods in Biomechanics and Biomedical Engineering and the 3rd Conference on Imaging and Visualization, CMBBE</source> (<publisher-loc>Lisbon</publisher-loc>), <fpage>26</fpage>&#x02013;<lpage>29</lpage>.</citation></ref>
<ref id="B82">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Osoba</surname> <given-names>O. A.</given-names></name> <name><surname>Welser</surname> <given-names>I. V. W.</given-names></name></person-group> (<year>2017</year>). <source>An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence</source>. <publisher-loc>Santa Monica, CA</publisher-loc>: <publisher-name>Rand Corporation</publisher-name>.</citation></ref>
<ref id="B83">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Papadamou</surname> <given-names>K.</given-names></name> <name><surname>Papasavva</surname> <given-names>A.</given-names></name> <name><surname>Zannettou</surname> <given-names>S.</given-names></name> <name><surname>Blackburn</surname> <given-names>J.</given-names></name> <name><surname>Kourtellis</surname> <given-names>N.</given-names></name> <name><surname>Leontiadis</surname> <given-names>I.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Disturbed youtube for kids: characterizing and detecting disturbing content on youtube</article-title>. <source>arXiv</source> 1901.07046.</citation></ref>
<ref id="B84">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Parikh</surname> <given-names>R. B.</given-names></name> <name><surname>Teeple</surname> <given-names>S.</given-names></name> <name><surname>Navathe</surname> <given-names>A. S.</given-names></name></person-group> (<year>2019</year>). <article-title>Addressing bias in artificial intelligence in health care</article-title>. <source>JAMA</source> <volume>322</volume>, <fpage>2377</fpage>&#x02013;<lpage>2378</lpage>. <pub-id pub-id-type="doi">10.1001/jama.2019.18058</pub-id><pub-id pub-id-type="pmid">31755905</pub-id></citation></ref>
<ref id="B85">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Price</surname> <given-names>M.</given-names></name> <name><surname>Ball</surname> <given-names>P.</given-names></name></person-group> (<year>2014</year>). <article-title>Big data, selection bias, and the statistical patterns of mortality in conflict</article-title>. <source>SAIS Rev. Int. Affairs</source> <volume>34</volume>, <fpage>9</fpage>&#x02013;<lpage>20</lpage>. <pub-id pub-id-type="doi">10.1353/sais.2014.0010</pub-id></citation></ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Qian</surname> <given-names>K.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Baird</surname> <given-names>A.</given-names></name> <name><surname>Schuller</surname> <given-names>B.</given-names></name></person-group> (<year>2017</year>). <article-title>Active learning for bird sounds classification</article-title>. <source>Acta Acust. United Acust</source>. <volume>103</volume>, <fpage>361</fpage>&#x02013;<lpage>364</lpage>. <pub-id pub-id-type="doi">10.3813/AAA.919064</pub-id><pub-id pub-id-type="pmid">29092546</pub-id></citation></ref>
<ref id="B87">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Radzik</surname> <given-names>L.</given-names></name> <name><surname>Bennett</surname> <given-names>C.</given-names></name> <name><surname>Pettigrove</surname> <given-names>G.</given-names></name> <name><surname>Sher</surname> <given-names>G.</given-names></name></person-group> (<year>2020</year>). <source>The Ethics of Social Punishment: The Enforcement of Morality in Everyday Life</source>. <publisher-loc>Cambridge</publisher-loc>: <publisher-name>Cambridge Core</publisher-name>.</citation></ref>
<ref id="B88">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rai</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Explainable AI: from black box to glass box</article-title>. <source>J. Acad. Market. Sci</source>. <volume>48</volume>:<fpage>137</fpage>&#x02013;<lpage>141</lpage>. <pub-id pub-id-type="doi">10.1007/s11747-019-00710-5</pub-id></citation></ref>
<ref id="B89">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Regan</surname> <given-names>P. M.</given-names></name> <name><surname>Jesse</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Ethical challenges of edtech, big data and personalized learning: twenty-first century student sorting and tracking</article-title>. <source>Ethics Inform. Technol</source>. <volume>21</volume>, <fpage>167</fpage>&#x02013;<lpage>179</lpage>. <pub-id pub-id-type="doi">10.1007/s10676-018-9492-2</pub-id></citation></ref>
<ref id="B90">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ribeiro</surname> <given-names>M. T.</given-names></name> <name><surname>Singh</surname> <given-names>S.</given-names></name> <name><surname>Guestrin</surname> <given-names>C.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Why should I trust you?&#x0201D; Explaining the predictions of any classifier</article-title>, in <source>Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source> (<publisher-loc>San Francisco, CA</publisher-loc>), <fpage>1135</fpage>&#x02013;<lpage>1144</lpage>. <pub-id pub-id-type="doi">10.1145/2939672.2939778</pub-id></citation></ref>
<ref id="B91">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Ringeval</surname> <given-names>F.</given-names></name> <name><surname>Schuller</surname> <given-names>B.</given-names></name> <name><surname>Valstar</surname> <given-names>M.</given-names></name> <name><surname>Cummins</surname> <given-names>N.</given-names></name> <name><surname>Cowie</surname> <given-names>R.</given-names></name> <name><surname>Tavabi</surname> <given-names>L.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>AVEC 2019 workshop and challenge: state-of-mind, detecting depression with AI, and cross-cultural affect recognition</article-title>, in <source>Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop</source> (<publisher-loc>Nice</publisher-loc>), <fpage>3</fpage>&#x02013;<lpage>12</lpage>. <pub-id pub-id-type="doi">10.1145/3347320.3357688</pub-id></citation></ref>
<ref id="B92">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Rothwell</surname> <given-names>S.</given-names></name> <name><surname>Elshenawy</surname> <given-names>A.</given-names></name> <name><surname>Carter</surname> <given-names>S.</given-names></name> <name><surname>Braga</surname> <given-names>D.</given-names></name> <name><surname>Romani</surname> <given-names>F.</given-names></name> <name><surname>Kennewick</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2015</year>). <article-title>Controlling quality and handling fraud in large scale crowdsourcing speech data collections</article-title>, in <source>Proceedings of INTERSPEECH</source> (<publisher-loc>Dresden</publisher-loc>), <fpage>2784</fpage>&#x02013;<lpage>2788</lpage>.</citation></ref>
<ref id="B93">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Saleiro</surname> <given-names>P.</given-names></name> <name><surname>Kuester</surname> <given-names>B.</given-names></name> <name><surname>Hinkson</surname> <given-names>L.</given-names></name> <name><surname>London</surname> <given-names>J.</given-names></name> <name><surname>Stevens</surname> <given-names>A.</given-names></name> <name><surname>Anisfeld</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Aequitas: a bias and fairness audit toolkit</article-title>. <source>arXiv</source> 1811.05577.</citation></ref>
<ref id="B94">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Samek</surname> <given-names>W.</given-names></name> <name><surname>Wiegand</surname> <given-names>T.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>K.</given-names></name></person-group> (<year>2017</year>). <article-title>Explainable artificial intelligence: understanding, visualizing and interpreting deep learning models</article-title>. <source>arXiv</source> abs/1708.08296.</citation></ref>
<ref id="B95">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schembera</surname> <given-names>B.</given-names></name> <name><surname>Dur&#x000E1;n</surname> <given-names>J. M.</given-names></name></person-group> (<year>2020</year>). <article-title>Dark data as the new challenge for big data science and the introduction of the scientific data officer</article-title>. <source>Philos. Technol</source>. <volume>33</volume>, <fpage>93</fpage>&#x02013;<lpage>115</lpage>. <pub-id pub-id-type="doi">10.1007/s13347-019-00346-x</pub-id></citation></ref>
<ref id="B96">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schlagwein</surname> <given-names>D.</given-names></name> <name><surname>Cecez-Kecmanovic</surname> <given-names>D.</given-names></name> <name><surname>Hanckel</surname> <given-names>B.</given-names></name></person-group> (<year>2019</year>). <article-title>Ethical norms and issues in crowdsourcing practices: a Habermasian analysis</article-title>. <source>Inform. Syst. J</source>. <volume>29</volume>, <fpage>811</fpage>&#x02013;<lpage>837</lpage>. <pub-id pub-id-type="doi">10.1111/isj.12227</pub-id></citation></ref>
<ref id="B97">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schneider</surname> <given-names>D. F.</given-names></name></person-group> (<year>2020</year>). <article-title>Machine learning and artificial intelligence</article-title>, in <source>Health Services Research</source> eds <person-group person-group-type="editor"><name><surname>Dimick</surname> <given-names>J.</given-names></name> <name><surname>Lubitz</surname> <given-names>C.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>155</fpage>&#x02013;<lpage>168</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-28357-5_14</pub-id></citation></ref>
<ref id="B98">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Schuller</surname> <given-names>B. W.</given-names></name> <name><surname>Steidl</surname> <given-names>S.</given-names></name> <name><surname>Batliner</surname> <given-names>A.</given-names></name> <name><surname>Vinciarelli</surname> <given-names>A.</given-names></name> <name><surname>Scherer</surname> <given-names>K. R.</given-names></name> <name><surname>Ringeval</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>The INTERSPEECH 2013 computational paralinguistics challenge: social signals, conflict, emotion, autism</article-title>, in <source>Proceedings of INTERSPEECH</source> (<publisher-loc>Lyon</publisher-loc>), <fpage>148</fpage>&#x02013;<lpage>152</lpage>.</citation></ref>
<ref id="B99">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Sch&#x000FC;tt</surname> <given-names>K. T.</given-names></name> <name><surname>Gastegger</surname> <given-names>M.</given-names></name> <name><surname>Tkatchenko</surname> <given-names>A.</given-names></name> <name><surname>M&#x000FC;ller</surname> <given-names>K.-R.</given-names></name></person-group> (<year>2019</year>). <article-title>Quantum-chemical insights from interpretable atomistic neural networks</article-title>, in <source>Explainable AI: Interpreting, Explaining and Visualizing Deep Learning</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>311</fpage>&#x02013;<lpage>330</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-28954-6_17</pub-id></citation></ref>
<ref id="B100">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Setia</surname> <given-names>P. K.</given-names></name> <name><surname>Tillem</surname> <given-names>G.</given-names></name> <name><surname>Erkin</surname> <given-names>Z.</given-names></name></person-group> (<year>2019</year>). <article-title>Private data aggregation in decentralized networks</article-title>, in <source>2019 7th International Istanbul Smart Grids and Cities Congress and Fair (ICSG)</source> (<publisher-loc>Istanbul</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>76</fpage>&#x02013;<lpage>80</lpage>. <pub-id pub-id-type="doi">10.1109/SGCF.2019.8782377</pub-id></citation></ref>
<ref id="B101">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Settles</surname> <given-names>B.</given-names></name> <name><surname>Craven</surname> <given-names>M.</given-names></name> <name><surname>Friedland</surname> <given-names>L.</given-names></name></person-group> (<year>2008</year>). <article-title>Active learning with real annotation costs</article-title>, in <source>Proceedings of the NIPS Workshop on Cost-Sensitive Learning</source> (<publisher-loc>Vancouver, CA</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>10</lpage>.</citation></ref>
<ref id="B102">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Simon</surname> <given-names>M.</given-names></name> <name><surname>Rodner</surname> <given-names>E.</given-names></name> <name><surname>Denzler</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>Imagenet pre-trained models with batch normalization</article-title>. <source>arXiv</source> 1612.01452.</citation></ref>
<ref id="B103">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Singh</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <source>Why Enterprises Need to Focus on Decentralized AI</source>.</citation></ref>
<ref id="B104">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Smith</surname> <given-names>C. J.</given-names></name></person-group> (<year>2019</year>). <article-title>Transhumanism and distributed ledger technologies</article-title>, in <source>The Transhumanism Handbook</source> ed <person-group person-group-type="editor"><name><surname>Lee</surname> <given-names>N.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>529</fpage>&#x02013;<lpage>531</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-16920-6_34</pub-id></citation></ref>
<ref id="B105">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stahl</surname> <given-names>B.</given-names></name> <name><surname>Wright</surname> <given-names>D.</given-names></name></person-group> (<year>2018</year>). <article-title>Ethics and privacy in ai and big data: Implementing responsible research and innovation</article-title>. <source>IEEE Security Privacy</source> <volume>16</volume>, <fpage>26</fpage>&#x02013;<lpage>33</lpage>. <pub-id pub-id-type="doi">10.1109/MSP.2018.2701164</pub-id></citation></ref>
<ref id="B106">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stappen</surname> <given-names>L.</given-names></name> <name><surname>Baird</surname> <given-names>A.</given-names></name> <name><surname>Rizos</surname> <given-names>G.</given-names></name> <name><surname>Tzirakis</surname> <given-names>P.</given-names></name> <name><surname>Du</surname> <given-names>X.</given-names></name> <name><surname>Hafner</surname> <given-names>F.</given-names></name> <etal/></person-group>. (<year>2020</year>). <source>Muse 2020-The First International Multimodal Sentiment Analysis in Real-Life Media Challenge and Workshop</source>.</citation></ref>
<ref id="B107">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Stark</surname> <given-names>T. H.</given-names></name></person-group> (<year>2015</year>). <article-title>Understanding the selection bias: social network processes and the effect of prejudice on the avoidance of outgroup friends</article-title>. <source>Soc. Psychol. Q</source>. <volume>78</volume>, <fpage>127</fpage>&#x02013;<lpage>150</lpage>. <pub-id pub-id-type="doi">10.1177/0190272514565252</pub-id></citation></ref>
<ref id="B108">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sueur</surname> <given-names>C.</given-names></name> <name><surname>Deneubourg</surname> <given-names>J.-L.</given-names></name> <name><surname>Petit</surname> <given-names>O.</given-names></name></person-group> (<year>2012</year>). <article-title>From social network (centralized vs. decentralized) to collective decision-making (unshared vs. shared consensus)</article-title>. <source>PLoS ONE</source> <volume>7</volume>:<fpage>e0032566</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0032566</pub-id><pub-id pub-id-type="pmid">22393416</pub-id></citation></ref>
<ref id="B109">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Swan</surname> <given-names>M.</given-names></name></person-group> (<year>2015</year>). <article-title>Blockchain thinking: the brain as a decentralized autonomous corporation</article-title>. <source>IEEE Technol. Soc. Mag</source>. <volume>34</volume>, <fpage>41</fpage>&#x02013;<lpage>52</lpage>. <pub-id pub-id-type="doi">10.1109/MTS.2015.2494358</pub-id></citation></ref>
<ref id="B110">
<citation citation-type="book"><person-group person-group-type="author"><collab>The-European-Commission</collab></person-group> (<year>2019</year>). <source>The 2018 Reform of EU Data Protection Rules</source>. <publisher-loc>London</publisher-loc>: <publisher-name>EU</publisher-name>.</citation></ref>
<ref id="B111">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tjoa</surname> <given-names>E.</given-names></name> <name><surname>Guan</surname> <given-names>C.</given-names></name></person-group> (<year>2019</year>). <article-title>A survey on explainable artificial intelligence (XAI): towards medical XAI</article-title>. <source>arXiv</source> 1907.07374.</citation></ref>
<ref id="B112">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Trajanov</surname> <given-names>D.</given-names></name> <name><surname>Zdraveski</surname> <given-names>V.</given-names></name> <name><surname>Stojanov</surname> <given-names>R.</given-names></name> <name><surname>Kocarev</surname> <given-names>L.</given-names></name></person-group> (<year>2018</year>). <article-title>Dark data in internet of things (IOT): challenges and opportunities</article-title>, in <source>7th Small Systems Simulation Symposium</source> (<publisher-loc>Ni&#x00161;</publisher-loc>), <fpage>1</fpage>&#x02013;<lpage>8</lpage>.</citation></ref>
<ref id="B113">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Trindel</surname> <given-names>K.</given-names></name> <name><surname>Polli</surname> <given-names>F.</given-names></name> <name><surname>Glazebrook</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>Using technology to increase fairness in hiring</article-title>, in <source>What Works</source>? (<publisher-loc>Amherst, MA</publisher-loc>), <fpage>30</fpage>.</citation></ref>
<ref id="B114">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>van Otterlo</surname> <given-names>M.</given-names></name></person-group> (<year>2018</year>). <article-title>Gatekeeping algorithms with human ethical bias: the ethics of algorithms in archives, libraries and society</article-title>. <source>arXiv</source> abs/1801.01705.</citation></ref>
<ref id="B115">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Vellido</surname> <given-names>A.</given-names></name> <name><surname>Mart&#x000ED;n-Guerrero</surname> <given-names>J. D.</given-names></name> <name><surname>Lisboa</surname> <given-names>P. J. G.</given-names></name></person-group> (<year>2012</year>). <article-title>Making machine learning models interpretable</article-title>, in <source>Proceedings of European Symposium on Artificial Neural Networks</source> (<publisher-loc>Bruges</publisher-loc>), <fpage>163</fpage>&#x02013;<lpage>172</lpage>.</citation></ref>
<ref id="B116">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vidgen</surname> <given-names>B.</given-names></name> <name><surname>Yasseri</surname> <given-names>T.</given-names></name></person-group> (<year>2016</year>). <article-title><italic>P</italic>-values: misunderstood and misused</article-title>. <source>arXiv</source> abs/1601.06805. <pub-id pub-id-type="doi">10.3389/fphy.2016.00006</pub-id></citation></ref>
<ref id="B117">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>T.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name> <name><surname>Yatskar</surname> <given-names>M.</given-names></name> <name><surname>Chang</surname> <given-names>K.-W.</given-names></name> <name><surname>Ordonez</surname> <given-names>V.</given-names></name></person-group> (<year>2019a</year>). <article-title>Balanced datasets are not enough: estimating and mitigating gender bias in deep image representations</article-title>, in <source>Proceedings of the IEEE International Conference on Computer Vision</source> (<publisher-loc>Brighton</publisher-loc>), <fpage>5310</fpage>&#x02013;<lpage>5319</lpage>. <pub-id pub-id-type="doi">10.1109/ICCV.2019.00541</pub-id></citation></ref>
<ref id="B118">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>W.</given-names></name> <name><surname>Song</surname> <given-names>H.</given-names></name> <name><surname>Zhao</surname> <given-names>S.</given-names></name> <name><surname>Shen</surname> <given-names>J.</given-names></name> <name><surname>Zhao</surname> <given-names>S.</given-names></name> <name><surname>Hoi</surname> <given-names>S. C.</given-names></name> <etal/></person-group>. (<year>2019b</year>). <article-title>Learning unsupervised video object segmentation through visual attention</article-title>, in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Ithaca, NY</publisher-loc>), <fpage>3064</fpage>&#x02013;<lpage>3074</lpage>. <pub-id pub-id-type="doi">10.1109/CVPR.2019.00318</pub-id><pub-id pub-id-type="pmid">31940522</pub-id></citation></ref>
<ref id="B119">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Mendez</surname> <given-names>A. E. M.</given-names></name> <name><surname>Cartwright</surname> <given-names>M.</given-names></name> <name><surname>Bello</surname> <given-names>J. P.</given-names></name></person-group> (<year>2019c</year>). <article-title>Active learning for efficient audio annotation and classification with a large amount of unlabeled data</article-title>, in <source>ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>880</fpage>&#x02013;<lpage>884</lpage>. <pub-id pub-id-type="doi">10.1109/ICASSP.2019.8683063</pub-id></citation></ref>
<ref id="B120">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Yao</surname> <given-names>Q.</given-names></name></person-group> (<year>2019</year>). <article-title>Few-shot learning: a survey</article-title>. <source>arXiv</source> 1904.05046.</citation></ref>
<ref id="B121">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Liu</surname> <given-names>K.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Zhu</surname> <given-names>Y.</given-names></name> <name><surname>Zhang</surname> <given-names>Y.</given-names></name></person-group> (<year>2019d</year>). <article-title>Various frameworks and libraries of machine learning and deep learning: a survey</article-title>. <source>Archiv. Comput. Methods Eng</source>. <fpage>1</fpage>&#x02013;<lpage>24</lpage>. <pub-id pub-id-type="doi">10.1007/s11831-018-09312-w</pub-id></citation></ref>
<ref id="B122">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Waterhouse Cooper</surname> <given-names>P.</given-names></name></person-group> (<year>2019</year>). <source>Responsible AI Framework. PwC</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.pwc.co.uk/services/risk-assurance/insights/accelerating-innovation-through-responsible-ai/responsible-ai-framework.html">https://www.pwc.co.uk/services/risk-assurance/insights/accelerating-innovation-through-responsible-ai/responsible-ai-framework.html</ext-link></citation></ref>
<ref id="B123">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Westphal</surname> <given-names>P.</given-names></name> <name><surname>B&#x000FC;hmann</surname> <given-names>L.</given-names></name> <name><surname>Bin</surname> <given-names>S.</given-names></name> <name><surname>Jabeen</surname> <given-names>H.</given-names></name> <name><surname>Lehmann</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <source>SML-Bench-A Benchmarking Framework for Structured Machine Learning</source>. <publisher-loc>Amsterdam</publisher-loc>: <publisher-name>Semantic Web</publisher-name>.</citation></ref>
<ref id="B124">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>Q.</given-names></name> <name><surname>Liu</surname> <given-names>Y.</given-names></name> <name><surname>Cheng</surname> <given-names>Y.</given-names></name> <name><surname>Kang</surname> <given-names>Y.</given-names></name> <name><surname>Chen</surname> <given-names>T.</given-names></name> <name><surname>Yu</surname> <given-names>H.</given-names></name></person-group> (<year>2019</year>). <article-title>Federated learning</article-title>. <source>Synth. Lect. Artif. Intell. Mach. Learn</source>. <volume>13</volume>, <fpage>1</fpage>&#x02013;<lpage>207</lpage>. <pub-id pub-id-type="doi">10.2200/S00960ED2V01Y201910AIM043</pub-id><pub-id pub-id-type="pmid">30036402</pub-id></citation></ref>
<ref id="B125">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Yavuz</surname> <given-names>C.</given-names></name></person-group> (<year>2019</year>). <source>Machine Bias: Artificial Intelligence and Discrimination</source>. (<publisher-name>SSRN</publisher-name>).</citation></ref>
<ref id="B126">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zafar</surname> <given-names>S.</given-names></name> <name><surname>Irum</surname> <given-names>N.</given-names></name> <name><surname>Arshad</surname> <given-names>S.</given-names></name> <name><surname>Nawaz</surname> <given-names>T.</given-names></name></person-group> (<year>2019</year>). <article-title>Spam user detection through deceptive images in big data</article-title>, in <source>Recent Trends and Advances in Wireless and IoT-Enabled Networks</source> eds <person-group person-group-type="editor"><name><surname>Jan</surname> <given-names>M.</given-names></name> <name><surname>Khan</surname> <given-names>F.</given-names></name> <name><surname>Alam</surname> <given-names>M.</given-names></name></person-group> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>311</fpage>&#x02013;<lpage>327</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-99966-1_28</pub-id></citation></ref>
<ref id="B127">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zeiler</surname> <given-names>M. D.</given-names></name> <name><surname>Fergus</surname> <given-names>R.</given-names></name></person-group> (<year>2014</year>). <article-title>Visualizing and understanding convolutional networks</article-title>, in <source>European Conference on Computer Vision</source> (<publisher-loc>Zurich</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>818</fpage>&#x02013;<lpage>833</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-10590-1_53</pub-id></citation></ref>
<ref id="B128">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Q.</given-names></name> <name><surname>Hua</surname> <given-names>G.</given-names></name></person-group> (<year>2015</year>). <article-title>Multi-view visual recognition of imperfect testing data</article-title>, in <source>Proceedings of the 23rd ACM International Conference on Multimedia</source> (<publisher-loc>Brisbane</publisher-loc>), <fpage>561</fpage>&#x02013;<lpage>570</lpage>. <pub-id pub-id-type="doi">10.1145/2733373.2806224</pub-id></citation></ref>
<ref id="B129">
<citation citation-type="book"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>Y.</given-names></name> <name><surname>Lee</surname> <given-names>R.</given-names></name> <name><surname>Madievski</surname> <given-names>A.</given-names></name></person-group> (<year>2001</year>). <article-title>Confidence measure (CM) estimation for large vocabulary speaker-independent continuous speech recognition system</article-title>, in <source>Seventh European Conference on Speech Communication and Technology</source> (<publisher-loc>Aalborg</publisher-loc>).</citation></ref>
<ref id="B130">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zheng</surname> <given-names>Z.</given-names></name> <name><surname>Xie</surname> <given-names>S.</given-names></name> <name><surname>Dai</surname> <given-names>H.-N.</given-names></name> <name><surname>Chen</surname> <given-names>X.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name></person-group> (<year>2018</year>). <article-title>Blockchain challenges and opportunities: a survey</article-title>. <source>Int. J. Web Grid Serv</source>. <volume>14</volume>, <fpage>352</fpage>&#x02013;<lpage>375</lpage>. <pub-id pub-id-type="doi">10.1504/IJWGS.2018.095647</pub-id></citation></ref>
<ref id="B131">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zliobaite</surname> <given-names>I.</given-names></name></person-group> (<year>2015</year>). <article-title>A survey on measuring indirect discrimination in machine learning</article-title>. <source>arXiv</source> abs/1511.00148.</citation></ref>
</ref-list>
<fn-group>
<fn id="fn0001"><p><sup>1</sup>DeepMind : <ext-link ext-link-type="uri" xlink:href="https://deepmind.com/applied/deepmind-ethics-society/">https://deepmind.com/applied/deepmind-ethics-society/</ext-link>. Partnership on AI: <ext-link ext-link-type="uri" xlink:href="https://www.partnershiponai.org/board-of-directors/">https://www.partnershiponai.org/board-of-directors/</ext-link>.</p></fn>
<fn id="fn0002"><p><sup>2</sup>Google: <ext-link ext-link-type="uri" xlink:href="https://developers.google.com/machine-learning/fairness-overview/">https://developers.google.com/machine-learning/fairness-overview/</ext-link>.</p></fn>
<fn id="fn0003"><p><sup>3</sup>IBM AI Explainability 360 Toolkit: <ext-link ext-link-type="uri" xlink:href="https://www.research.ibm.com/artificial-intelligence/trusted-ai/">https://www.research.ibm.com/artificial-intelligence/trusted-ai/</ext-link>.</p></fn>
<fn id="fn0004"><p><sup>4</sup>Effect.AI: <ext-link ext-link-type="uri" xlink:href="https://effect.ai/">https://effect.ai/</ext-link>., SingularityNET <ext-link ext-link-type="uri" xlink:href="https://singularitynet.io/">https://singularitynet.io/</ext-link>.</p></fn>
<fn id="fn0005"><p><sup>5</sup>Decentralized AI Alliance: <ext-link ext-link-type="uri" xlink:href="https://daia.foundation/">https://daia.foundation/</ext-link>.</p></fn>
<fn id="fn0006"><p><sup>6</sup>The Algorithm Justice League: <ext-link ext-link-type="uri" xlink:href="https://www.ajlunited.org/">https://www.ajlunited.org/</ext-link>, and the AI NOW institute <ext-link ext-link-type="uri" xlink:href="https://ainowinstitute.org/">https://ainowinstitute.org/</ext-link>.</p></fn>
<fn id="fn0007"><p><sup>7</sup>IBM&#x02014;Building Trust in AI: <ext-link ext-link-type="uri" xlink:href="https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/building-trust-in-ai.html">https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/building-trust-in-ai.html</ext-link>.</p></fn>
</fn-group>
<fn-group>
<fn fn-type="financial-disclosure"><p><bold>Funding.</bold> This work was funded by the Bavarian State Ministry of Education, Science and the Arts in the framework of the Centre Digitisation.Bavaria (ZD.B).</p>
</fn>
</fn-group>
</back>
</article>