<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="brief-report" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Big Data</journal-id>
<journal-title>Frontiers in Big Data</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Big Data</abbrev-journal-title>
<issn pub-type="epub">2624-909X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">519957</article-id>
<article-id pub-id-type="doi">10.3389/fdata.2020.519957</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Big Data</subject>
<subj-group>
<subject>Conceptual Analysis</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Algorithmic Accountability in Context. Socio-Technical Perspectives on Structural Causal Models</article-title>
<alt-title alt-title-type="left-running-head">Poechhacker and Kacianka</alt-title>
<alt-title alt-title-type="right-running-head">Algorithmic Accountability in Context</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Poechhacker</surname>
<given-names>Nikolaus</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="http://loop.frontiersin.org/people/710345/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Kacianka</surname>
<given-names>Severin</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:href="http://loop.frontiersin.org/people/1057349/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<label>
<sup>1</sup>
</label>Institute for Public Law and Political Science, University of Graz, <addr-line>Graz</addr-line>, <country>Austria</country>
</aff>
<aff id="aff2">
<label>
<sup>2</sup>
</label>Department of Computer Science, Chair of Software and Systems Engineering, Technical University of Munich, <addr-line>Munich</addr-line>, <country>Germany</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/660454/overview">Katja Mayer</ext-link>, University of Vienna, Austria</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/887431/overview">Matthias Karmasin</ext-link>, University of Klagenfurt, Austria</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/541022/overview">Jianwu Wang</ext-link>, University of Maryland, Baltimore County, United States</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Nikolaus Poechhacker, <email>nikolaus.poechhacker@uni-graz.at</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Data Mining and Management, a section of the journal Frontiers in Big Data</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>29</day>
<month>01</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2020</year>
</pub-date>
<volume>3</volume>
<elocation-id>519957</elocation-id>
<history>
<date date-type="received">
<day>13</day>
<month>12</month>
<year>2019</year>
</date>
<date date-type="accepted">
<day>08</day>
<month>12</month>
<year>2020</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2021 Poechhacker and Kacianka.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Poechhacker and Kacianka</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>The increasing use of automated decision making (ADM) and machine learning sparked an ongoing discussion about algorithmic accountability. Within computer science, a new form of producing accountability has been discussed recently: causality as an expression of algorithmic accountability, formalized using structural causal models (SCMs). However, causality itself is a concept that needs further exploration. Therefore, in this contribution we confront ideas of SCMs with insights from social theory, more explicitly pragmatism, and argue that formal expressions of causality must always be seen in the context of the social system in which they are applied. This results in the formulation of further research questions and directions.</p>
</abstract>
<kwd-group>
<kwd>algorithms</kwd>
<kwd>structural causal model</kwd>
<kwd>pragmatism</kwd>
<kwd>accountability</kwd>
<kwd>causality</kwd>
<kwd>social theory</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Introduction</title>
<p>The rise of machine learning and automated decision making (ADM) affects many domains of social life. They have been used in court decisions (<xref ref-type="bibr" rid="B2">Angwin et al., 2016</xref>), policing (<xref ref-type="bibr" rid="B25">Kaufmann et al., 2019</xref>), hiring practices, and many more. Negative experiences with these systems led to a scholarly discussion in which formulations like <italic>Weapons of Math Destruction</italic> (<xref ref-type="bibr" rid="B40">O&#x2019;Neil, 2016</xref>) or <italic>Algorithms of Oppression</italic> (<xref ref-type="bibr" rid="B39">Noble, 2018</xref>) have been used. As such, the power of algorithms and how to deal with these entities has become a major point of discussion (<xref ref-type="bibr" rid="B55">Ziewitz, 2016</xref>; <xref ref-type="bibr" rid="B3">Beer, 2017</xref>)&#x2014;even creating its own field of <italic>critical algorithm studies</italic> (e.g., <xref ref-type="bibr" rid="B17">Gillespie, 2014</xref>; <xref ref-type="bibr" rid="B45">Seaver, 2018</xref>). Because of the intensifying application of these systems in various social domains, issues of fairness, (in)justice and power relations have become the focus of attention, especially in the form of bias (<xref ref-type="bibr" rid="B16">Friedman and Helen, 1996</xref>; <xref ref-type="bibr" rid="B7">Bozdag, 2013</xref>; <xref ref-type="bibr" rid="B9">Crawford, 2013</xref>). As a result, &#x201c;algorithmic accountability&#x201d; has been suggested as a means (e.g., <xref ref-type="bibr" rid="B12">Diakopoulos 2015</xref>) to mitigate the risks of bias and inequalities produced by algorithmic systems (<xref ref-type="bibr" rid="B50">Veale and Binns, 2017</xref>).</p>
<p>Accountability, however, is an ambiguous term in itself and has never been clearly defined in computer science. <xref ref-type="bibr" rid="B22">Kacianka et al. (2017)</xref> found that most implementations of accountability do not use a peer-reviewed definition of accountability and either provide no definition at all or rely on a loose dictionary definition. Despite being used as an umbrella term, accountability has gained much prominence within the academic discussion, most prominently at the ACM Conference on Fairness, Accountability, and Transparency. There, accountability is understood as &#x201c;public accountability&#x201d; and mostly follows the understanding of <xref ref-type="bibr" rid="B5">Bovens (2007</xref>, 9), who writes that &#x201c;[t]he most concise description of accountability would be: &#x2018;the obligation to explain and justify conduct&#x2019;&#x201d;, although he also cautions that &#x201c;[a]s a concept (...) &#x2018;accountability&#x2019; is rather elusive. It has become a hurrah-word, like &#x2018;learning&#x2019;, &#x2018;responsibility&#x2019;, or &#x2018;solidarity&#x2019;, to which no one can object&#x201d; (<xref ref-type="bibr" rid="B5">Bovens, 2007</xref>, 9). This also seems to be true for algorithmic accountability. <xref ref-type="bibr" rid="B54">Wieringa (2020</xref>, 10) conducted a systematic literature review of the field of algorithmic accountability through the lens of Bovens. She found that the &#x201c;term &#x2018;algorithmic accountability&#x2019; is inherently vague&#x201d; and derived the following definition of algorithmic accountability, using the terminology and ideas presented by Bovens:</p>
<p>&#x201c;Algorithmic accountability concerns a networked account for a socio-technical algorithmic system, following the various stages of the system&#x2019;s lifecycle. In this accountability relationship, multiple actors (e.g. decision makers, developers, users) have the obligation to explain and justify their use, design, and/or decisions of/concerning the system and the subsequent effects of that conduct. As different kinds of actors are in play during the life of the system, they may be held to account by various types of fora (e.g. internal/external to the organization, formal/informal), either for particular aspects of the system (i.e. a modular account) or for the entirety of the system (i.e. an integral account). Such fora must be able to pose questions and pass judgment, after which one or several actors may face consequences. The relationship(s) between forum/fora and actor(s) departs from a particular perspective on accountability&#x201d; (<xref ref-type="bibr" rid="B54">Wieringa, 2020</xref>, 10).</p>
<p>However, it is important to note that there is not just one definition of accountability. For example, <xref ref-type="bibr" rid="B29">Lindberg (2013)</xref>, who is deeply critical of Bovens and the Utrecht school<xref ref-type="fn" rid="FN1">
<sup>1</sup>
</xref>, establishes accountability as a classical concept in which subtypes are complete instances of their parents. In psychology, <xref ref-type="bibr" rid="B18">Hall et al. (2017)</xref> give a thorough overview of the concept of <italic>felt</italic> accountability, which focuses on the feeling of the individual. Besides <italic>algorithmic</italic> accountability, the state of the art on accountability in computer science is split into three branches of research.</p>
<p>First, works building on <xref ref-type="bibr" rid="B53">Weitzner et al. (2008)</xref> use the term &#x201c;Information Accountability&#x201d; to formulate a new approach to data control measures based on the idea of detection rather than prevention. This approach does not try to prevent unauthorized data access, but aims to design systems in such a way that data access is logged and any access to data is therefore easily tracked. If data is &#x201c;leaked&#x201d;, it should be easy to identify the deviant entity and hold it accountable. Second, in the field of cryptographic protocols, <xref ref-type="bibr" rid="B28">K&#xfc;sters et al. (2010)</xref> formalized accountability and linked it to verifiability. The main challenge here is to discover entities that attempt to falsify the results of elections. To work, this requires a precise definition of the protocol, or of the allowed actions. Recent advances in accountability for cryptographic protocols have also started to investigate the use of causal reasoning for attributing blame (e.g. <xref ref-type="bibr" rid="B27">K&#xfc;nnemann et al. 2019</xref>). Third, accountability is discussed in the field of cloud computing, mainly focusing on data protection as well as accounting for resource usage (e.g. <xref ref-type="bibr" rid="B26">Ko et al. 2011</xref>).</p>
<p>The question remains how an algorithm can be held accountable for its &#x201c;actions&#x201d;. Algorithms are often discussed in terms of opaque and powerful black boxes (<xref ref-type="bibr" rid="B42">Pasquale, 2015</xref>), which has resulted in the often-formulated demand for algorithmic transparency. Yet it remains unclear how to implement algorithmic transparency and what its benefits would be (<xref ref-type="bibr" rid="B1">Ananny and Crawford, 2018</xref>). Accountability would require the translation of expert knowledge, such as algorithmic techniques, into accounts that are understandable to a broader audience, a task that is not easily achieved, especially when confronted with machine learning applications (<xref ref-type="bibr" rid="B8">Burrell, 2016</xref>; <xref ref-type="bibr" rid="B38">Neyland, 2016</xref>). Additionally, the ideal of transparency often collides with claims of intellectual property rights (<xref ref-type="bibr" rid="B8">Burrell, 2016</xref>). Thus, alternative approaches to producing and thinking about accountability are needed.</p>
<p>In the recent debate, interpretability and explainability are discussed as alternatives to total transparency of algorithmic systems. <xref ref-type="bibr" rid="B13">Doshi-Velez et al. (2017)</xref>, for example, point out the importance of explanations for producing accountability. Instead of demanding fully transparent systems, they argue that it suffices to know &#x201c;how certain factors were used to come to the outcome in a specific situation&#x201d; (<xref ref-type="bibr" rid="B13">Doshi-Velez et al., 2017</xref>, 7). This does not require full disclosure of the internal workings of an algorithmic system, but can be achieved by a statistical input/output analysis that results in a simplified model of human-readable rules explaining the observed data points. In this way, the explanation system is an empirical reconstruction of the algorithm&#x2019;s behavior. Such an explanation is not a one-to-one reconstruction of the internal workings, but an external model that finds interpretable rules to explain the algorithm&#x2019;s actions.</p>
<p>Causality is important for explanations and for achieving fairness not only in legal settings (<xref ref-type="bibr" rid="B31">Madras et al., 2019</xref>). <xref ref-type="bibr" rid="B36">Mittelstadt et al. (2016)</xref> argue for the relevance of causality and causal knowledge in ethical considerations regarding AI and machine learning. <xref ref-type="bibr" rid="B51">Wachter et al. (2017a)</xref> extended this approach by suggesting counterfactuals. Counterfactuals are deviations from observed input data that are used to reconstruct relations between input and output that go beyond the actual application. In this way, explanations can be reconstructed in the form of differences in the input data that make a difference in the results, e.g. varying variables such as race or gender to see whether the results change (<xref ref-type="bibr" rid="B37">Mittelstadt et al., 2019</xref>). Further, <xref ref-type="bibr" rid="B52">Wachter et al. (2017b)</xref> argue that counterfactual explanations meet the legal requirements formulated under the GDPR. However, formulating the potential influence of input data points on the behavior of agents requires a post-hoc explanation of causality (<xref ref-type="bibr" rid="B35">Miller, 2019</xref>; <xref ref-type="bibr" rid="B37">Mittelstadt et al., 2019</xref>). If we formulate rules describing the impact of input data on the classification of an algorithmic system, we are essentially modeling a causal relationship to grasp observed behavior that goes beyond mere correlation. <xref ref-type="bibr" rid="B31">Madras et al. (2019)</xref> even argue that counterfactual causal models are able to produce fairer systems, as the influence of hidden confounding variables can be discovered. As a result, causality seems to be a promising approach to tackle issues of algorithmic accountability. This has led computer science scholars to explore the formal expression of algorithmic accountability as a structural causal model (SCM) (<xref ref-type="bibr" rid="B24">Kacianka and Pretschner, 2018</xref>). The underlying idea is that accountability always requires causality (<xref ref-type="bibr" rid="B23">Kacianka et al., 2020</xref>). This approach extends the argument for causality as an essential feature beyond the notion of explainability. It assumes that a person cannot be held accountable for actions they did not cause, thereby referring to the social and political function of causality in human reasoning (see also <xref ref-type="bibr" rid="B35">Miller, 2019</xref>; <xref ref-type="bibr" rid="B37">Mittelstadt et al., 2019</xref>). In doing so, it also signifies the importance of the underlying models of causality and the process of their (social) construction.</p>
<p>SCMs (<xref ref-type="bibr" rid="B43">Pearl and Mackenzie, 2018</xref>) provide a human-readable graphical model, while also offering mathematical tools to analyze and reason over them. For example, take two definitions of accountability: one states that a system is accountable if its logs can be reviewed by some third party; the second defines an elaborate framework of checks and balances in which every action of a computer system is reviewed by a human principal. Both are valid definitions of accountability and in line with recent literature. The first example is similar to the notion of felt accountability used in psychology (<xref ref-type="bibr" rid="B18">Hall et al., 2017</xref>), while the second resembles the Responsible-Accountable-Consulted-Informed (RACI) framework used in the organizational sciences (<xref ref-type="bibr" rid="B46">Smith et al., 2005</xref>). Both models can be expressed as an SCM and matched to a technical system. If a system takes an undesired action, its SCM will allow us to understand why this undesired action happened. If the SCM corresponds to an accountability definition, we can also see who is to be held accountable for the system&#x2019;s undesired action.</p>
<p>Yet an open question is how accountability, expressed as an SCM, can take into account the social structure in which the system is placed, i.e. which forms of accounting (<xref ref-type="bibr" rid="B38">Neyland, 2016</xref>) for its actions are compatible with the practices (re-)produced within the social domain. We therefore confront ideas of SCMs with insights from the social sciences and humanities and argue that formal expressions of causality must always be seen in the context of the social system in which they are applied. The argument starts from the observation that SCMs are being discussed as a means for producing algorithmic accountability and situates this observation in an interdisciplinary perspective. In this contribution, we will first introduce how causality can be expressed in SCMs and then contrast the method with concepts from interactionist theories of social science, more specifically pragmatism, to theorize the interaction effects between causal models of algorithms and the social interaction order in which they are placed.</p>
</sec>
<sec id="s2">
<title>From Causality to Accountability: The Computer Science Approach</title>
<p>While &#x201c;correlation does not imply causation&#x201d; is a well-known mantra, hardly anyone can give a mathematical formalization of causality. Recently, <xref ref-type="bibr" rid="B43">Pearl and Mackenzie (2018)</xref> put forward a formalization of causality that extends structural equation models to SCMs, but the expression of causality comes with its own methodological challenges. The gold standard for determining causality has been the randomized controlled trial (RCT), popularized by R. A. Fisher (for a historical perspective see <xref ref-type="bibr" rid="B43">Pearl and Mackenzie, 2018</xref>, 139). In such experiments, the investigators try to create a stable environment and manipulate only a single variable (ceteris paribus). The fundamental downside of RCTs is that they are often infeasible, unethical or prohibitively expensive (<xref ref-type="bibr" rid="B43">Pearl and Mackenzie, 2018</xref>).</p>
<p>The alternatives to RCTs are observational studies, in which researchers gather data and try to understand some (causal) process. In these settings, researchers cannot directly manipulate any factors and are therefore restricted to recorded data, which makes the reconstruction of causal relations problematic. Pearl was the first to show that causality can, under certain assumptions, be established even with observational data (<xref ref-type="bibr" rid="B43">Pearl and Mackenzie, 2018</xref>). According to Pearl, SCMs allow expressing an understanding of a process&#x2019;s causal structure, which can be formalized as follows:</p>
<p>Formally, an SCM is a tuple M &#x3d; (U, V, F), where U is a set of exogenous variables, V is a set of endogenous variables, and F associates with each variable X &#x2208; V a function that determines the value of X given the values of all other variables.</p>
<p>It is noteworthy that the universe of discourse is split into two sets of variables: exogenous variables, which are taken as given and for which no explanation is provided, and endogenous variables, which are considered relevant for the causal relations. Additionally, a given understanding of the causal relation is modeled by the set of functions F that describe the mathematical relations between the variables. The absence of a causal relation between two variables in such a model expresses the assumption that one variable cannot influence the other. In the graphical model corresponding to the SCM, this is shown by the absence of an arrow between the two variables.</p>
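<p>To make the definition concrete, the tuple M &#x3d; (U, V, F) can be sketched in a few lines of Python. This is an illustrative sketch by analogy, not the authors&#x2019; implementation: the variable names and functions are assumptions that mirror the simple model of Figure 1, where X is causally linked to Y while B is not.</p>

```python
# Minimal sketch of an SCM M = (U, V, F): exogenous variables U are given,
# each endogenous variable in V gets a structural function from F.

def evaluate(exogenous, functions):
    """Compute endogenous values from exogenous inputs and structural functions.
    Assumes the functions are listed in causal order."""
    values = dict(exogenous)
    for var, f in functions.items():
        values[var] = f(values)
    return values

# Exogenous variables U: background factors for which no explanation is given.
U = {"u_x": 1, "u_b": 0}

# Structural functions F for the endogenous variables V = {X, B, Y}.
F = {
    "X": lambda v: v["u_x"],    # X is driven by its exogenous cause
    "B": lambda v: v["u_b"],    # B has no arrow into Y
    "Y": lambda v: 2 * v["X"],  # Y depends on X only
}

print(evaluate(U, F))  # {'u_x': 1, 'u_b': 0, 'X': 1, 'B': 0, 'Y': 2}

# An intervention do(B := 1) replaces B's structural function; Y is unchanged,
# reflecting the absence of an arrow from B to Y.
F_do = dict(F, B=lambda v: 1)
print(evaluate(U, F_do))
```

<p>The absence of an arrow thus becomes testable: intervening on B leaves Y untouched, whereas intervening on X would not.</p>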
<p>In <xref ref-type="fig" rid="F1">Figure 1</xref>, one can mathematically express that B has no effect on Y, but that X is causally linked to Y. The caveat of this modeling approach is that any statement of causality depends on the underlying model. We, as experts, are forced to state our assumptions and invite others to challenge them. While there is no way to prove a causal model correct, we can use data to refute some. However, for any given model, we can most likely find an alternative model that will also explain the data.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>A simple causal model following the definitions of (<xref ref-type="bibr" rid="B43">Pearl and Mackenzie, 2018</xref>, 159) (created by the authors).</p>
</caption>
<graphic xlink:href="fdata-03-519957-g001.tif"/>
</fig>
<p>Drawing on forms of causal modeling and applying them to algorithmic systems provides us with the possibility to express the accountability of algorithmic systems in mathematical terms. Once a notion of accountability has been agreed upon, it can be expressed as an SCM. This formalization determines what data needs to be collected, making it possible to design systems in such a way that sensitive data, such as gender or race, need not be stored. In <xref ref-type="fig" rid="F1">Figure 1</xref>, for example, if B were gender and we were only interested in Y, we could show that B does not affect Y and that we therefore do not need to record B.</p>
<p>To illustrate our point, we will expand on the example given by <xref ref-type="bibr" rid="B24">Kacianka and Pretschner (2018)</xref>. They examined a prominently reported accident involving an autonomous vehicle operated by Uber in Arizona (<xref ref-type="bibr" rid="B15">Elish, 2019</xref>). In this unfortunate accident, the test vehicle was driving autonomously and had a safety driver on board. The system mis-detected a pedestrian crossing the road, and the car fatally injured her. The safety driver on board was distracted and did not manage to operate the brake in time, while the emergency braking system designed by the chassis manufacturer was disabled because it interfered with the autonomous driving capabilities of the car. In the aftermath, one of the questions asked was who was to be held accountable for the accident. This example can now be modeled as an SCM (<xref ref-type="bibr" rid="B23">Kacianka et al., 2020</xref>). The answer to who is to be held accountable ultimately depends on the causal understanding of the events. The figure below depicts three possible causal configurations:</p>
<p>
<xref ref-type="fig" rid="F2">Figure 2A</xref> shows a configuration in which the human has direct influence on the trajectory of the car. This influence is moderated by the software in <xref ref-type="fig" rid="F2">Figure 2B</xref>, and in <xref ref-type="fig" rid="F2">Figure 2C</xref> the human has no influence on the course of events at all. When talking about autonomous cars, many people will have the causal model shown in <xref ref-type="fig" rid="F2">Figure 2C</xref> in mind, while <xref ref-type="fig" rid="F2">Figure 2B</xref> or even 2A are equally valid explanations. SCMs (<xref ref-type="bibr" rid="B43">Pearl and Mackenzie, 2018</xref>), as used in <xref ref-type="fig" rid="F2">Figure 2</xref>, offer a mathematically precise way to express such causal relations. The arrows denote causal connections, while the boxes denote variables. Here, rectangular boxes represent components, and rounded boxes represent natural or juridical persons. Even if we do not specify the exact mathematical function for each relation, we can reason about some form of causality. For instance, the absence of an arrow between the <italic>Safety Driver</italic> and the <italic>Trajectory</italic> in <xref ref-type="fig" rid="F2">Figure 2C</xref> expresses that there is no causal connection between them. On the other hand, in <xref ref-type="fig" rid="F2">Figure 2A</xref>, we could specify the exact influence of the components on the <italic>Trajectory</italic>. Simplifying to a Boolean formula, where <italic>false</italic> means <italic>do nothing</italic> and <italic>true</italic> indicates a <italic>change in trajectory</italic>, the formula could be: <italic>Trajectory &#x3d; Brake or Software or Driver</italic>
</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Three possible SCMs for the Uber case (from left to right) <bold>(A)</bold> The human can take over, <bold>(B)</bold> Human Influence is moderated by the machine, <bold>(C)</bold> No human influence is possible (created by the authors).</p>
</caption>
<graphic xlink:href="fdata-03-519957-g002.tif"/>
</fig>
<p>To model that the system only brakes if both the emergency brake and the software agree, we could write: <italic>Trajectory &#x3d; (Brake and Software) or Driver</italic>
</p>
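<p>The two Boolean structural equations above can be compared directly in code. The following sketch is illustrative (function and variable names are ours, not from the cited works); it shows how the choice of structural equation changes what the model predicts in a scenario loosely resembling the Uber case, where the emergency brake would have acted but the software suppressed it and the driver did not react.</p>

```python
# Two candidate structural equations for the Trajectory variable.
# Boolean simplification from the text: False = do nothing,
# True = change in trajectory.

def trajectory_or(brake, software, driver):
    # Any single component can change the trajectory on its own.
    return brake or software or driver

def trajectory_and(brake, software, driver):
    # The car only brakes if the emergency brake AND the software agree,
    # unless the driver intervenes directly.
    return (brake and software) or driver

# Illustrative scenario: brake would fire, software suppresses it,
# driver does not react.
scenario = dict(brake=True, software=False, driver=False)

print(trajectory_or(**scenario))   # True:  the brake alone would have acted
print(trajectory_and(**scenario))  # False: braking is suppressed
```

<p>Under the first equation the accident is averted; under the second it is not. The formal machinery is identical, which underlines that the choice between the models is a substantive, not a mathematical, question.</p>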
<p>Now, once we have such a causal representation of a system, we can start looking for patterns of accountability. Following <xref ref-type="bibr" rid="B23">Kacianka et al. (2020)</xref>, we can express definitions of accountability as causal models. For example, they use the definition of <xref ref-type="bibr" rid="B29">Lindberg (2013</xref>, 209), who conceptualizes accountability as:<list list-type="order">
<list-item>
<p>An agent or institution who is to give an account (A for agent);</p>
</list-item>
<list-item>
<p>An area, responsibilities, or domain subject to accountability (D for domain);</p>
</list-item>
<list-item>
<p>An agent or institution to whom A is to give account (P for principal);</p>
</list-item>
<list-item>
<p>The right of P to require A to inform and explain/justify decisions with regard to D; and</p>
</list-item>
<list-item>
<p>The right of P to sanction A if A fails to inform and/or explain/justify decisions with regard to D.</p>
</list-item>
</list>
</p>
<p>This can be expressed as the causal model shown in <xref ref-type="fig" rid="F3">Figure 3</xref>.
</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>The causal model for the Lindberg accountability pattern; the principal is not part of the pattern. Taken from <xref ref-type="bibr" rid="B23">Kacianka et al. (2020)</xref>.</p>
</caption>
<graphic xlink:href="fdata-03-519957-g003.tif"/>
</fig>
<p>We can draw different conclusions about accountability from the graphical representations of causality. Coming back to the different models of causality in <xref ref-type="fig" rid="F2">Figure 2</xref>, we are confronted with different possibilities. We can see that in <xref ref-type="fig" rid="F2">Figure 2A</xref>, the software, the emergency brake, and the safety driver are accountable for the accident. However, looking at <xref ref-type="fig" rid="F2">Figure 2C</xref>, we can see that the safety driver is no longer connected to the pattern and thus cannot be held accountable. We want to emphasize that we do not argue for the correctness of the specific models. Both the model of the system and the model of accountability can be improved, changed and refined. Rather, we want to show that the usefulness of SCMs lies in allowing us to express these causal relationships and in offering a formalization to state our assumptions clearly. These assumptions can then be discussed and criticized, and joint models can be developed.</p>
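<p>The observation that the safety driver drops out of the pattern in Figure 2C can also be checked mechanically. The following sketch tests whether an actor is connected to the outcome by directed-path reachability in the causal graph; the edge lists are our own illustrative reading of two of the models in Figure 2, not the exact graphs of the cited works.</p>

```python
# Sketch: is an actor causally connected to the outcome?
# We answer this by checking for a directed path in the causal graph.

def reaches(graph, source, target, seen=None):
    """True if there is a directed path from source to target."""
    seen = seen if seen is not None else set()
    if source == target:
        return True
    seen.add(source)
    return any(reaches(graph, nxt, target, seen)
               for nxt in graph.get(source, []) if nxt not in seen)

model_a = {  # reading of Figure 2A: the driver acts on the trajectory directly
    "SafetyDriver": ["Trajectory"],
    "Software": ["Trajectory"],
    "Brake": ["Trajectory"],
}
model_c = {  # reading of Figure 2C: no human influence on the trajectory
    "Software": ["Trajectory"],
    "Brake": ["Trajectory"],
}

for name, model in [("A", model_a), ("C", model_c)]:
    print(name, reaches(model, "SafetyDriver", "Trajectory"))
# In model C the driver is disconnected and so cannot be held
# causally accountable under this pattern.
```

<p>The formal check is trivial once the graph is fixed; the substantive work, as argued above, lies in agreeing on which graph describes the situation.</p>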
<p>Still, taking up the warning of <xref ref-type="bibr" rid="B37">Mittelstadt et al. (2019)</xref>, such models should be used carefully. In the Uber example, the different definitions of accountability provided will result in different understandings: one could conclude that Uber was accountable, or that the driver was. While neither definition is inherently wrong on a formal level, a shared definition of accountability is needed to resolve the issue. This insight into the contingency of modeling leads to the conclusion that models of causality are so-called second-order observations. They formalize not only how a causal relation can be expressed, but must also consider how the social system experiences causalities and creates spoken or written accounts about them. As such, the accountability of algorithmic systems is not merely about making the actions of an algorithm understandable through causal reasoning, but should also address the question of toward whom such an algorithmic system should be accountable (<xref ref-type="bibr" rid="B38">Neyland, 2016</xref>). In other words, the algorithmic system must be accountable to the principals with whom it interacts in the given situation. This implies two conditions. First, the causal model for the algorithmic system needs to be aligned with the actual application in the specific social system in which it is placed. Second, the model and its assumptions must not only be accountable to the developers of the model; the accounts created by an SCM must also be interpretable by the other members of the social system. In brief: the SCM must be seen in context.</p>
</sec>
<sec id="s3">
<title>Putting Structural Causal Models Into Context</title>
<p>Causality as a concept has long been discussed in the social sciences. Thus, contrasting and confronting the notion of causality from the formal perspective of computer science with the approach of pragmatism could produce insights into the social interfaces between algorithmic accountability and social structure. As described before, different SCMs can be applied to explain given data. The question then is which models correspond with shared expectations and practical enactments of accountability, and to what extent. Coming back to the Uber example, the model could describe both the missing reaction of the driver and the action of the on-board computer not applying the brakes as causal factors for the accident. Formally, both factors might explain the accident, but they relate to different normative assumptions.</p>
<p>The multiplicity of possible explanations is thereby not unique to causal modeling in computer science and neighboring fields, but touches upon a general epistemic position and how the world is experienced by individuals and communities. An influential tradition that deals with questions of the social construction of truth and collective expectations has been American pragmatism (<xref ref-type="bibr" rid="B21">James, [1907]</xref> 2014; <xref ref-type="bibr" rid="B49">Thomas and Thomas, 1928</xref>). American pragmatism was foremost a philosophy developed in the U.S. at the beginning of the 20th century. Later it became an influential way of thinking within the social sciences, exemplified by the Chicago School of Sociology (e.g., <xref ref-type="bibr" rid="B41">Park, 1915</xref>) and Symbolic Interactionism (<xref ref-type="bibr" rid="B4">Blumer, 1986</xref>; <xref ref-type="bibr" rid="B10">Dewey, [1925]</xref> 2013). Moreover, it has been understood by its scholars as an empirical philosophy (see <xref ref-type="bibr" rid="B10">Dewey, [1925]</xref> 2013), which conveys an interesting branch of thinking that focuses on the practices and interactions of individuals and how bigger patterns and social worlds emerge from them (<xref ref-type="bibr" rid="B48">Strauss, 1978</xref>; <xref ref-type="bibr" rid="B6">Bowker and Star, 2000</xref>).</p>
<p>Two conceptions of human action are of importance when discussing causality as a mode of accountability production. First, pragmatists argue that human action is tied to problem solving. What we do, and what results from our actions, is tied to the perception of a problem that needs to be solved and to our positioning in the world. These problems are not objectively given, but are experienced and imagined as such by an individual or a group of individuals (<xref ref-type="bibr" rid="B32">Marres, 2007</xref>). Thus, when looking at patterns of communication or interaction, the question arises what problem and&#x2014;more importantly&#x2014;whose problem is solved by the observed behavior. Second, the perception of the world cannot be separated from these problem-solving activities. What is true or real is experienced in our practices, in testing and updating our assumptions about the world. Truth therefore becomes a question of practicability and &#x201c;what works&#x201d; (<xref ref-type="bibr" rid="B21">James, [1907] 2014</xref>). Nevertheless, in a given situation different perceptions and imaginations of truth can work in practice. As a result, a common vision of the world is not necessarily given, but must be actively produced through processes of socialization (<xref ref-type="bibr" rid="B49">Thomas and Thomas, 1928</xref>). This contingency of perceived reality must then be considered when talking about causality.</p>
<p>For Dewey&#x2014;an important pragmatist scholar at the beginning of the 20th century&#x2014;causality represents a sequential order of events, though he does not see causality as the result of pre-existing associations between these events (<xref ref-type="bibr" rid="B11">Dewey, [1938] 2007</xref>). Instead, the associations between these events are operational, i.e., they are constructed in a social process in order to solve a given problem of inquiry. In this perspective, the notion of causality is problematic insofar as the assumption that an event A caused an event B is in itself a reduction of an endlessly more complex situation. For each event A that we can identify, we can also identify further events that caused it, moving to ever finer-grained levels of interaction. Likewise, event B is not necessarily the end of a potentially endless chain of causation (see also <xref ref-type="bibr" rid="B47">Stone, 1994</xref>).</p>
<p>Taking our example of the Uber car accident, we could now ask for the initial event. Was it the driver, who braked too late? The system misclassifying the pedestrian? The driver starting the car, or maybe even the engineers who assembled the system? All of these events would be viable starting points for a chain of causality. Similarly, we could argue that it was not the impact of the car that killed the pedestrian, but that the impact damaged internal organs, which led to internal bleeding, which then led to insufficient oxygen supply to the brain, etc.<xref ref-type="fn" rid="FN2">
<sup>2</sup>
</xref> Reducing this complex process to a relation between, e.g., the classification and the pedestrian&#x2019;s death represents a simplification, which Dewey termed &#x201c;common sense causation&#x201d; (<xref ref-type="bibr" rid="B11">Dewey, [1938] 2007</xref>). This also includes questions of correlations among events, which could lead to different common-sense causations. As such, a model of causality needs a link to the learned experiences of the social system&#x2019;s members to create <italic>plausible</italic> accounts of their own actions (<xref ref-type="bibr" rid="B10">Dewey, [1925] 2013</xref>; <xref ref-type="bibr" rid="B21">James, [1907] 2014</xref>). Similar concepts can be found in cognitive psychology (e.g., <xref ref-type="bibr" rid="B44">Pylyshyn, 2006</xref>). Causality therefore not only describes the associations between different identifiable events, but also presumes a shared construction of the world.</p>
<p>There is an interesting convergence of arguments between social theory, the formulation of structural causal models, and algorithmic accountability. In the context of algorithmic accountability, <xref ref-type="bibr" rid="B37">Mittelstadt et al. (2019)</xref> already argue that explanations of algorithmic behavior should reflect the recipient&#x2019;s epistemic and normative values. In terms of social theory, this addresses the constructions of a social system. Furthermore, the &#x201c;common sense causation&#x201d; described by Dewey has been introduced into SCMs as &#x201c;context setting&#x201d; (<xref ref-type="bibr" rid="B19">Halpern, 2016</xref>). Context setting defines which elements to identify and include in the model of causal relations. It can thus be seen as a specific setting of how reality is perceived and imagined within SCMs: during the modelling of these SCMs, specific ideas and assumptions about reality are inscribed into them.</p>
<p>
<xref ref-type="bibr" rid="B43">Pearl and Mackenzie (2018)</xref> base their model of causation on a question that is not too different from a pragmatist conception of causality. They follow a definition of causal reasoning that does not ask for the essence of causality&#x2014;or, to put it differently, for an objective, neutral, and detached definition of the term&#x2014;but for a performatively produced understanding of causality. The explanatory power of SCMs is warranted because &#x201c;causal inference is objective in one critically important sense: once two people agree on their assumptions, it provides a 100 percent objective way of interpreting any new evidence (or data)&#x201d; (<xref ref-type="bibr" rid="B43">Pearl and Mackenzie, 2018</xref>, 91). This means that if several models describe reality in a way that is functional for a given problem definition, objectivity is achieved through the act of commonly deciding which model is the most useful. This has important implications for how SCMs can be applied within a social system to produce accountability.</p>
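Pearl and Mackenzie's point that causal inference becomes objective once the assumptions are agreed upon can also be made concrete. The sketch below is our own simplification: a single agreed-upon toy model of the accident, on which an intervention (Pearl's do-operation, here reduced to overriding one variable in a deterministic model) yields a unique, mechanically computable answer.

```python
# Once one SCM is agreed upon, counterfactual evaluation is mechanical.
# This toy structural equation is an assumption of ours, not the article's.

def crash(driver_brakes, system_detects):
    # Agreed model: the crash is avoided if either the driver
    # or the automated system intervenes.
    return not (driver_brakes or system_detects)

# Observed ("actual world") context.
context = dict(driver_brakes=False, system_detects=False)
assert crash(**context)  # the model reproduces the observed accident

def do(model, context, **intervention):
    # Minimal intervention operator: fix some variables, keep the rest
    # at their observed values, and re-evaluate the structural equation.
    return model(**{**context, **intervention})

print(do(crash, context, driver_brakes=True))   # False: crash averted
print(do(crash, context, system_detects=True))  # False: crash averted
```

Given the shared model, every observer computes the same answers; disagreement can only re-enter through the choice of the model itself.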
<p>Arguing for the possibility of multiple models of causal relationships that have to be negotiated means assuming the (important) position of an external observer. In order to become objective in the terms of Pearl and Mackenzie, the reasoning over different possible causal models has to be aligned with the interpretations of other observers. The question then is not only whether a model converges with the perceived reality of the developers; rather, observations of a second order are necessary to produce models that are also plausible to these other observers. This raises the question of how causality is described within the social system in which the model should be deployed, i.e., observing how the social setting observes reality.<xref ref-type="fn" rid="FN3">
<sup>3</sup>
</xref> Causal models rely on a shared understanding of the world and a common form of causal reasoning. What Pearl and Mackenzie are implicitly referring to has been conceptualized in pragmatism as shared knowledge and the production of intersubjectivity (<xref ref-type="bibr" rid="B34">Mead, 1917</xref>; <xref ref-type="bibr" rid="B33">Mead, 1967</xref>). Acting and reasoning are based on experiences that create implicit causal models of the world. Newly gathered data&#x2014;here seen as a new experience&#x2014;can only be interpreted according to the experiences one has made in the past. This therefore requires a deep understanding of the social interaction system in which causal models should operate.</p>
<p>Coming back to our example of the Uber accident, the question arises which of the presented models coheres with society&#x2019;s perceptions. For the causal description of the Uber case, the question is therefore not which SCM is better, but which one best reflects the normative and experienced causalities of the social groups for which it should solve the problem. This of course involves the interests of different social groups and therefore requires a broader discussion among them. The model displayed in <xref ref-type="fig" rid="F2">Figure 2C</xref> might seem intuitive to many people, as the term &#x201c;autonomous driving&#x201d; suggests that the car is acting &#x201c;on its own&#x201d; and thus the company that built the car should be held accountable, while insurance companies and producers of autonomous cars would probably prefer the models described in <xref ref-type="fig" rid="F2">Figures 2A</xref> or <xref ref-type="fig" rid="F2">2B</xref>. When it comes to legal decisions and settings, one has to attribute not only accountability, but also responsibility.</p>
<p>Each of the models displayed represents a valid reduction of a highly complex reality into a manageable set of entities and relations. However, the normative ideas and social consequences of these models differ to a large degree. Thus, for SCMs to be able to act as <italic>accountability machines</italic>, they have to reflect these social constraints in order to become objective. <italic>Accountability</italic> then does not (only) mean producing addressable entities that can be held responsible, but creating a model that can give an account of what happened in a way that is understandable and acceptable to the members of the addressed social group. Constructing causal models therefore requires knowledge about these different modes of attributing and producing accountability within different functional interaction systems.</p>
</sec>
<sec id="s4">
<title>Conclusion and Outlook</title>
<p>Causality and the calculation of counterfactuals are a promising approach to algorithmic accountability. By calculating and formulating human-readable rules to explain the observed behavior of an algorithmic system, that behavior can be made available to public scrutiny. This holds especially since causal descriptions can be implemented in a way that balances the public&#x2019;s need to know with the protection of intellectual property rights. Thus, the introduction of such <italic>explainability systems</italic> (<xref ref-type="bibr" rid="B13">Doshi-Velez et al., 2017</xref>) creates an interface between the practices of the developers of algorithmic systems and the organizations and communities that want or need to hold them accountable. This could create the means to intervene in the production and deployment of algorithmic systems.<xref ref-type="fn" rid="FN4">
<sup>4</sup>
</xref> Producing <italic>account</italic>-ability, in terms of being able to understand and interpret the behavior of algorithms, also creates <italic>contest</italic>-ability, i.e., the ability to reject specific implementations. However, in building accountability systems, we have to be aware of the construction of causality within the social system in which these machineries are meant to operate. That is, in order to enable a community to hold algorithms and their developers accountable, SCMs operate as an interface between the practices of designers and the practices of the social system&#x2019;s members.</p>
<p>The Uber case illustrates that different models of causality correspond to varying degrees with social imaginaries of common-sense causality. The legitimacy of SCMs as accountability machines therefore hinges on the relation between these different conceptions of causality. If social research and computer science are to collaborate in the development of SCMs as a means to produce accountability, more research on the matter, and especially interdisciplinary research, is necessary. It remains an open question how social visions of fairness, discrimination, and &#x201c;normality&#x201d; can be translated into mathematical models in a way that enables interactions between algorithms and their social context. This calls for more in-depth studies of the interaction patterns between algorithms, social systems, and SCMs as translation devices between the technical and the social realm. Such studies would enable the development of SCMs that could express causalities in a field&#x2019;s <italic>own language</italic>. At the same time, it would be na&#xef;ve to assume that there is only one existing construction of accountability between different institutions and actors. By making these different notions of accountability visible, SCMs can foster disagreement not only with single observers, but also between different communities, each with its own normative account and resulting constructions of causality.</p>
<p>Opening up the discussion about possible causal models could therefore also be a means to a broader, deliberative democratic discussion about how algorithms should operate within our societies. Instead of treating algorithms or causal models as given, the perspective explored here calls for more inclusive forms of development, as algorithms and statistical models are not objective by nature. Bringing the development of algorithmic systems and SCMs (as accountability machines) into conversation with the normative ideas and imaginaries of the social system within which they operate could therefore result not only in <italic>account</italic>-able, but also in more responsible systems. The question of whether and how algorithmic accountability can foster social integration therefore needs further inquiry.</p>
<p>This leads us to three major questions for future research: First, how do people actually make meaning of their everyday life in relation to algorithms, especially in different (public) organizations? Second, how can these processes of meaning-making be translated into SCMs in a way that is compatible with the social system? And third, how can these models take into account that different modes of constructing accountability and causality are negotiated in social systems? These questions call for closer cooperation between several disciplines, including philosophy (of technology), legal studies, ethics, social science, and computer science, to name just a few. Ethnographic studies of algorithmic systems in action, as well as quasi-experimental studies, would be a valuable interdisciplinary contribution to the technical implementation of SCMs. It therefore seems productive to explore the possibilities of constructing algorithmic accountability by bringing such perspectives and interdisciplinary approaches into an ongoing conversation with SCMs.</p>
</sec>
<sec id="s5">
<title>Author Contributions</title>
<p>Both authors, NP (lead author) and SK (co-author), made substantial contributions to the conception, drafting, and finalizing of the theoretical analysis. NP provided the social theory insights and the synthesis of causal modelling and social theory, while SK was responsible for the description of SCMs. The text is a joint effort; it is therefore not possible to attribute sections to one author alone.</p>
</sec>
<sec id="s6">
<title>Funding</title>
<p>The authors acknowledge the financial support for open access publishing by the University of Graz. Severin Kacianka's work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant no. PR1266/3-1, Design Paradigms for Societal-Scale Cyber-Physical Systems.</p>
</sec>
<sec id="s7" sec-type="COI-statement">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
</body>
<back>
<ack>
<p>The authors want to thank Peter Kahlert (TUM) for his critical and insightful remarks on early versions of the paper. We would also like to thank the reviewers for their valuable input.</p>
</ack>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ananny</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Crawford</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability</article-title>. <source>New Media and Society</source> <volume>20</volume>, <fpage>973</fpage>&#x2013;<lpage>989</lpage>. <pub-id pub-id-type="doi">10.1177/1461444816676645</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Angwin</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Larson</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mattu</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kirchner</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Machine bias: there&#x2019;s software used across the country to predict future criminals. And it&#x2019;s biased against blacks</article-title>. <publisher-loc>New York, NY, United States</publisher-loc>: <publisher-name>ProPublica</publisher-name> <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing">https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing</ext-link>
</comment> (<comment>Accessed May 23, 2016</comment>). </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Beer</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>The social power of algorithms</article-title>. <source>Inf. Commun. Soc.</source> <volume>20</volume> (<issue>1</issue>), <fpage>1</fpage>&#x2013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1080/1369118X.2016.1216147</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Blumer</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>1986</year>). <source>Symbolic interactionism: perspective and method.</source> <publisher-loc>Berkeley, CA, United States</publisher-loc>: <publisher-name>University of California Press</publisher-name>. </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bovens</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Analysing and assessing accountability: a conceptual framework 1</article-title>. <source>Eur. Law J.</source> <volume>13</volume> (<issue>4</issue>), <fpage>447</fpage>&#x2013;<lpage>468</lpage>. <pub-id pub-id-type="doi">10.1111/j.1468-0386.2007.00378.x</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Bowker</surname>
<given-names>G. C.</given-names>
</name>
<name>
<surname>Star</surname>
<given-names>S. L.</given-names>
</name>
</person-group> (<year>2000</year>). <source>Sorting things out: classification and its consequences.</source> <publisher-loc>Cambridge MA, United States</publisher-loc>: <publisher-name>MIT Press</publisher-name>. </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bozdag</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Bias in algorithmic filtering and personalization</article-title>. <source>Ethics Inf. Technol.</source> <volume>15</volume> (<issue>3</issue>), <fpage>209</fpage>&#x2013;<lpage>227</lpage>. <pub-id pub-id-type="doi">10.1007/s10676-013-9321-6</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Burrell</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>How the machine &#x201c;thinks&#x201d;: understanding opacity in machine learning algorithms</article-title>. <source>Big Data &#x26; Society</source> <volume>3</volume> (<issue>1</issue>), <fpage>2053951715622512</fpage>. <pub-id pub-id-type="doi">10.1177/2053951715622512</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Crawford</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>The hidden biases in Big data</article-title>. <source>Harvard business review.</source> <comment>Available at: <ext-link ext-link-type="uri" xlink:href="https://hbr.org/2013/04/the-hidden-biases-in-big-data">https://hbr.org/2013/04/the-hidden-biases-in-big-data</ext-link>
</comment> (<comment>Accessed April 1, 2013</comment>). </citation>
</ref>
<ref id="B10">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Dewey</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>[1925] 2013</year>). <source>Experience and nature.</source> <publisher-loc>Chicago, IL, United States</publisher-loc>: <publisher-name>Dover Publications</publisher-name>. </citation>
</ref>
<ref id="B11">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Dewey</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>[1938] 2007</year>). <source>Logic&#x2014;the theory of inquiry.</source> <publisher-loc>New York, NY</publisher-loc>: <publisher-name>Saerchinger Press</publisher-name>. </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Diakopoulos</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Algorithmic accountability</article-title>. <source>Digital Journalism</source> <volume>3</volume> (<issue>3</issue>), <fpage>398</fpage>&#x2013;<lpage>415</lpage>. <pub-id pub-id-type="doi">10.1080/21670811.2014.976411</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Doshi-Velez</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kortz</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Budish</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Bavitz</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Gershman</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>O&#x2019;Brien</surname>
<given-names>D.</given-names>
</name>
<etal/>
</person-group> (<year>2017</year>). <source>Accountability of AI under the law: the role of explanation.</source> <comment>Available at: <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/1711.01134">http://arxiv.org/abs/1711.01134</ext-link>
</comment>. </citation>
</ref>
<ref id="B14">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Eisenberger</surname>
<given-names>I.</given-names>
</name>
</person-group> (<year>2016</year>). <source>Innovation im Recht.</source> <publisher-loc>Wien, Austria</publisher-loc>: <publisher-name>Verlag &#xd6;sterreich</publisher-name>. </citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Elish</surname>
<given-names>M. C.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Moral crumple zones: cautionary tales in human-robot interaction</article-title>. <source>Engaging Science, Technology, and Society</source> <volume>5</volume>, <fpage>40</fpage>&#x2013;<lpage>60</lpage>. <pub-id pub-id-type="doi">10.2139/ssrn.2757236</pub-id> </citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Friedman</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Nissenbaum</surname>
<given-names>H.</given-names>
</name>
</person-group> (<year>1996</year>). <article-title>Bias in computer systems</article-title>. <source>ACM Trans. Inf. Syst.</source> <volume>14</volume> (<issue>3</issue>), <fpage>330</fpage>&#x2013;<lpage>347</lpage>. <pub-id pub-id-type="doi">10.1145/230538.230561</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Gillespie</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2014</year>). &#x201c;<article-title>The relevance of algorithms</article-title>,&#x201d; in <source>media technologies. Essays on communication, materiality, and society.</source> Editors <person-group person-group-type="editor">
<name>
<surname>Gillespie</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Boczkowski</surname>
<given-names>P. J.</given-names>
</name>
<name>
<surname>Foot</surname>
<given-names>K. A.</given-names>
</name>
</person-group> (<publisher-loc>Cambridge, MA, United States</publisher-loc>: <publisher-name>MIT Press.</publisher-name>), <fpage>167</fpage>&#x2013;<lpage>194</lpage>. <pub-id pub-id-type="doi">10.7551/mitpress/9780262525374.001.0001</pub-id> </citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Hall</surname>
<given-names>A. T.</given-names>
</name>
<name>
<surname>Frink</surname>
<given-names>D. D.</given-names>
</name>
<name>
<surname>Buckley</surname>
<given-names>M. R.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>An accountability account: a review and synthesis of the theoretical and empirical research on felt accountability</article-title>. <source>J. Organ. Behav.</source> <volume>38</volume> (<issue>2</issue>), <fpage>204</fpage>&#x2013;<lpage>224</lpage>. <pub-id pub-id-type="doi">10.1002/job.2052</pub-id> </citation>
</ref>
<ref id="B19">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Halpern</surname>
<given-names>J. Y.</given-names>
</name>
</person-group> (<year>2016</year>). <source>Actual causality.</source> <publisher-loc>Cambridge, MA, United States</publisher-loc>: <publisher-name>MIT Press Ltd</publisher-name>. </citation>
</ref>
<ref id="B20">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Hayles</surname>
<given-names>N. K.</given-names>
</name>
</person-group> (<year>2012</year>). <source>How we think: digital media and contemporary technogenesis.</source> <publisher-loc>Chicago, IL, United States</publisher-loc>: <publisher-name>University of Chicago Press</publisher-name>. </citation>
</ref>
<ref id="B21">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>James</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>[1907] 2014</year>). <source>Pragmatism: a new name for some old ways of thinking.</source> <publisher-loc>South Carolina, United States</publisher-loc>: <publisher-name>CreateSpace Independent Publishing Platform</publisher-name>. </citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kacianka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Beckers</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kelbert</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Kumari</surname>
<given-names>P.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>How accountability is implemented and understood in research tools</article-title>. in <conf-name>Proceedings of the International Conference on Product-Focused Software Process Improvement</conf-name>, <conf-loc>Innsbruck, Austria</conf-loc>, <conf-date>29 November&#x2014;1 December</conf-date>, <fpage>199</fpage>&#x2013;<lpage>218</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-69926-4_15</pub-id> </citation>
</ref>
<ref id="B23">
<citation citation-type="web">
<person-group person-group-type="author">
<name>
<surname>Kacianka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Ibrahim</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Pretschner</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Expressing accountability patterns using structural causal models</article-title>. <comment>Available at: <ext-link ext-link-type="uri" xlink:href="http://arxiv.org/abs/2005.03294">http://arxiv.org/abs/2005.03294</ext-link>
</comment> </citation>
</ref>
<ref id="B24">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kacianka</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Pretschner</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Understanding and formalizing accountability for cyber-physical systems</article-title>.&#x201d; in <conf-name>Proceedings of the 2018 IEEE international conference on Systems, Man, and Cybernetics (SMC)</conf-name>, <conf-loc>Miyazaki, Japan</conf-loc>, <conf-date>7&#x2013;10 October</conf-date>, <fpage>3165</fpage>&#x2013;<lpage>3170</lpage>. <pub-id pub-id-type="doi">10.1109/SMC.2018.00536</pub-id> </citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kaufmann</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Egbert</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Leese</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Predictive policing and the politics of patterns</article-title>. <source>Br. J. Criminol.</source> <volume>59</volume> (<issue>3</issue>), <fpage>674</fpage>&#x2013;<lpage>692</lpage>. <pub-id pub-id-type="doi">10.1093/bjc/azy060</pub-id> </citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ko</surname>
<given-names>R. K.</given-names>
</name>
<name>
<surname>Jagadpramana</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Mowbray</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Pearson</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Kirchberg</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Liang</surname>
<given-names>Q.</given-names>
</name>
<etal/>
</person-group> (<year>2011</year>). &#x201c;<article-title>TrustCloud: a framework for accountability and trust in cloud computing</article-title>.&#x201d; in <conf-name>Proceedings of the IEEE World Congress on Services</conf-name>, <conf-loc>Washington, DC, United States</conf-loc>, <conf-date>July-September</conf-date>, <fpage>584</fpage>&#x2013;<lpage>588</lpage>. <pub-id pub-id-type="doi">10.1109/SERVICES.2011.91</pub-id> </citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>K&#xfc;nnemann</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Esiyok</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Backes</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Automated verification of accountability in security protocols</article-title>. in <conf-name>Proceedings of the IEEE 32nd Computer Security Foundations Symposium (CSF)</conf-name>, <conf-loc>Hoboken, NJ, United States</conf-loc>, <conf-date>June-September</conf-date>, <fpage>397</fpage>&#x2013;<lpage>413</lpage>. <pub-id pub-id-type="doi">10.1109/CSF.2019.00034</pub-id> </citation>
</ref>
<ref id="B28">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>K&#xfc;sters</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Truderung</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Vogt</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2010</year>). &#x201c;<article-title>Accountability: definition and relationship to verifiability</article-title>,&#x201d; in <conf-name>Proceedings of the 17th ACM conference on Computer and communications security</conf-name>, <conf-loc>Chicago, IL, United States</conf-loc>, <conf-date>October</conf-date>, <fpage>526</fpage>&#x2013;<lpage>535</lpage>. <pub-id pub-id-type="doi">10.1145/1866307.1866366</pub-id> </citation>
</ref>
<ref id="B29">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lindberg</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2013</year>). <article-title>Mapping accountability: core concept and subtypes</article-title>. <source>Int. Rev. Adm. Sci.</source> <volume>79</volume> (<issue>2</issue>), <fpage>202</fpage>&#x2013;<lpage>226</lpage>. <pub-id pub-id-type="doi">10.1177/0020852313477761</pub-id> </citation>
</ref>
<ref id="B30">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Luhmann</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>1996</year>). <source>Social systems.</source> <publisher-loc>Stanford, CA, United States</publisher-loc>: <publisher-name>Stanford University Press</publisher-name>. </citation>
</ref>
<ref id="B31">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Madras</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Creager</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Pitassi</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Zemel</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>Fairness through causal awareness: learning causal latent-variable models for biased data</article-title>,&#x201d; in <conf-name>Proceedings of the 2019 conference on fairness, accountability, and transparency</conf-name>, <conf-loc>Atlanta, GA, United States</conf-loc>, <conf-date>29&#x2013;31 January</conf-date>, <fpage>349</fpage>&#x2013;<lpage>358</lpage>. <pub-id pub-id-type="doi">10.1145/3287560.3287564</pub-id> </citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Marres</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>The issues deserve more credit: pragmatist contributions to the study of public involvement in controversy</article-title>. <source>Soc. Stud. Sci.</source> <volume>37</volume> (<issue>5</issue>), <fpage>759</fpage>&#x2013;<lpage>780</lpage>. <pub-id pub-id-type="doi">10.1177/0306312706077367</pub-id> </citation>
</ref>
<ref id="B33">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Mead</surname>
<given-names>G. H.</given-names>
</name>
</person-group> (<year>1967</year>). <source>Mind, self, and society: from the standpoint of a social behaviorist.</source> <publisher-loc>Chicago, IL, United States</publisher-loc>: <publisher-name>University of Chicago Press</publisher-name>. </citation>
</ref>
<ref id="B34">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Mead</surname>
<given-names>G. H.</given-names>
</name>
</person-group> (<year>1917</year>). &#x201c;<article-title>Scientific method and individual thinker</article-title>,&#x201d; in <source>Creative intelligence: essays in the pragmatic attitude.</source> Editors <person-group person-group-type="editor">
<name>
<surname>Dewey</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Moore</surname>
<given-names>A. W.</given-names>
</name>
<name>
<surname>Brown</surname>
<given-names>H. C.</given-names>
</name>
<name>
<surname>Mead</surname>
<given-names>G. H.</given-names>
</name>
<name>
<surname>Bode</surname>
<given-names>B. H.</given-names>
</name>
<name>
<surname>Stuart</surname>
<given-names>H. W.</given-names>
</name>
<etal/>
</person-group> (<publisher-loc>New York, NY, United States</publisher-loc>: <publisher-name>Henry Holt and Co.</publisher-name>), <fpage>176</fpage>&#x2013;<lpage>227</lpage>. </citation>
</ref>
<ref id="B35">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Miller</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Explanation in artificial intelligence: insights from the social sciences</article-title>. <source>Artif. Intell.</source> <volume>267</volume>, <fpage>1</fpage>&#x2013;<lpage>38</lpage>. <pub-id pub-id-type="doi">10.1016/j.artint.2018.07.007</pub-id> </citation>
</ref>
<ref id="B36">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mittelstadt</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Allo</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Taddeo</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Wachter</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Floridi</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>The ethics of algorithms: mapping the debate</article-title>. <source>Big Data Soc.</source> <volume>3</volume> (<issue>2</issue>), <fpage>2053951716679679</fpage>. <pub-id pub-id-type="doi">10.1177/2053951716679679</pub-id> </citation>
</ref>
<ref id="B37">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mittelstadt</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Russell</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Wachter</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>Explaining explanations in AI</article-title>,&#x201d; in <conf-name>Proceedings of the 2019 conference on fairness, accountability, and transparency</conf-name>, <conf-loc>Atlanta, GA, United States</conf-loc>, <conf-date>29&#x2013;31 January</conf-date>, <fpage>279</fpage>&#x2013;<lpage>288</lpage>. <pub-id pub-id-type="doi">10.1145/3287560.3287574</pub-id> </citation>
</ref>
<ref id="B38">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Neyland</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Bearing account-able witness to the ethical algorithmic system</article-title>. <source>Sci. Technol. Hum. Val.</source> <volume>41</volume> (<issue>1</issue>), <fpage>50</fpage>&#x2013;<lpage>76</lpage>. <pub-id pub-id-type="doi">10.1177/0162243915598056</pub-id> </citation>
</ref>
<ref id="B39">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Noble</surname>
<given-names>S. U.</given-names>
</name>
</person-group> (<year>2018</year>). <source>Algorithms of oppression: how search engines reinforce racism.</source> <publisher-loc>New York, NY, United States</publisher-loc>: <publisher-name>New York University Press</publisher-name>. </citation>
</ref>
<ref id="B40">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>O&#x2019;Neil</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2016</year>). <source>Weapons of math destruction: how big data increases inequality and threatens democracy.</source> <publisher-loc>New York, NY, United States</publisher-loc>: <publisher-name>Crown</publisher-name>. </citation>
</ref>
<ref id="B41">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Park</surname>
<given-names>R. E.</given-names>
</name>
</person-group> (<year>1915</year>). <article-title>The city: suggestions for the investigation of human behavior in the city environment</article-title>. <source>Am. J. Sociol.</source> <volume>20</volume> (<issue>5</issue>), <fpage>577</fpage>&#x2013;<lpage>612</lpage>. <pub-id pub-id-type="doi">10.1086/212433</pub-id> </citation>
</ref>
<ref id="B42">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Pasquale</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2015</year>). <source>The black box society: the secret algorithms that control money and information.</source> <publisher-loc>Cambridge, MA, United States</publisher-loc>: <publisher-name>Harvard University Press</publisher-name>. </citation>
</ref>
<ref id="B43">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Pearl</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mackenzie</surname>
<given-names>D.</given-names>
</name>
</person-group> (<year>2018</year>). <source>The book of why: the new science of cause and effect.</source> <publisher-loc>New York, NY, United States</publisher-loc>: <publisher-name>Basic Books</publisher-name>. </citation>
</ref>
<ref id="B44">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Pylyshyn</surname>
<given-names>Z. W.</given-names>
</name>
</person-group> (<year>2006</year>). <source>Seeing and visualizing: it&#x2019;s not what you think.</source> <publisher-loc>Cambridge, MA, United States</publisher-loc>: <publisher-name>MIT Press</publisher-name>. </citation>
</ref>
<ref id="B45">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Seaver</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>What should an anthropology of algorithms do?</article-title> <source>Cult. Anthropol.</source> <volume>33</volume> (<issue>3</issue>), <fpage>375</fpage>&#x2013;<lpage>385</lpage>. <pub-id pub-id-type="doi">10.14506/ca33.3.04</pub-id> </citation>
</ref>
<ref id="B46">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Smith</surname>
<given-names>M. L.</given-names>
</name>
<name>
<surname>Erwin</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Diaferio</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2005</year>). <article-title>Role and responsibility charting (RACI)</article-title>. <publisher-loc>Rancho Cucamonga, CA, United States</publisher-loc>: <publisher-name>Project Management Forum (PMForum)</publisher-name>. <volume>5</volume>. </citation>
</ref>
<ref id="B47">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stone</surname>
<given-names>G. C.</given-names>
</name>
</person-group> (<year>1994</year>). <article-title>Dewey on causation in social science</article-title>. <source>Educ. Theor.</source> <volume>44</volume> (<issue>4</issue>), <fpage>417</fpage>&#x2013;<lpage>428</lpage>. <pub-id pub-id-type="doi">10.1111/j.1741-5446.1994.00417.x</pub-id> </citation>
</ref>
<ref id="B48">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Strauss</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>1978</year>). &#x201c;<article-title>A social world perspective</article-title>,&#x201d; in <source>Studies in symbolic interaction.</source> Editor <person-group person-group-type="editor">
<name>
<surname>Denzin</surname>
<given-names>N. K.</given-names>
</name>
</person-group> (<publisher-loc>Greenwich, CT, United States</publisher-loc>: <publisher-name>JAI Press</publisher-name>), Vol. <volume>4</volume>, <fpage>171</fpage>&#x2013;<lpage>190</lpage>. </citation>
</ref>
<ref id="B49">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Thomas</surname>
<given-names>W. I.</given-names>
</name>
<name>
<surname>Thomas</surname>
<given-names>D. S.</given-names>
</name>
</person-group> (<year>1928</year>). <source>The child in America: behavior problems and programs.</source> <publisher-loc>New York, NY, United States</publisher-loc>: <publisher-name>Alfred A. Knopf</publisher-name>. </citation>
</ref>
<ref id="B50">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Veale</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Binns</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Fairer machine learning in the real world: mitigating discrimination without collecting sensitive data</article-title>. <source>Big Data Soc.</source> <volume>4</volume> (<issue>2</issue>), <fpage>2053951717743530</fpage>. <pub-id pub-id-type="doi">10.1177/2053951717743530</pub-id> </citation>
</ref>
<ref id="B51">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wachter</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Mittelstadt</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Russell</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2017a</year>). <article-title>Counterfactual explanations without opening the black box: automated decisions and the GDPR</article-title>. <source>Harv. J. Law Technol.</source> <volume>31</volume>, <fpage>841</fpage>&#x2013;<lpage>887</lpage>. <pub-id pub-id-type="doi">10.2139/ssrn.3063289</pub-id> </citation>
</ref>
<ref id="B52">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wachter</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Mittelstadt</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Floridi</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2017b</year>). <article-title>Why a right to explanation of automated decision-making does not exist in the general data protection regulation</article-title>. <source>International Data Privacy Law.</source> <volume>7</volume> (<issue>2</issue>), <fpage>76</fpage>&#x2013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.1093/idpl/ipx005</pub-id> </citation>
</ref>
<ref id="B53">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Weitzner</surname>
<given-names>D. J.</given-names>
</name>
<name>
<surname>Abelson</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Berners-Lee</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Feigenbaum</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hendler</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sussman</surname>
<given-names>G. J.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Information accountability</article-title>. <source>Commun. ACM.</source> <volume>51</volume> (<issue>6</issue>), <fpage>82</fpage>&#x2013;<lpage>87</lpage>. <pub-id pub-id-type="doi">10.1145/1349026.1349043</pub-id> </citation>
</ref>
<ref id="B54">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wieringa</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2020</year>). &#x201c;<article-title>What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability</article-title>,&#x201d; in <conf-name>Proceedings of the 2020 conference on fairness, accountability, and transparency</conf-name>, <conf-loc>Barcelona, Spain</conf-loc>, <conf-date>January</conf-date>, <fpage>1</fpage>&#x2013;<lpage>18</lpage>. <pub-id pub-id-type="doi">10.1145/3351095.3372833</pub-id> </citation>
</ref>
<ref id="B55">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ziewitz</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Governing algorithms: myth, mess, and methods</article-title>. <source>Sci. Technol. Hum. Val.</source> <volume>41</volume> (<issue>1</issue>), <fpage>3</fpage>&#x2013;<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1177/0162243915608948</pub-id> </citation>
</ref>
</ref-list>
<fn-group>
<fn id="FN1">
<label>1</label>
<p>Lindberg (2013, 203) writes that &#x201c;[t]he main achievement [of Bovens and the Utrecht School] is to obfuscate the distinctiveness of accountability from other types of constraints on actors&#x2019; power to act autonomously. When the term &#x2018;sanction&#x2019; finally is misunderstood to denote only punishment (deviating from the proper meaning of the word in English), the paraphrasing becomes misleading.&#x201d;</p>
</fn>
<fn id="FN2">
<label>2</label>
<p>We would like to note that this is a hypothetical reflection on this unfortunate event and that we did not inquire into the exact medical circumstances.</p>
</fn>
<fn id="FN3">
<label>3</label>
<p>There is, of course, a long-standing discussion of these issues in (the history of) cybernetics and social systems theory (e.g., <xref ref-type="bibr" rid="B30">Luhmann, 1996</xref>; <xref ref-type="bibr" rid="B20">Hayles, 2012</xref>).</p>
</fn>
<fn id="FN4">
<label>4</label>
<p>This can become even more important when we consider the complicated relationship between innovation and law, and the ways in which innovations such as autonomous driving can be made accessible to legal reasoning and regulation (<xref ref-type="bibr" rid="B14">Eisenberger, 2016</xref>).</p>
</fn>
</fn-group>
</back>
</article>
