PERSPECTIVE article

Front. Digit. Health, 02 July 2021
Sec. Human Factors and Digital Health
This article is part of the Research Topic Responsible Digital Health.

From General Principles to Procedural Values: Responsible Digital Health Meets Public Health Ethics

Rune Nyrup*

  • Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, United Kingdom

Most existing work in digital ethics is modeled on the “principlist” approach to medical ethics, seeking to articulate a small set of general principles to guide ethical decision-making. Critics have highlighted several limitations of such principles, including (1) that they mask ethical disagreements between and within stakeholder communities, and (2) that they provide little guidance for how to resolve trade-offs between different values. This paper argues that efforts to develop responsible digital health practices could benefit from paying closer attention to a different branch of medical ethics, namely public health ethics. In particular, I argue that the influential “accountability for reasonableness” (A4R) approach to public health ethics can help overcome some of the limitations of existing digital ethics principles. A4R seeks to resolve trade-offs through decision-procedures designed according to certain shared procedural values. This allows stakeholders to recognize decisions reached through these procedures as legitimate, despite their underlying disagreements. I discuss the prospects for adapting A4R to the context of responsible digital health and suggest questions for further research.

Introduction

Recent years have seen a proliferation of digital ethics guidelines. There now exist more than 160 such guidelines, the vast majority published within the last 5 years by a wide range of institutions, including governments, legislative bodies, technology companies, and academic and professional organizations (1). These guidelines are intended for a number of purposes, including as a guide for designers of new digital technologies, to identify and address issues arising from the deployment of such technologies, and as a basis for developing standards and regulation (2).

Many seeking to bring analytical clarity to this panoply have looked to medical ethics for inspiration (3, 4). This is unsurprising: medical ethics is perhaps the most well-established field of practical ethics, both within academic research and as a framework for practitioners. For digital health technologies there is of course the additional reason that they are designed to become part of medical practice. Responsible digital health should involve being held to the same ethical standards as any other form of medical practice (5).

Most of this work has been modeled on an approach to medical ethics known as "principlism." Principlism seeks to articulate a small set of general principles to guide ethical decision-making. Most influentially, Tom Beauchamp & James Childress' four Principles of Biomedical Ethics (6)—Beneficence, Non-Maleficence, Autonomy and Justice—are widely used and taught within clinical practice and research ethics. Many reviews of digital ethics guidelines similarly seek to subsume their recommendations under a small set of general principles, and some explicitly use Beauchamp & Childress' four principles (sometimes with a new fifth principle of Explicability) (3, 7–10). The convergence on these principles is often touted as evidence of an emerging consensus which can serve as a basis for implementing ethics into the design, regulation, and application of digital technologies. Yet how this is to be done largely remains an open question (11). Consequently, digital ethicists have increasingly turned their attention to how such principles can best be translated into practice, whether through new design practices (5, 12, 13) or new forms of legislation and regulation (14, 15).

However, critics have highlighted several limitations which vitiate the practical applicability of this approach to digital ethics (2, 9, 16–18). In this paper, I focus on two in particular. First, principles formulated in general, abstract terms mask underlying disagreements between and within stakeholder communities. Second, they provide little guidance for how to resolve tensions and trade-offs that can arise between different (interpretations of) principles. To overcome these limitations, I argue, efforts to develop more responsible digital health practices should pay closer attention to a different branch of medical ethics: public health ethics.

I start by making a general case for this claim. I then discuss the problems of disagreement and trade-offs within digital ethics, before introducing an influential account from public health ethics of how to reach ethically legitimate compromises on value-laden trade-offs. This approach, known as accountability for reasonableness (A4R) is based on the idea that legitimate compromises can be reached through decision-procedures designed according to certain procedural values (19). Finally, I discuss the prospects for adapting this approach to digital health and propose some questions for future research.

Why Public Health Ethics?

Public health differs from clinical practice in two key respects (20): in who is affected, and in who decides and implements interventions. Public health interventions affect broader populations, rather than specific, identifiable patients, and they are largely decided and implemented by institutional actors (e.g., governments, insurance companies, NGOs), rather than individual clinicians/researchers.

There are two general reasons why closer attention to public health ethics is likely to benefit efforts to develop responsible digital health.

First, digital health technologies are often similar to public health interventions. Some are explicitly designed for public health purposes, such as monitoring infectious disease outbreaks (21, 22) or discovering risk factors for childhood obesity (23). But many digital technologies deployed in clinical settings also resemble public health interventions. Take machine learning tools for diagnostic decision-support (24, 25). These are usually designed for screening purposes, to monitor data from a given patient population and flag risk factors to human clinicians, and decisions to deploy them are made at the institutional level (e.g., hospitals or health service trusts). Even in patient-facing applications, e.g., conversational agents to assist with lifestyle decisions (26), many of the pertinent ethical decisions have to be made at the population/institutional level—by designers and regulators—rather than in the individual clinical encounter.

The second reason follows from the first. Due to its focus on population/institution-level interventions, public health ethics mainly addresses questions of political morality rather than the ethics of the individual patient-clinician relationship (20). It therefore provides a promising resource for addressing important political issues that arise from digital health.

Recent digital ethics has mostly focused on technological deficiencies and solutions, such as algorithmic bias and transparency. As several commentators have highlighted, this risks occluding broader social and political issues relating, e.g., to democratic oversight, power, and oppression (27–33). For example, it was recently shown that an algorithm that uses healthcare costs as a proxy for healthcare needs systematically underestimated the needs of Black patients, because fewer resources are already spent on their care (34). Ruha Benjamin (35) argues that labeling this "algorithmic bias" makes it seem a purely technical issue and sanitizes the social context that produced the problem in the first place, namely persistent structural and interpersonal racism in healthcare. More generally, as Leila Marie Hampton (30) argues, using generic concepts such as "fairness" or "transparency" to analyze technologies, without considering broader socio-political issues, risks legitimizing and entrenching fundamentally unjust institutions.

While the Four Principles do include a principle of Justice, political issues covered under this heading mainly concern the question of what health-related goods society should provide and how to allocate resources within healthcare systems (6, chapter 6). By contrast, public health interventions raise a much wider set of political issues (20), similar to those commentators have started to discuss for digital health. For instance, is it permissible for interventions to impose risks or burdens on some individuals, even if they are not the main beneficiaries (e.g., mandatory vaccination programs)? Is it justifiable for interventions to exploit or reinforce structural patterns of disadvantage (e.g., using the communicative power of the state to stigmatize smoking)? More generally, when can institutional actors legitimately impose interventions despite widespread disagreement about relevant ethical values?

To be clear, my aim is not to reject the Four Principles framework or other principlist approaches to digital ethics. Such principles still serve a useful purpose in articulating the values at stake in digital ethics (cf. the section What Rationales Should Be Considered Relevant?). Similarly, public health ethics will not, in itself, answer all of the socio-political issues that Benjamin, Hampton and others raise. Clearly, many of these require political action and structural change, not (just) better theory. Even in terms of theory, other literatures will be relevant too, especially emancipatory philosophies such as the Black Feminist tradition Hampton highlights. Nonetheless, public health ethics is a well-developed literature addressing practical political issues in healthcare, often closely informed by the empirical realities of healthcare policy and decision-making. It can thus help broaden the range of questions digital health ethics addresses.

Disagreement, Trade-Offs, and the Limits of Principles

The rest of this paper will focus on how insights from public health ethics can help overcome the two limitations of purely principlist approaches to digital ethics I highlighted in the introduction, i.e., that they mask disagreements between and within different stakeholder communities and provide little guidance for how to resolve trade-offs.

Consider for example debates about contact tracing apps for the management of Covid-19. Some governments wanted to base these on a centralized data collection approach, arguing that such datasets could also be used to produce new knowledge to help combat the pandemic. This was resisted by legal and information security experts concerned about potential privacy breaches (36–38). Appealing to general principles is unlikely to resolve this debate. While most people would presumably agree, say, that digital health technologies should be used to "do good" (Beneficence), there are legitimate ethical and political disagreements about the extent to which privacy is constitutive of or conducive to a good life. While we should arguably accept some trade-offs between protecting individual privacy and promoting social goods, there is little consensus on what exactly those trade-offs should be (38).

The prevalent approach to managing value trade-offs within clinical ethics is through informed consent (6, chapter 3): by informing patients about the trade-off involved in some treatment and letting them decide whether this is acceptable in light of their particular circumstances and values, clinicians can legitimize the decision to administer or withhold the treatment. It might be tempting to apply the same approach to digital health. However, informed consent is only plausible when the trade-offs occur within a single patient's value-set. One of the ways digital health resembles public health is that the trade-offs often cut across populations. Rather than each patient deciding for themselves how to balance trade-offs, which values get priority depends on population-level aggregate decisions. Contact tracing apps, and centralized data collection more generally, can only produce the relevant social goods if there is sufficient uptake (39). Conversely, if enough people consent to share their personal data, this can often be used to train machine learning algorithms capable of inferring highly personal information even about those who withhold consent (40).
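The uptake point can be made vivid with a toy calculation (mine, not from the cited study): a contact between two people can only be traced digitally if both of them run the app, so the share of traceable contacts falls off roughly as the square of uptake.

```python
# Toy illustration (hypothetical numbers, not from the cited study):
# a contact is digitally traceable only if *both* parties run the app,
# so the traceable share of contacts scales roughly as uptake squared.
for uptake in (0.2, 0.4, 0.6, 0.8):
    print(f"uptake {uptake:.0%} -> ~{uptake**2:.0%} of contacts traceable")
```

At 60% uptake, for example, only about a third of contacts are traceable, which is how individual opt-outs aggregate into a population-level loss.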

In such cases, making interventions conditional on obtaining everyone's consent is neither practically feasible nor ethically plausible. A single intransigent individual should not be allowed to deprive everyone else of significant social goods. However, pure majority rule is not plausible either. Certain groups and communities may have good reasons, e.g., to value privacy because of their historical experiences of surveillance and discrimination (37). For instance, during the AIDS crisis of the 1980s, gay community-based activists initially resisted name-based reporting of infections, arguing that homophobia and AIDS-hysteria made privacy breaches and discrimination against people identified as HIV-positive more likely than for other diseases (41). Even if such reasons should not necessarily be decisive, collective decision-making should at least be responsive to them, and not just defer to majority preferences.

Legitimacy Through Procedural Values

How to resolve disagreement and trade-offs is a characteristic conundrum in public health ethics. For example, in debates about priority setting and rationing of healthcare resources, ethicists have found it difficult to formulate ethical principles that are plausible enough to command broad consensus while being sufficiently fine-grained to guide decision-making in practice (42, 43). While many agree that those with greater needs should be given some priority, even at the expense of aggregate health outcomes, there is little consensus on how to weigh these two concerns against each other.

One influential model for resolving disagreements about priority setting in public health is called Accountability for Reasonableness (A4R) (19, 44, 45). Proposed by Norman Daniels and James Sabin, the key idea in A4R is to implement decision-procedures for reaching compromises which fair-minded people can accept as legitimate, despite their underlying ethical disagreements. This relies on a distinction between ethical rightness and ethical legitimacy. To regard a decision as right is to regard it as the morally correct thing to do in a given situation. To regard it as legitimate is to regard it as appropriately made, i.e., by a decision-maker or procedure whose moral authority to make such decisions should be accepted. The two can come apart: we can accept a verdict of “not guilty” in a fair trial as legitimate, even if we believe the defendant should have been convicted. Conversely, an unelected dictator may sometimes do the right thing, e.g., donate food to relieve a famine. Nonetheless, rightness and legitimacy are also entangled: if a procedure consistently generates abhorrent outcomes, we have reason to question its legitimacy; and if we can see that a decision-maker has carefully considered the relevant concerns, there is prima facie reason to accept their decision as right.

Daniels and Sabin propose four conditions for legitimate decision-procedures (44, 45):

1. Publicity: The rationale for a given decision must be publicly accessible.

2. Relevance: Decisions must be based on rationales which fair-minded individuals, who want to find mutually justifiable terms of cooperation, would accept as relevant to the decision.

3. Revision and Appeals: There must be mechanisms in place for challenging and revising decisions in light of new evidence or arguments.

4. Enforcement: There must be voluntary or public regulation in place to ensure that conditions 1–3 are met.
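To make the procedural structure of these conditions concrete, the following is a minimal illustrative sketch, not part of Daniels and Sabin's account, of how an institution might encode them as checks on a recorded decision; all field and function names are hypothetical.

```python
# Illustrative sketch: encoding the four A4R conditions as checks on a
# hypothetical institutional decision record. All names are invented here.
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    decision: str
    rationale: str                     # Publicity: a rationale must exist...
    publicly_accessible: bool          # ...and be publicly accessible
    rationale_deemed_relevant: bool    # Relevance: vetted as fair-minded grounds
    appeals_mechanism: bool            # Revision and Appeals: challenge route exists
    enforcement_in_place: bool         # Enforcement: conditions 1-3 are regulated
    appeals_log: list[str] = field(default_factory=list)

    def satisfies_a4r(self) -> bool:
        """Return True only if all four A4R conditions are met."""
        return (bool(self.rationale)
                and self.publicly_accessible
                and self.rationale_deemed_relevant
                and self.appeals_mechanism
                and self.enforcement_in_place)

# Example: a hospital decision to deploy a diagnostic screening tool.
record = DecisionRecord(
    decision="Deploy a sepsis risk-flagging model on ward data",
    rationale="Earlier intervention judged to outweigh alert-fatigue risk",
    publicly_accessible=True,
    rationale_deemed_relevant=True,
    appeals_mechanism=True,
    enforcement_in_place=False,        # condition 4 not yet met
)
print(record.satisfies_a4r())          # False until enforcement is in place
```

The point of the sketch is only that the conditions constrain the process surrounding a decision, not its content: nothing in the record fixes how the trade-off itself must be resolved.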

These conditions can be interpreted as embodying certain procedural values, specifying features that fair and appropriate decision procedures should have. It is a shared commitment to procedural values that generates legitimacy. Stakeholders who agree on these values have good reasons to regard procedures designed according to them as legitimate.

As the name suggests, the core procedural values in A4R are Accountability and Reasonableness. By articulating standards and mechanisms that stakeholders can use to hold decision-makers accountable—through enforceable rights to access rationales and challenge decisions—A4R aims to produce decisions that are reasonable, and can be recognized as such. Reasonableness here means something weaker than rightness: a decision is reasonable to the extent that it is responsive to all relevant concerns. Thus, if you recognize a decision as reasonable you may disagree about the specific way decision-makers weighed the reasons cited in their rationale, but you agree that it involved the right kinds of considerations.

The A4R conditions are supposed to guide the design of decision-making bodies charged with deciding how to balance any trade-offs that arise within a given healthcare institution (e.g., a hospital, public health agency or insurance company). Decision-makers should strive to identify compromises which all fair-minded stakeholders could find acceptable, though some form of voting may be used if disagreement persists at the end of deliberation. Importantly, decision-makers do not need to articulate any general hierarchy of values or "meta-principles" for resolving trade-offs. Indeed, one of the motivations behind A4R is that we are unlikely to agree on any sufficiently action-guiding meta-principles. Rather, it aims to resolve trade-offs on a case-by-case basis as they arise in practice, based on rationales stakeholders will find contextually reasonable, despite persistent disagreement about general principles.

A4R is not without its detractors (little in philosophy is), nor is it the only account in public health ethics of how to resolve trade-offs (20). Nonetheless, it is a highly influential framework which has been used to inform public health practice (46, 47) and whose acceptability to decision-makers has been studied empirically across the world (48–50). Furthermore, public health ethicists have proposed a number of revisions and extensions of the A4R framework, reflecting lessons from these practical applications (51–54). As such, the A4R literature is likely to contain valuable lessons for responsible digital health¹.

Adapting A4R to Digital Health

In the Introduction I highlighted two routes that ethicists have proposed for translating existing principles into practice: legislation/regulation and design practices. A4R can help overcome some of the limitations of the principlist approach within each of these.

Regarding the first, the challenge is to translate abstract general principles into more concrete legislation and regulation while still preserving their broad appeal. However, attempts to make principles more concrete and action-guiding, including any meta-principles for resolving trade-offs, will likely also make them more controversial. The A4R framework provides an alternative solution: rather than having to settle on a specific action-guiding translation of principles, legislators can instead specify how organizations that deploy or design digital health technologies should structure the decision-making processes through which they resolve any trade-offs they encounter.

As mentioned, deliberative bodies based on the A4R conditions have already been implemented in some healthcare institutions to address issues of priority setting and rationing. The remit of these could be expanded to also address the broader range of trade-offs that arise from the deployment of digital health technologies. Legislators could also require decision-making bodies modeled on the existing ones to be created elsewhere, including within private technology companies or as part of regulators charged with overseeing them.

Whether legally required or voluntarily adopted, this type of deliberative body could also provide a way to deal with trade-offs in the design of digital health technologies. A common criticism of Value-Sensitive Design (VSD) is that it lacks a method for resolving trade-offs, unless designers commit to an explicit, and therefore likely controversial, ethical theory (56, 57). This challenge will also affect proposals to implement digital ethics principles through (a modified version of) VSD (12). A4R suggests a way to overcome it: by structuring their decision-making processes according to the right kinds of procedural values, designers will be able to reach decisions that stakeholders can recognize as legitimate and therefore acceptable. To be clear, A4R is a normative theory of legitimacy. It does not commit the naturalistic fallacy of assuming that whatever stakeholders find acceptable is therefore right; rather, the inference runs the other way: if a decision counts as legitimate, according to A4R, stakeholders ought to find it acceptable.

Future Research Questions

There are of course many details to be worked out regarding the proposals sketched here. How best to implement and operationalize them in practice remains an important question for future research. Part of this work will be practical, but A4R also provides a philosophically grounded theory to underpin this research and ensure that proposed implementations remain normatively plausible.

However, we should not expect that A4R can simply be transposed from its original application (priority setting and rationing) to digital health without modification. Adapting A4R to digital health will likely require modifications or extensions to the framework itself. At least two kinds of further research questions will be relevant to explore.

Are Other Procedural Values Needed?

One of the ways public health ethicists have extended the original A4R framework is by adding further procedural values, often motivated by their practical experience of applying A4R to priority setting decisions. For instance, some have proposed new conditions of Inclusiveness and Empowerment. In brief, these require explicit input from all affected stakeholders and active steps to counteract knowledge gaps and institutional power differences between decision-makers (33, 53, 58). Importantly, these conditions are still motivated by the core value of Reasonableness, namely to ensure that decision-makers are responsive to as many relevant concerns as possible, including those held by minoritized or less empowered parts of the population.

Applying A4R to digital health may similarly reveal new procedural values. For instance, if Benjamin and Hampton are correct that ethical discussions of digital technologies risk sanitizing and entrenching unjust social structures, it may be necessary to actively encourage decision-makers to raise critical questions about how new technologies will interact with these structures. Similarly, it may be necessary to encourage scrutiny of the aims and presuppositions of the technology itself, asking for example whether it targets the right problem or whether the proposed solution is at all appropriate. We might summarize these as a condition of Socio-Technological Criticism.

What Rationales Should Be Considered Relevant?

The Relevance condition is a formal constraint on the type of rationales that should be given weight within decision-making. However, implementing A4R in practice requires us to specify in more substantive terms what types of concerns should be admissible. This will likely depend on the context of application. As A4R was originally developed for debates about rationing, most discussions focus on rationales framed in terms of Fairness or related distributive values (e.g., Solidarity (52)). Presumably, a broader range of values will be relevant to debates about digital health technologies (e.g., Privacy). Exploring in more detail what those values should be is a substantive research task. To ensure that decision-makers are responsive to all relevant reasons, this research should aim to identify a broad range of plausible concerns and help elucidate and articulate these, so that stakeholders can present them in their most compelling form. Existing VSD methodologies for empirical and conceptual investigations of stakeholder values provide a plausible approach to this task.

Existing principlist approaches to digital ethics provide a useful starting point. However, the values discussed in the existing literature should not be assumed exhaustive or representative. The apparent convergence found here may simply be a product of people from roughly similar backgrounds consuming the same literature (2, 17). It is noticeable, for instance, that many commonly cited principles (e.g., transparency, fairness, responsibility) also feature prominently within liberal political philosophy. Values more characteristic of other political traditions, such as solidarity, belonging, authenticity, harmony, non-exploitation, non-domination or emancipation are rarely discussed or even mentioned (9, 29, 30). Public health ethics may also here provide a useful resource. Public health ethicists have developed alternative sets of principles to the four classical principles of biomedical ethics (59), and explored the implications of different political traditions (60).

Conclusion

Paying closer attention to public health ethics is likely to benefit efforts to develop responsible digital health. In this paper, I have made a general case for this claim and highlighted A4R as a specific model from public health ethics that can be adapted to digital health. While not intended to wholly replace principlism, A4R can complement and help overcome some of the limitations faced by principlist approaches. Further, research on the questions outlined above could generate valuable insights for the ethical deployment, design and regulation of digital technologies, especially within healthcare.

Data Availability Statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Funding

This research was funded in whole, or in part, by the Wellcome Trust [Grant No. 213660/Z/18/Z] and the Leverhulme Trust, through the Leverhulme Centre for the Future of Intelligence [RC-2015-067].

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

I'm very grateful to Jess Whittlestone, Stephen Cave and two anonymous referees for detailed feedback on previous drafts, and to Sidsel Størmer for research assistance which informed this paper. Many thanks also to Ali Boyle, Elena Falco, Adrian Weller, Dan White, and John Zerilli for fruitful discussion.

Footnotes

1. To my knowledge, only two other recent papers have discussed the application of A4R to digital (health) ethics (33, 55), though not along the same lines as me.

References

1. Algorithm Watch. AI Ethics Guidelines Global Inventory. (2020). Available online at: https://inventory.algorithmwatch.org/ (accessed April 2, 2021).

2. Whittlestone J, Nyrup R, Alexandrova A, and Cave S. The role and limits of principles in AI ethics: towards a focus on tensions. In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES'19) (Honolulu, HI). (2019). p. 195–200. doi: 10.1145/3306618.3314289

3. Floridi L, and Cowls J. A unified framework of five principles for AI in society. Harv Data Sci Rev. (2019) 1. doi: 10.1162/99608f92.8cd550d1

4. Véliz C. Three things digital ethics can learn from medical ethics. Nat Electron. (2019) 2:316–8. doi: 10.1038/s41928-019-0294-2

5. Peters D, Vold K, Robinson D, and Calvo R. Responsible AI—two frameworks for ethical design practice. IEEE Trans Tech Soc. (2020) 1:34–47. doi: 10.1109/TTS.2020.2974991

6. Beauchamp T, and Childress J. Principles of Biomedical Ethics. 5th ed. New York, NY: Oxford University Press (2001).

7. Fjeld J, Achten N, Hilligoss H, Nagy A, and Srikumar M. Principled artificial intelligence: mapping consensus in ethical rights-based approaches to principles for AI. In: Berkman Klein Center Research Publication No. 2020-1 (Cambridge, MA). (2020). doi: 10.2139/ssrn.3518482

8. Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, et al. AI4People—An ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach. (2018) 28:689–707. doi: 10.1007/s11023-018-9482-5

9. Jobin A, Ienca M, and Vayena E. Artificial intelligence: the global landscape of ethics guidelines. Nat Mach Intell. (2019) 1:389–99. doi: 10.1038/s42256-019-0088-2

10. Zeng Y, Lu E, and Huangfu C. Linking artificial intelligence principles. In: Proceedings of the AAAI Workshop on Artificial Intelligence Safety (AAAI-Safe AI 2019). Honolulu, HI (2019).

11. Morley J, Floridi L, Kinsey L, and Elhalal A. From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics. (2020) 26:2141–68. doi: 10.1007/s11948-019-00165-5

12. Umbrello S, and van de Poel I. Mapping value sensitive design onto AI for social good principles. AI Ethics. (2021). doi: 10.1007/s43681-021-00038-3

13. van de Poel I. Embedding values in artificial intelligence (AI) systems. Minds Mach. (2020) 30:385–409. doi: 10.1007/s11023-020-09537-4

14. Stix C. Actionable principles for artificial intelligence policy: three pathways. Sci Eng Ethics. (2021) 27:15. doi: 10.1007/s11948-020-00277-3

15. Braun M, Bleher H, and Hummel P. A leap of faith: is there a formula for “Trustworthy” AI? Hast C Rep. (2021) 51:17–22. doi: 10.1002/hast.1207

16. Mittelstadt B. Principles alone cannot guarantee ethical AI. Nat Mach Intell. (2019) 1:501–7. doi: 10.1038/s42256-019-0114-4

17. Whittlestone J, Nyrup R, Alexandrova A, Dihal K, and Cave S. Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research. London: Nuffield Foundation (2019).

18. Dignum V. Responsible autonomy. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17) (Melbourne, VIC). (2017). p. 4698–704. doi: 10.24963/ijcai.2017/655

19. Daniels N. Accountability for reasonableness. BMJ. (2000) 321:1300–1. doi: 10.1136/bmj.321.7272.1300

20. Faden R, Bernstein J, and Shebaya S. Public health ethics. In: Zalta E. editor. The Stanford Encyclopedia of Philosophy (Fall 2020 Edition). (2020). Available online at: https://plato.stanford.edu/archives/fall2020/entries/publichealth-ethics/ (accessed April 2, 2021).

21. Bengtsson L, Gaudart J, Lu X, Moore S, Wetter E, Sallah K, et al. Using mobile phone data to predict the spatial spread of cholera. Sci Rep. (2015) 5:8923. doi: 10.1038/srep08923

22. Doan S, Ngo Q, Kawazoe A, and Collier N. Global Health Monitor – a web-based system detecting and mapping infectious disease. In: Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP) (Hyderabad). (2008). p. 951–6. https://arxiv.org/abs/1911.09735

23. Dugan T, Mukhopadhyay S, Carroll A, and Downs S. Machine learning techniques for prediction of early childhood obesity. Appl Clin Inform. (2015) 5:506–20. doi: 10.4338/ACI-2015-03-RA-0036

24. Gulshan V, Peng L, Coram M, Stumpe M, Wu D, and Narayanaswamy A. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. (2016) 316:2402–10. doi: 10.1001/jama.2016.17216

25. Solares J, Raimondi F, Zhu Y, Rahimian F, Canoy D, Tran J, et al. Deep learning for electronic health records: a comparative review of multiple deep neural architectures. J Biomed Inform. (2020) 101:103337. doi: 10.1016/j.jbi.2019.103337

26. Stein N, and Brooks K. A fully automated conversational artificial intelligence for weight loss: longitudinal observational study among overweight and obese adults. JMIR Diabetes. (2017) 2:e28. doi: 10.2196/diabetes.8590

27. Zimmerman A, di Rosa E, and Kim H. Technology Can't Fix Algorithmic Injustice. Boston Review. (2020). Available online at: http://bostonreview.net/science-nature-politics/annette-zimmermann-elena-di-rosa-hochan-kim-technology-cant-fix-algorithmic (accessed April 2, 2021).

28. Kind C. The Term 'Ethical AI' is Finally Starting to Mean Something. VentureBeat (2020). Available online at: https://venturebeat.com/2020/08/23/the-term-ethical-ai-is-finally-starting-to-mean-something/ (accessed April 2, 2021).

29. Kalluri P. Don't ask if AI is good or fair, ask how it shifts power. Nature. (2020) 583:169. doi: 10.1038/d41586-020-02003-2

30. Hampton L. Black feminist musings on algorithmic oppression. In: Conference on Fairness, Accountability, Transparency (FAccT'21). Association for Computing Machinery, New York, NY, United States (2021). doi: 10.1145/3442188.3445929

31. Binns R. Fairness in machine learning: lessons from political philosophy. In: Proceedings in Machine Learning Research 81: Conference on Fairness, Accountability and Transparency (New York, NY). (2018). p. 149–59.

32. Himmelreich J. Ethics of technology needs more political philosophy. Commun ACM. (2020) 63:33–5. doi: 10.1145/3339905

33. Wong P. Democratizing algorithmic fairness. Philos Technol. (2020) 33:224–44. doi: 10.1007/s13347-019-00355-w

34. Obermeyer Z, Powers B, Vogeli C, and Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. (2019) 366:447–53. doi: 10.1126/science.aax2342

35. Benjamin R. Assessing risk, automating racism. Science. (2019) 366:421–2. doi: 10.1126/science.aaz3873

36. Burgess M. There's a Big Row Brewing Over the NHS Covid-19 Contact Tracing App. Wired. (2020). Available online at: https://www.wired.co.uk/article/nhs-contact-tracing-app-data-privacy (accessed April 2, 2021).

37. Cave S, Whittlestone J, Nyrup R, Ó hÉigeartaigh S, and Calvo RA. Using AI ethically to tackle covid-19. BMJ. (2021) 372:n364. doi: 10.1136/bmj.n364

38. Parker M, Fraser C, Abeler-Dörner L, and Bonsall D. Ethics of instantaneous contact tracing using mobile phone apps in the control of the COVID-19 pandemic. J Med Ethics. (2020) 46:427–31. doi: 10.1136/medethics-2020-106314

39. Singh H, Couch D, and Yap K. Mobile health apps that help with COVID-19 management: scoping review. JMIR Nursing. (2020) 3:e20596. doi: 10.2196/20596

40. Kosinski M, Stillwell D, and Graepel T. Private traits and attributes are predictable from digital records of human behavior. PNAS. (2013) 110:5802–5. doi: 10.1073/pnas.1218772110

41. Bayer R, and Fairchild A. The limits of privacy: surveillance and the control of disease. Health Care Anal. (2002) 10:19–35. doi: 10.1023/A:1015698411824

42. Daniels N. Rationing fairly: programmatic considerations. Bioethics. (1993) 7:224–33. doi: 10.1111/j.1467-8519.1993.tb00288.x

43. Holm S. Goodbye to the simple solutions: the second phase of priority setting in health care. BMJ. (1998) 317:1000. doi: 10.1136/bmj.317.7164.1000

44. Daniels N, and Sabin J. Limits to health care: fair procedures, democratic deliberation, and the legitimacy problem for insurers. Phil Pub Aff. (1997) 26:303–50. doi: 10.1111/j.1088-4963.1997.tb00082.x

45. Daniels N. Just Health: Meeting Health Needs Fairly. Cambridge: Cambridge University Press (2008). doi: 10.1017/CBO9780511809514

46. Maluka S. Strengthening fairness, transparency and accountability in health care priority setting at district level in Tanzania. Global Health Action. (2011) 4:7829. doi: 10.3402/gha.v4i0.7829

47. O'Malley P, Rainford J, and Thompson A. Transparency during public health emergencies: from rhetoric to reality. Bull WHO. (2009) 87:614–8. doi: 10.2471/BLT.08.056689

48. Martin D, Giacomini M, and Singer P. Fairness, accountability for reasonableness, and the views of priority setting decision-makers. Health Policy. (2002) 61:279–90. doi: 10.1016/S0168-8510(01)00237-8

49. Kapiriri L, Norheim O, and Martin D. Fairness and accountability for reasonableness. Do the views of priority setting decision makers differ across health systems and levels of decision making? Soc Sci Med. (2009) 68:766–73. doi: 10.1016/j.socscimed.2008.11.011

50. Mshana S, Shemilu H, Ndawi B, Momburi R, Olsen O, Byskov J, et al. What do district health planners in Tanzania think about improving priority setting using 'Accountability for reasonableness'? BMC Health Serv Res. (2007) 7:180. doi: 10.1186/1472-6963-7-180

51. Rid A, and Biller-Andorno N. Justice in action? Introduction to the minisymposium on Norman Daniels' just health: meeting health needs fairly. J Med Ethics. (2009) 35:1–2. doi: 10.1136/jme.2008.025783

52. Hasman A, and Holm S. Accountability for reasonableness: opening the black box of process. Health Care Anal. (2005) 13:261–73. doi: 10.1007/s10728-005-8124-2

53. Thompson A, Faith K, Gibson J, and Upshur R. Pandemic influenza preparedness: an ethical framework to guide decision-making. BMC Med Ethics. (2006) 7:12. doi: 10.1186/1472-6939-7-12

54. Bell J, Hyland S, DePellegrin T, Upshur R, Bernstein M, and Martin D. SARS and hospital priority setting: a qualitative case study and evaluation. BMC Health Serv Res. (2004) 4:36. doi: 10.1186/1472-6963-4-36

55. Brall C, Schröder-Bäck P, and Maeckelberghe E. Ethical aspects of digital health from a justice point of view. Europ J Pub Health. (2019) 29(Suppl. 3):18–22. doi: 10.1093/eurpub/ckz167

56. Manders-Huits N. What values in design? The challenge of incorporating moral values into design. Sci Eng Ethics. (2011) 17:271–87. doi: 10.1007/s11948-010-9198-2

57. Jacobs N, and Huldtgren A. Why value sensitive design needs ethical commitments. Ethics Inf Technol. (2021) 23:23–6. doi: 10.1007/s10676-018-9467-3

58. Gibson J, Martin D, and Singer P. Priority setting in hospitals: fairness, inclusiveness, and the problem of institutional power differences. Soc Sci Med. (2005) 61:2355–62. doi: 10.1016/j.socscimed.2005.04.037

59. Upshur R. Principles for the justification of public health intervention. Canad J Pub Health. (2002) 93:101–3. doi: 10.1007/BF03404547

60. Jennings B. Frameworks for ethics in public health. Acta Bioethica. (2003) 9:165–76. doi: 10.4067/S1726-569X2003000200003

Keywords: digital ethics, principlism, public health ethics, procedural values, accountability for reasonableness, A4R

Citation: Nyrup R (2021) From General Principles to Procedural Values: Responsible Digital Health Meets Public Health Ethics. Front. Digit. Health 3:690417. doi: 10.3389/fdgth.2021.690417

Received: 02 April 2021; Accepted: 14 June 2021;
Published: 02 July 2021.

Edited by:

Geke Ludden, University of Twente, Netherlands

Reviewed by:

Olya Kudina, Delft University of Technology, Netherlands
Merlijn Smits, Radboud University Nijmegen Medical Centre, Netherlands

Copyright © 2021 Nyrup. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Rune Nyrup, rn330@cam.ac.uk
