
OPINION article

Front. Hum. Neurosci., 23 December 2009
Sec. Cognitive Neuroscience
This article is part of the Research Topic “Emerging issues in brain imaging: a multidisciplinary dialogue”.

Regulating brain imaging: challenge and opportunity

Roger Brownsword*

School of Law, King’s College London, UK

Introduction

The development of powerful new brain imaging technologies is likely to present a range of opportunities in many spheres of social life – for example, in the criminal justice system, in employment and in business contexts, and so on (Greely, 2006). Regulators are challenged to create the right kind of environment for the application of these technologies; but, they might also see an opportunity to adopt these technologies for their own regulatory purposes. In this Opinion, I comment, first, on the challenge of creating the right kind of regulatory environment and, then, on the implications of adopting a technology-reliant regulatory strategy.

The Regulatory Environment

When we act, whether as developers, commercial exploiters, or users of technologies, we do so in a particular “regulatory environment” – an environment where actors are faced with a range of signals indicating whether it is right to act in a particular way, whether an act is prudent, and even whether an act is possible. Some such regulatory environments function in a top-down fashion (with regulators clearly distinguishable from regulatees), others are more bottom-up (in the sense that they are self-regulatory); and, whilst some are reasonably stable, others are unstable, and so on.
For present purposes, the most significant feature of any regulatory environment is the range of coding that is available to regulators. Essentially, regulators will seek to engage the practical reason of regulatees in one (or more) of the following three registers:
i. by signalling that some act, x, categorically ought or ought not to be done relative to standards of right action (as in retributive articulations of the criminal law); or
ii. by signalling that some act, x, ought or ought not to be done relative to the prudential interests of regulatees (as in deterrence-driven articulations of the criminal law); or
iii. by designing people, places, or products in such a way that some act, x, simply cannot be done – in which case, regulatees reason, not that x ought not to be done, but that x cannot be done.
If we rely on the first two registers, we assume that, whatever the response by regulatees, “they could have acted otherwise”, and that they are fairly held responsible for their actions. Where, however, the signal is in the third register, regulatees can only comply and it follows that it is no longer correct to say that “they could have acted otherwise”. In this light, we can observe a dual assault on traditional regulatory thinking: developments in the brain sciences challenge our assumption that, in general, “regulatees could have acted otherwise”; and technologies such as brain imaging provoke regulators to think about compliance by design (where regulatees are controlled so that they cannot act otherwise than they do) (Lessig, 1999).

The Right Kind of Regulatory Environment

The “right kind” of regulatory environment will be one where regulators are trying to do the right kind of thing in the right kind of way, where regulatory interventions are fit for purpose, and where regulation is properly engaged and connected.

Are Regulators Trying to do the Right Kind of Thing?

Generally, the regulatory environment should be geared for risk management and benefit sharing; and, the red lines (if any) should be drawn in the right places. More specifically, in the case of brain imaging, we might say that respect for human rights and human dignity should be properly secured, that privacy and confidentiality should be protected, that informed consent should be in place, and that there should be such precaution as is proportionate. However, this is easier said than done.
While most agree that regulators should ensure that brain imaging technologies are applied only once we are satisfied that they present no (unacceptable) risk to the health and safety of humans or to the integrity of the environment, agreement is more difficult to find once we consider issues of human dignity, privacy and the like. The problem is that, in pluralist societies, the leading ethical constituencies – the utilitarian, the human rights, and the dignitarian (Brownsword, 2008) – interpret the focal concepts in their own way. For example, while the need for free and informed consent is integral to an ethic of individual (human) rights, dignitarians (who treat duties as non-negotiable) can dismiss it as irrelevant; and, while rights theorists regard the obtaining and signalling of consent as a fundamental matter of principle, for utilitarians, these processes can be viewed as no more than transaction costs (Beyleveld and Brownsword, 2007).

Are Regulators Going About Their Business in the Right Kind of Way?

In many societies, regulators are expected to operate in ways that are transparent, accountable, inclusive and participatory. Hence, where a legislative framework is agreed for the application of a new technology, this will be preceded by public consultation, media and parliamentary debate, and so on. However, it is not always the case that the operative rules regulating the use of a technology are so debated and agreed. Quite possibly, all that we have are informal self-regulatory codes or guidelines, coupled with fall-back general legal provisions such as those found in the criminal law and the law of torts. This might not be thought adequate; and brain imaging might be just such a case, one where an open regulatory debate is required.

Are Regulatory Interventions Fit for Purpose?

Even if regulators are trying to do the right kind of thing, and proceeding in the right kind of way, the regulatory environment will be deficient unless regulatory interventions are effective. Sadly, regulatory effectiveness cannot be taken for granted.
First, there might be problems with the regulators themselves, with their integrity and competence, as well as with the adequacy of their resources. While regulators who lack integrity are prey to corruption or capture, those who are simply incompetent might be unclear about their regulatory purposes – or the standards set by such regulators might fail to give workable guidance to regulatees (Fuller, 1969). Where resources are inadequate, regulators (acting on poor policy advice) might seriously miscalculate the consequences and indirect effects of their intervention; and their ability to monitor compliance and to correct for non-compliance might be severely limited.
Secondly, it might be the regulatees that are the problem. Predictably, some (habitual criminal) regulatees respond in the wrong way. However, we also need to anticipate non-compliance in more “respectable” quarters, particularly where economic or professional imperatives prevail. One of the facts of regulatory life is that, so long as regulators are not pushing at an open door, they must either try to minimise resistance ex ante or have a strategy for dealing with it ex post.
Thirdly, it is perfectly possible that the relationship between regulators and regulatees is aligned for effectiveness and yet a regulatory intervention fails because of some external disruption.

Is Regulation Properly Engaged and Connected?

With the rapid development and application of technologies, it is a commonplace that regulation (especially legislation) lacks sustainability, losing connection with its particular technological target. But, before disconnection and reconnection, there needs to be an initial connection.
Given that technologies “do not arrive fully mature” (Moor, 2008), regulators need to consider whether we are in the introduction, permeation, or power stage of the development, utilisation, and penetration of a particular technology. In the case of brain imaging technology, we are probably somewhere between the stage of introduction (when the technology is expensive, known about only by a few specialists, and not in general circulation) and the stage of permeation (when the costs start to drop, circulation spreads, and demand increases). At this stage, even if brain imaging is not being conducted in a regulatory void, the question arises of whether a more dedicated regulatory connection is called for. So long as the technology is large, visible, and expensive, it might be tempting to think that we can operate with a regulatory scheme that is based on registration, inspection, and institutional responsibility. However, as technologies assume much less expensive and more widely distributed formats, we might find that the regulation has become disconnected (Greely, 2006), leaving a regulatory environment that is deficient.

Technology as a Regulatory Tool

Where regulators turn to technology as a regulatory instrument, as they have done with the use of CCTV, DNA profiling, biometrics, and so on, there is a change in the regulatory culture; and we can again rehearse our questions about the adequacy of the regulatory environment. However, for present purposes, let me highlight just one question, namely whether regulators, who rely increasingly on technological tools, are operating in the right kind of way (Rothstein and Talbott, 2006).
If brain imaging were to be introduced into the regulatory repertoire, we might ask whether its use is compatible with respect for human rights and due process (Rosen, 2007), but also whether it is corrosive of the conditions of moral community (Brownsword, 2008). Would the impact of such technology be threatening to our assumption of moral agency and to our practice of attributing personal responsibility?
To repeat, we face two challenges to our traditional regulatory practice. The first is the challenge of the new brain sciences. To undermine our current practices, brain science must convince us that we are not really in control of our actions, whether other-regarding (moral) or purely self-regarding (prudential) actions. This seems a tall order (Morse, 2006). The second is the challenge, not so much of the science, but of the technology. If regulators turn to new technological tools of control, they can intensify the pressure on prudential reason (by making detection a near certainty); and, in some cases, regulatees might find that the technology so confines them that they have no choice other than to comply. Either way, it is arguable that the conditions for the cultivation of moral virtue are undermined; and, where there is no option to deviate, the regulatory environment already puts regulatees in a position where they cannot act otherwise than they do. Even if the science has a long way to go before it is disruptive, the technology is already with us.

Conclusion

With the emergence of a new suite of technologies, regulators face new challenges. However, we need not start entirely afresh. Getting the regulatory environment right raises a number of generic issues (concerning legitimacy, effectiveness and connection) that are familiar; and many of the particular puzzles are well-rehearsed. There are also regulatory opportunities and, once again, we face some fundamental, but not unfamiliar, questions about the character of the regulatory environment. Nevertheless, work in the new brain sciences coupled with the development of brain imaging doubly tests the resolve of a community with moral aspirations: on the one hand, it tests our determination to persist with a regulatory approach that respects the ideal of moral agency; on the other, it tests the resilience of the belief that we have in ourselves as moral agents.

References

Beyleveld, D., and Brownsword, R. (2007). Consent in the Law. Oxford, Hart.
Brownsword, R. (2008). Rights, Regulation and the Technological Revolution. Oxford, Oxford University Press.
Fuller, L. L. (1969). The Morality of Law. New Haven, Yale University Press.
Greely, H. T. (2006). The social effects of advances in neuroscience: legal problems, legal perspectives. In Neuroethics, J. Illes, ed. (Oxford, Oxford University Press), pp. 245–263.
Lessig, L. (1999). Code and Other Laws of Cyberspace. New York, Basic Books.
Moor, J. H. (2008). Why we need better ethics for emerging technologies. In Information Technology and Moral Philosophy, J. van den Hoven and J. Weckert, eds. (Cambridge, Cambridge University Press), pp. 26–39.
Morse, S. J. (2006). Moral and legal responsibility and the new neuroscience. In Neuroethics, J. Illes, ed. (Oxford, Oxford University Press), pp. 33–50.
Rosen, J. (2007). The Brain on the Stand. The New York Times, March 11.
Rothstein, M. A., and Talbott, M. K. (2006). The expanding use of DNA in law enforcement: what role for privacy? J. Law Med. Ethics 34, 153–164.
Citation:
Brownsword R (2009). Regulating brain imaging: challenge and opportunity. Front. Hum. Neurosci. 3:50. doi: 10.3389/neuro.09.050.2009

Received: 18 June 2009; Paper pending published: 17 July 2009; Accepted: 02 November 2009; Published online: 23 December 2009.

Edited by:

Chiara Saviane, SISSA, Italy

Reviewed by:

Stefano F. Cappa, Vita-Salute San Raffaele University, Italy
Chiara Saviane, SISSA, Italy
Copyright:
© 2009 Brownsword. This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.
*Correspondence:
Roger Brownsword, School of Law, King’s College London, Strand, London WC2R 2LS, UK. Email: Roger.brownsword@kcl.ac.uk
