BEAMING Legal and Ethical Workshop, Wednesday October 10th 2012

Julian Savulescu and Tim Nissen during the BEAMING legal workshop

On Wednesday, 10th October, the BEAMING Consortium met for a workshop on the ethical and legal implications of researching and developing BEAMING and BEAMING-like technologies.

The workshop began with a brief introduction from Patrick Haggard about the importance of integrating ethical and legal issues into the core research activity of any project developing new technologies. The aim of the workshop was to help the Consortium think about what types of issues BEAMING might raise and how we could address them.

Anthony Farrant introduced some of the philosophical principles that generally form a framework for ethical debates. Focussing on the principle of autonomy, he considered how this key concept in moral philosophy could help to elucidate the broad range of ethical concerns that should guide the development of the BEAMING project.

In moral philosophy, autonomy is understood in several different ways. In its ordinary usage, autonomy commonly means independence. Philosophically it can be more complex: some traditions view autonomy as self-regulation, the capacity to use higher mental states to regulate lower ones. Odysseus, for example, constrained his liberty by tying himself to the mast of his ship in order to frustrate his lower-order attraction to the song of the sirens, so that he could satisfy his higher-order plan to hear them. Kant argued that we are self-legislators, and that the autonomy of reason is the root of morality because it makes us will universal laws for action. Autonomy, then, has been related to the concepts of independence, self-regulation and morality. But how does autonomy relate to BEAMING?

Ensuring that, as developers of BEAMING technology, we respect the autonomy of its users means considering, among other things, the safety, privacy and veracity of the technology and its users. If the technology causes somebody physical harm, alienates someone from their desires or affects someone’s ability to decide what is right and wrong, then it restricts their autonomy. Similarly, keeping the information shared between locals and visitors within BEAMING, as well as the information stored about people, secure is itself a way of respecting users’ autonomy. Verifying that people are who they claim to be, and that users are not deceiving others within the system, also ensures that users’ autonomy is respected, for instance by preventing people from being influenced by false representations of other users.

A strongly related issue is trust. As developers we need to take every measure we are capable of taking to ensure the safety, privacy and veracity of users and their information. Users need to trust that regulations exist to guide and constrain the uses of BEAMING-like technologies and that the developers have followed these regulations. There also needs to be trust within the community that people are prepared to respect the autonomy of others.

In examining the ethical concept of autonomy, Anthony showed us the importance of considering these issues when planning the development and application of this new technology.

Continuing in the philosophical theme, Julian Savulescu discussed how ethical principles like autonomy can shape the choices we make in developing new technologies. Julian argued that the ultimate cause of many global catastrophes is human choice: things like terrorism, poverty, global warming and the Fukushima nuclear disaster can all be attributed to choice, whether it’s the choice to fund bank bonuses rather than schools in developing countries, or the choice to continue using fossil fuels at such high rates when we know the damage this is doing to the environment. The enormous power that comes with the technologies developed even over the last century gives us the capacity to do great good, but also great harm. Ethical principles like autonomy can and should guide us in choosing how we design new technology like BEAMING, how we restrict its use, and how we disseminate knowledge about it. These are ethical questions, not scientific ones.

Concepts like harm, risk and responsibility should inform the ethical discussion around new technologies. For instance, if we believe that we as moral people should at least do those things that provide great benefit to others at minimal sacrifice to ourselves, then we should design a new technology like BEAMING to encourage, but not require, people to make that minimal sacrifice for the greater good. A new technology also brings some amount of risk to the user, whether that risk is physical (like electrocution or injury from the equipment) or otherwise (like the risk of fraud or identity theft). Julian argued that risk is acceptable when the benefits are commensurate with the risk, when the risk has been minimized as far as possible, and when there is no better or safer alternative to the risky course of action. Competent adults can make their own decisions about what risks they are willing to take, as long as the risks are known and documented, and the adult is informed of them.

The discussion of risk raised questions about consent. Respecting a person’s autonomy requires getting their informed consent to undertake a risk. But informed consent requires that the person is competent to give their consent, that they are informed of the risks, and that they give their consent freely and without coercion or exploitation. We need to ensure that the users of BEAMING are able to give informed consent by identifying the possible risks the technology poses and informing users of them.

The concept of responsibility relates to dual use when developing new technologies. How responsible are the developers for the way that the technology is used by the public? Julian contended that the degree to which a person is responsible for the outcome of an action depends on the degree to which they could foresee that result, and on how avoidable it was. We as developers can’t control how a government or criminal uses the technology that we develop, but we can document the risks, do our best to think about the possibility of dual use, try to promote dialogue with potential users, and be honest about the technology’s potential, in order to reduce as much as possible the risk it poses. In this way, we can reduce our responsibility for harm resulting from the use of the technology.

Timothy Nissen discussed how the law responds to new technologies. Essentially, this happens in one of three ways: the technology may be governed by existing generic legislation, by existing related legislation, or by new bespoke legislation. New legislation is rarely created before a new technology is rolled out; this is usually reserved for cases of potential catastrophe, like nuclear power and early DNA research, or for cases where a government wishes to create something potentially controversial and/or potentially unlawful in the absence of such legislation, as in the case of the now defunct national identity database. Most politicians probably don’t know about the BEAMING project, and it doesn’t fall under the category of “potential catastrophe” or “being created by government”, so any new legislation required by its development and use will almost certainly come after the fact, when it’s on sale and in use.

The BEAMING technology will be subject to existing legislation, like health and safety laws covering energy use, electronics and the manufacturing process. Lawmakers can use these existing laws to cover aspects of the new technology. For instance, laws relating to existing telecommunications technologies may be “stretched” to cover the new possibilities allowed by BEAMING. However, Timothy reminded us that if we want BEAMING to be available in other jurisdictions, it has to comply with the laws of those jurisdictions. A particularly important example is the United States’ Communications Assistance for Law Enforcement Act, which requires that communication technologies allow for third-party law enforcement surveillance. This means that, if we want BEAMING to be available there, we need to ensure that such surveillance is in fact possible within the technology.

Laws reflect the values that a society holds, but how does society reach conclusions about the rights and wrongs of new technologies? Often new laws are the result of resistance from society at large, for instance when people have a bad experience with a new technology. To gauge and respond to the public’s general feeling towards a new technology, organized public engagement can be a very useful tool. Citizen collectives discussing technologies with their developers, with other technologists, or even just amongst themselves can provide useful feedback directly from the people who will be using the technology, as well as countering possible resistance to it in the public’s mind. This process requires engagement with the citizenry, not just dissemination of information.

Organized public engagement can also give technologists a stronger hand when approaching the possibility of new legislation. Because it is an open process with free discussion between the public and technologists, it helps to avoid mistrust or suspicion of developers. It can also help us foresee and plan for possible uses, both beneficial and harmful. While not required by law as part of developing a new technology, organized public engagement provides many benefits, both to developers and to legislators.

After the presentations from Anthony, Julian and Timothy, there followed a round-table discussion of the topics we had encountered throughout the workshop. There was concern about the degree to which the public would trust the accuracy of their BEAMING avatar and its interaction with the world, because the user and avatar are not co-located. When we split the fate of the avatar or robot from the fate of the person controlling it, how are the autonomy, responsibility and identity of the user, and of anyone they interact with through BEAMING, affected? The extent to which the robot is dependent on the user for its actions, or the extent to which it acts autonomously, affects how a user is perceived by others, and therefore affects the user’s sense of identity. That identity is also at risk from factors like fraud. So should we as developers try to prevent people from pretending to be other people, or do we allow for the anonymity of the internet? Or do we not take a stance on this at all? Do the choices we make in this domain affect the applications that are available for the technology? In general, the group agreed on the importance of designing the technology to encourage desirable behaviour and discourage dangerous or immoral behaviour, but questions remained about the best way to ensure that people trust that their avatar is an accurate and secure representation of themselves.