Address for correspondence: Caragh Brosnan, School of Humanities and Social Science, University of Newcastle, Callaghan Campus, NSW 2308, Australia e-mail: firstname.lastname@example.org
The ethical issues neuroscience raises are subject to increasing attention, exemplified in the emergence of the discipline neuroethics. While the moral implications of neurotechnological developments are often discussed, less is known about how ethics intersects with everyday work in neuroscience and how scientists themselves perceive the ethics of their research. Drawing on observation and interviews with members of one UK group conducting neuroscience research both at the laboratory bench and in the clinic, this article examines what ethics meant to these researchers and delineates four specific types of ethics that shaped their day-to-day work: regulatory, professional, personal and tangible. While the first three categories are similar to those identified elsewhere in sociological work on scientific and clinical ethics, the notion of ‘tangible ethics’ emerged by attending to everyday practice, in which these scientists’ discursive distinctions between right and wrong were sometimes challenged. The findings shed light on how ethical positions produce and are, in turn, produced by scientific practice. Informing sociological understandings of neuroscience, they also throw the category of neuroscience and its ethical specificity into question, given that members of this group did not experience their work as raising issues that were distinctly neuro-ethical.
Ethical issues surrounding neuroscience are currently the subject of interest across numerous disciplines. This article explores the meanings of ethics that emerged through studying the experiences of one group of researchers working in translational neuroscience. It aims to contribute to our understanding of how ethical positions produce and are produced by scientific practice. In so doing, it builds on sociological work on ethics that emphasises the importance of studying ‘how ethics are “done” in everyday life’ (Haimes 2002: 99). At the same time, we aim to contribute an empirically grounded analysis of ethics in neuroscience. While there has been much discussion of the ethical issues that neuroscience raises, little is known about how ethics intersects with day-to-day practice in this field and how scientists themselves perceive the ethics of their work.
Drawing on observation and interviews with members of a UK group conducting basic and clinical neuroscientific research, the article examines what ethics meant to these researchers and delineates four specific types of ethics that shaped their day-to-day work: regulatory, professional, personal and tangible ethics. Before presenting these findings, we discuss existing work in two key areas: the ethics of neuroscience and empirical work in sociology on ethics.
Ethics of neuroscience
Neuroscience is one of the most active areas of biomedicine in terms of public visibility, research funding, output and, arguably, scientific progress (Pickersgill 2011, Vrecko 2010). As the field has gained prominence the ethical issues associated with neuroscience have begun to receive greater scrutiny. Scholarly analysis has been spearheaded by the discipline of neuroethics, which focuses on the ethical implications of neuroscientific developments, including neuroimaging, neuropharmacological enhancement and neurostimulation (Racine 2010), in addition to the implications of such neurotechnologies for how we understand ethics (Roskies 2002). While examining the ethical implications of neuroscience is valuable, Pickersgill (2012) points out that this focus overlooks the role that ethics plays in configuring neuroscience itself. To date, few studies have included neuroscientists as research participants with the aim of exploring how ethics comes into their everyday work.
Several such studies focus specifically on neuroimaging. Illes et al.’s (2010) survey found that North American researchers involved in neuroimaging rated patient confidentiality and consent, external influences on academic research, conflicts of interest and participants’ vulnerability and expectations as important issues in their work. Managing incidental findings on brain scans can be problematic (Deslauriers et al. 2010) and there is variability in how such findings are dealt with (Illes et al. 2004). Robillard et al. (2011) focused on ethical issues for researchers working on neurodegenerative disorders. Drawing on Illes et al.’s (2010) survey to construct their questionnaire, they also found that confidentiality and consent, and the influence of external factors were important. Several respondents commented on ethical issues in animal research in a free-text section of the questionnaire, which Robillard et al. acknowledge was not designed to explore work with animals or cells, although these were the respondents’ most common research subjects.
Pickersgill (2012) goes further in terms of explaining how ethics manifests itself in day-to-day neuroscience. Drawing on focus groups with UK neuroscientists, he argues that science and ethics are co-produced, each embedded in and shaping the other, being mediated through the emotions and experiences of researchers and their relationships with others. The participants identified dealing with incidental findings as their principal ethical concern, along with collaboration with countries seen as less ethical, patient confidentiality and the upstream implications of neuroscientific work. Pickersgill’s participants were mostly involved in human subject research, and the way that ethics intersects with other kinds of neuroscientific work, such as at the laboratory bench, is not specifically discussed. Our article adds to this small body of work by examining the ethical issues of relevance to a translational neuroscientific group involved in both human and more basic research, and by documenting the various forms of ethics that manifested in practice in their everyday work.
Empirical ethics in sociology
Critiques of bioethics as relying too much on abstract philosophical reasoning and too little on empirical investigation (Hedgecoe 2004) have prompted the development of empirical ethics, an area that offers rich potential for social scientific input. A number of sociological studies already contribute to this interdisciplinary project by elucidating ‘the social processes, meanings and institutions that frame and produce “ethics” and “ethical problems”’ (Haimes 2002: 110) in scientific and clinical settings. We build on this work on biomedical professionals’ experiences of ethics by extending it to neuroscience. Three key findings from prior research are of relevance here: ethical boundary-work enables practitioners to respond pragmatically in contentious areas; regulation does not map neatly onto practice; and professional roles shape ethics.
Previous work illustrates the ways in which ethical boundary-work can be used to reach workable solutions in contested scientific areas (Ehrich et al. 2006, Frith, Jacoby, and Gabbay 2011, Hobson-West 2012, Wainwright et al. 2006a). This research draws on Gieryn’s (1983) concept of boundary-work and shows that beyond differentiating science from non-science, researchers draw boundaries within science, reflexively ordering practices along a spectrum from ‘more’ to ‘less’ ethical. As well as helping to maintain science’s public image, this enables scientists to reach practical solutions in ethically contentious areas. For example, Wainwright et al. (2006a) show how embryonic stem cell scientists constructed some sources of embryos as more ethical than others and drew lines around which embryos they were personally willing to work with. By deferring to regulatory frameworks, scientists were able both to present themselves as ethical scientists who were following the rules and to progress their work without being hindered by ethical concerns. This finding has been replicated several times, leading Frith et al. (2011: 579) to argue that ‘displacement of responsibility’ when working in highly regulated science is a recognisable repertoire of boundary-work.
Despite this widespread deferral to regulation, another recurring finding is that regulation does not map neatly onto practice. Professionals may act according to their personal values when these are less permissive than the regulations. For example, Wainwright et al. (2006a) found many UK scientists were unwilling to work with embryos created for research purposes, even though this is permitted in the UK. Professional roles also shape ethics. Cribb et al. (2008) show how, in translational stem cell research, the scientists and doctors they interviewed had different perceptions of what mattered and how research should be conducted. The differing epistemological orientations, communities of practice and institutional and regulatory structures of science and medicine resulted in a ‘division of ethical labour’ (p. 53), where each group had separate ethical concerns. Similarly, Birke et al. (2007) identify a division of emotional labour among laboratory staff involved in animal experimentation, whereby technicians and research scientists have differing amounts of contact with the animals and different feelings towards them. These studies illustrate how professionals working in contested biomedical areas typically draw simultaneously and reflexively on various sources of ethical guidance, including personal, professional and regulatory spheres. As will be discussed, the neuroscientists in our study were also guided by these influences, and additionally, we argue, were confronted with a fourth moral sphere in the form of tangible ethics.
One gap in existing studies is that they tend to focus on how boundaries are drawn at a discursive level and the positions researchers take on working with contentious entities or experimental therapies. They tell us less about how, in practice, ethical boundaries shape individuals’ day-to-day work and vice versa (Hobson-West 2012: 13). While several of the above-mentioned studies report using observation, only interview data are generally discussed. Little is said about how the various moral frameworks clinicians and scientists report drawing on are negotiated and applied in real time, how they relate to the physical setting of the laboratory or clinic and whether there are other ethical spheres at play whose influence is not articulated. With regard to work on the embryo, Harvey and Ehrich (2011) have called for a consideration of the embryo itself as an actor in its own ethical configuration. Attempting to move beyond analyses of how embryos are constructed by staff, they argue that embryos are better understood as ‘socially contingent, material agents that cannot be regarded as entirely passive’ (2011: 12). Incorporating the material into analyses of ethics – an approach that potentially has applicability beyond the embryo – requires attending much more closely to the daily practices of laboratories and clinics, as scholars in anthropology (for example, Franklin and Roberts 2006) and science and technology studies (for example, Latour 1987) have done.
Similarly, it is seldom clear in prior sociological work on ethics whether what counts as ethics is grounded in an analysis of practice or is defined according to interviewees’ understandings. In this article we unpack this distinction by exploring how ethics was understood by participants and by analysing how ethics acted and was enacted within the activities of the study site. We draw on fieldwork conducted in one research group that works on both Parkinson’s disease (PD) and Huntington’s disease (HD). Our study forms part of a larger research programme on translational neuroscience that is investigating the ethical, clinical, scientific and legal landscapes of translational research in contrasting neurological conditions.
The setting is a UK university-based research group with around 18 scientific and clinical members, situated in a neuroregenerative research centre. The group, led by a clinician-scientist, aims to increase understanding of the aetiology and features of PD and HD and to develop therapies via an array of approaches ranging from pharmacological to genetic to cell-based experimental treatments. One of the largest projects at the time of fieldwork involved research into foetal brain tissue grafts as a therapy for PD, with the goal of moving from in vitro and in vivo animal work to a clinical trial in the near future. Both bench and bedside research is conducted, with one of the central goals of the group being translation between these domains. The layout of the building reflects this purpose: on one floor is a clinic where PD and HD patients come for assessment and to take part in research; on another are the labs where bench science is undertaken. There is a circulation of cells and tissue, patients and staff between the research centre and a nearby hospital. The staff work mostly in either the lab (undertaking cell, tissue and animal behavioural research) or the clinic (conducting neuropsychological and genetic testing of patients and running drug studies). The lab side of the group includes both pure scientists and some people with clinical training; the clinic side involves a mix of backgrounds, from medicine, neuroscience and psychology to clinical trial management. The group comes together for a weekly lab meeting during which one person reports on their project.
Data collection was conducted by CB (the first author) between September 2010 and April 2011, comprising observation that preceded and ran concurrently with interviews. Observation included observing cell and tissue-based work at the bench, patient cognitive testing and team meetings. Observation was crucial for understanding team members’ varied roles and daily experiences. In all, 13 interviews were conducted with team members including the group leader, split between the lab and the clinic. A group leader from another group in the centre was also interviewed to add an additional perspective from a senior neuroscientist. Interviews lasted 1 hour on average and were recorded and transcribed. Interviewees were asked general questions about their background and motivation for working in the area; what ethical issues they encountered; the relationship between the laboratory and the clinic and how they felt about the activities they were involved in. These latter questions were drawn from the observation data where possible, hence interviews and observation were tightly connected. Data were thematically analysed using NVivo as a way to structure the data and all the research team contributed to the generation of the identified themes. To protect the identities of our informants we classify them here as either lab researchers (LR) or clinic researchers (CR).
As with previous studies, there were a number of different ethical or moral spheres that shaped and were embedded in the daily work of this neuroscientific research group. Individual researchers drew on these different influences in a reflexive manner that assisted them in reconciling challenging or contradictory moral interpretations of their work. However, some spheres had more legitimacy than others. Figure 1 depicts four key ethical spheres emerging from the data – regulatory, professional, personal, and a new category, tangible ethics – and their order of influence in the research setting. Each sphere is discussed in turn.
It soon became clear in both the observation and interviews that the dominant meaning of ethics in the group was the external regulatory approvals required to conduct research with patients, human tissue or animals. For studies involving patients and human tissue, approval has to be granted by a National Health Service research ethics committee. Scientific work with animals is licensed and monitored by the UK Home Office. The word ‘ethics’ had become so synonymous with these approval processes that nearly half of the interviewees conflated ethics and regulation, as in the following excerpt:
CB: What role do ethical concerns play in how you set up the research?
CR2: I don’t think it really affected anything I do. It made me think about things that I wouldn’t have otherwise thought about, like insurance and what happens if things go wrong … But in terms of the actual study, I think I just set it up as I wanted to do it, and then obviously sent it off to Ethics and it was all fine. So there was nothing that Ethics and the ethical procedure stopped me from doing that I wanted to do.
LR2 distinguished moral deliberation from ethics committee applications, but recognised that the latter were more often discussed within the group:
CB: Do you ever discuss ethical issues within the group?
LR2: We discuss ethical application forms! But I think that’s mainly discussion, you know, how can we make sure the committee approves what we want to do? Rather than, is what we’re doing ethically correct?
The regulatory ethics process therefore loomed large in these neuroscientists’ minds when ethics was mentioned, and indeed, regulation was omnipresent in day-to-day practice (cf. Pickersgill 2012). The various legal acts and ethical guidelines governing work with humans and animals, as well as project-specific requirements enshrined in ethics committee approvals, structured almost every aspect of work in the lab and clinic, including which patients, animals or tissue types could be involved, how they could be accessed, what procedures could be performed on them, where in the building such procedures could be carried out, for what duration, who could conduct the research and what training was needed. At a practical level, therefore, daily tasks were carried out by approved personnel in tightly bounded ethical spaces.
External ethical regulation also affected the way in which experiments were designed and the data that were generated. LR4, who was working with HD model mice, explained that ideally her research would include testing how different treatments affected the mice’s longevity. However, this was impossible because according to Home Office regulations the mice had to be killed as soon as they began showing signs of the disease:
We have to look at when the mouse dies naturally to see the effect of treatment, but it’s not allowed … As long as they express the [HD] phenotypes, we have to kill them. And if they have very little symptoms, the Animal House ring us, and ‘Please kill the mice now’.
In addition to shaping research practices, ethical regulation was a major actor in the group’s work in terms of the time spent discussing ethics applications, applying for ethical approvals and maintaining related paperwork. In the lab meetings observed, ethics committee approvals were frequently discussed and many interviewees lamented the time they had to devote to the administrative work related to regulation. However, as Hobson-West (2012: 8) also reports from interviews with another group of scientists, some group members felt that the process of putting together ethics applications was helpful in terms of having to think through research plans in advance. CR4 was one of several who saw the regulatory ethics process itself as integral to good research:
CR4: I think with ethics these days, you can’t really get away with that much now anymore. Compared to 10 years ago, the ethics forms are vastly different. They used to be about three pages long and now they’re 93 pages!
CB: Do you think that actually makes things more ethical?
CR4: I think, yes, it does. Before you basically gave a brief summary of the research that you were going to do … Now, you have to state exactly what you’re doing, why you’re doing it, when you’re doing it, and how long you’re doing it for. And I think that does make people really question and consider why they’re doing things.
So the fact that ethics was often understood to mean external ethical regulation should not be viewed as erroneous on the part of these neuroscientists, nor as insignificant. It reflects the actual constitution of practice in the group, in which regulation affected almost every aspect – from the conceptualisation of experiments to their conduct – and the way that these researchers had learned to think ethically. Regulation set a rigid ethical space within which practice took place. However, within that space other ethical spheres also operated and other boundaries were drawn, and we now look at the influence of what we have called professional ethics.
Cribb et al. (2008) have shown how the contrasting professional roles of doctors and scientists can lead to somewhat different moral viewpoints in translational research. We found that it was day-to-day practices and the space in which they were carried out that framed staff members’ ethical concerns, rather than professional roles per se. Staff worked mainly in either the lab or the clinic, although in each subgroup there was a mix of professions and roles, with some lab scientists having a medical background and some clinic staff having trained in basic sciences. It was striking, however, that individual group members self-identified as belonging to either the lab or the clinic part of the group and perceived quite a sharp divide between them. CR1, for example, described the situation and the weekly lab meeting as follows:
Every week someone presents their own work, but you tend to find that lab people will comment on lab techniques, because the clinic people won’t know any of the lab techniques … then when clinic people present, it’s other clinic people who comment on it. I think in terms of [group leader], he’s got a great sense of both and he links the two in his own mind. But I wouldn’t say there’s a great link really. I mean you don’t even get to know people’s names upstairs [in the lab].
A boundary was therefore constructed by group members between the lab and the clinic, with each domain seen as epistemologically distinct. Other researchers have explored the two cultures of the lab and clinic (Martin et al. 2008, Wainwright et al. 2006b, Wilson-Kovacs and Hauskeller 2011), but what is significant here is that this division persisted even in a single research group whose self-defined goal was translation. The same boundary was used to mark out the division of ethical labour, as each subgroup felt that they had specific ethical issues that pertained to their domain. Ethical concerns in the clinic subgroup centred on the overburdening and inconveniencing of patients participating in research studies. In contrast, ethical issues in the lab were seen to centre on animals and the foetus. Strikingly, when asked if there were any common ethical issues in the group as a whole, none of the interviewees thought there were.
Furthermore, each of the two subgroups constructed the other as having more pressing ethical concerns than themselves. For example, on CB’s second day of observation, a clinic researcher told her that two of the lab researchers were the best people to observe because their work had the most ethical implications. Similarly, CR5, when asked about ethical issues, commented:
I don’t think I’ve had any major issues so far, because I mean I haven’t been involved in doing like stem cells or foetal tissue. I know some other people probably are. But no, I haven’t as yet been involved in that kind of research, so it’s mainly been quite simple with patients.
Conversely, when asked about the relationship between science and bioethics, LR3 said:
I don’t have too much to do with – I mean, I know that we have ethical permission for doing our human foetal work … but I suppose I just conform to the part that I need to conform to and don’t think too much about it otherwise really. So in fact it’s something that I think of as being more relevant to the clinic than to the lab.
Thus, ethical boundaries were constructed between working with patients and working with contentious entities such as animals and the foetus. In these discussions, there was virtually no acknowledgement that the group’s goal was actually to translate one type of practice into the other. In fact, both lab and clinic group members claimed to know little of the work going on in the other domain. Birke et al. (2007: 116) note this form of boundary-work among laboratory staff working with animals and suggest that ‘ignorance’ is a way of displacing moral responsibility. It may be that by focusing simply on their own daily practices, researchers in this group were able to delimit their ethical concerns. However, such boundary-work may have implications for the success of the translational enterprise if, as Cribb et al. argue, ‘translational research has to be understood as a process of movement and negotiation across ethical spaces and not simply across physical and social places and spaces’ (2008: 353).
Notably, in neither domain was the fact that they were working in neuroscience considered ethically significant. When asked generally about ethical issues in their work, only the group leader and one clinic researcher raised a neuro-specific concern, which was related to obtaining informed consent from HD patients who may lack capacity. When then asked directly about ethical issues in neuroscience, the interviewees did not connect these to their daily work. CR5, for example, asserted that neuroscience created ‘massive’ ethical concerns, ‘just because it’s basically brain, and brain is person, or brain is like mind, and mind is individual, and then that’s behaviour’. However, when then asked how these issues came into her work she explained:
They don’t so much, I think, just working – so when I work at [another research centre], they’re doing research which is touching on all those areas, so the vegetative state work, the consciousness work, that sort of thing … I was deciding between two PhDs – one in Parkinson’s, and the other in vegetative state work … I went to meet a couple of vegetative state patients to see how I’d feel about working with them. And it just left me with so many questions, and just so kind of heated up, that I just thought I wouldn’t be able to work with that for three years.
Here, a boundary was drawn between more and less ethically charged areas of neuroscience. Interviewees in the lab drew a further boundary between the neuroscience involving patients and their own work, which they located clearly in the realm of basic science:
LR5: I don’t think neurobiology is anything different from cardiological research or whatever because I mean – neurology may be different, because in neurology you can try to manipulate especially the cognitive part of patients.
LR1: To me, what we do is cell work, and cells are cells and it doesn’t matter whether they’re neurons or liver cells.
In terms of professional ethics then, among these researchers professional identity was linked closely to the day-to-day practices they carried out individually, rather than to a broader category of neuroscientist. This enabled them to draw clear boundaries around what was and was not an ethical issue in their work, the main schism being between concerns in the lab and the clinic. Just as regulation creates a bounded ethical space and produces particular practices, so these spaces and practices construct ethical issues that individuals encounter depending on which space they enter. At the same time, individuals bring their own moral views to these ethical spaces, developed in other settings; hence day-to-day experiences of ethics among researchers in this group were further refined via the influence of personal ethics.
Of interest here is the degree to which personal beliefs determine practices. As shown above, both regulatory and professional spheres were used to demarcate what was and was not an ethical concern for individuals. Within this framework people also exercised their own personal ethics – the third layer of Figure 1 – by which we mean individuals’ views about what is right and wrong in their work. For a minority of participants, the regulations themselves served as their primary moral compass. LR2, when asked whether there was any research he would not be willing to do, responded:
I mean I guess the stuff that … would come to mind, would be animal stuff. But I mean there are so many regulations about what can and can’t be done … you kind of think to yourself that anything that you could ethically be approved to do, would be justifiable.
This can be seen to fit within the displacement of responsibility repertoire described by Frith et al. (2011: 10) and found in other studies (Wainwright et al. 2006a), whereby external regulation is allowed to do the moral work.
For most interviewees, however, regulation set an upper limit on what practices could be contemplated but they drew a less permissive boundary in terms of what they personally were prepared to do. As in previous studies of ethical boundary-work, we found that rather than simply defining activities as right or wrong, these researchers tended to place practices and entities on a spectrum and to draw their own personal line at a particular point. LR4 felt that animal research was justifiable for the benefit of humans, but drew a line at the species she would work with:
I don’t agree with working with monkeys or other very developed mammals … [Working with mice] can’t be the best model, but just to think about the monkey, we have to give up something, and the compromise animals can be mice or rats.
Although many interviewees drew boundaries around what kind of research they were prepared to do, none indicated that they were unwilling to work in an environment where such practices took place. This is similar to those professionals Farsides et al. (2004) label ‘tolerators’ in their study of antenatal screening – those who are uncomfortable with or even disapprove of certain practices but are willing to carry them out or to work with others who do so because of their commitment to another moral principle, such as patient autonomy. A view expressed by all interviewees in our study was that the group’s research was being done for the right reasons – to help sick people – and where people were not willing to carry out certain practices themselves, an attitude of tolerance prevailed, as seen in this excerpt:
CB: Is there any type of work going on in the group that you personally wouldn’t be prepared to do?
CR2: Probably animal work actually … I just always stayed away from it, just because I didn’t know how I’d feel doing it … I don’t think I could even, no – I like to think it goes on, I like to think people do it, but I couldn’t do it. I think that’s the bottom line.
As other studies have found, the researchers acted reflexively to create a workable space between regulatory, professional and personal ethics. People tended to work within the limits of what was acceptable to them, which in turn was within the limits of the regulation and the scope of their role. Despite the operation of these various ethical boundaries, however, there were still occasions when staff experienced a sense of doing wrong in the daily conduct of their work, pointing to a fourth moral sphere that we call tangible ethics.
Here we discuss ways in which ethics manifested in this setting that did not fit into the previously identified categories. The instances we refer to are situations in which individuals experienced a dissonance between their personal view that a particular practice was morally right, and a feeling of wrongdoing when actually carrying it out. That is, the way that ethics was actually experienced in a tangible way in the workplace was sometimes different from the researchers’ discursive construction of right and wrong. While other studies report that health professionals are sometimes upset or uncomfortable while carrying out procedures that they believe are justified (for example, Harvey and Ehrich 2011, Williams 2006), this phenomenon has not previously been analysed as a distinct form of ethics. Pickersgill’s (2012) discussion of the co-production of ethics and emotion in neuroscience refers to the embedding of ethics in the relationship between the neuroscientist and research participant. As we show here, emotion-based ethics are also produced when human relationships are not involved. We use ‘tangible ethics’ to refer to this ethical sphere located at the level of practice.
One example relates to LR4, who, as described earlier, was required by regulations to kill the HD mice before her experiment had run its logical course. Commenting on the ethics of animal work, she argued that animal welfare was sometimes given too much consideration at the cost of human wellbeing: ‘sometimes it’s too restricted, sometimes I feel it’s unnecessary effort … I think sometimes we have to focus more on treatment’. As seen in the preceding section, this researcher’s personal ethical boundary specifically sanctioned the use of rodents in research. Nevertheless, actually killing the mice – an intrinsic part of animal research – was experienced as fundamentally wrong:
CB: And are there any aspects of your work that make you uncomfortable?
LR4: Doing the procedure? Well, we have to kill the animal, sometimes, just expose to CO2, and when I kill four or five mice together, put them in the chamber, CO2 chamber, I was like, World War II, it’s like …
CB: Like the gas chamber
LR4: Yes. So I feel really bad. I avoid killing the mice like that.
CB: How do you avoid it?
LR4: It depends on the experiments, but luckily my plans, mostly I don’t need these experiments. So I try to make a design not using – but it’s inevitable. So I feel really, really miserable when I put the mice, four and five together in the chamber and turn on the CO2 gas. Oh it’s so bad. And where you do it, it’s quite dark.
The reference to the Holocaust here is a stark indication of how deeply wrong it felt to actually carry out this procedure. In this case, the scientist indicates that her tangible experience of wrongdoing fed back into her experimental design, suggesting that tangible ethics shapes as well as emanates from practice, without necessarily altering the personal ethical sphere.
A similar kind of distinction between personal and tangible ethics was experienced on occasion by two of the scientists working with foetal tissue. Their work involved dissecting six- to 10-week-old foetuses in order to obtain a specific part of the brain for use in experiments. LR1 stated in the interview that her personal view was that the foetus did not have a special status and therefore she was comfortable doing this work:
I’m fine with it … it’s just tissue that is going to disintegrate and disappear, so if we can use it, much, much better.
However, it became clear when observing LR1 at work that she did not always see the foetus as ‘just tissue’ in practice. On one occasion, CB observed LR1 trying to obtain the required piece of neural tissue from a 10-week-old foetus that had been surgically aborted. Unlike medical terminations, in which the foetus arrived whole and at an earlier gestational stage, surgical terminations reached the scientists as a mix of blood and pieces of tissue, which they had to sort through to find the foetal brain. The following took place in a room in the lab with fridges and a laminar flow hood designated for human tissue work:
LR1 takes three vials filled with tissue and a pinkish bloody fluid out of the fridge, sprays them and places them under the hood in a stand. She chats to me as she sets about emptying the vials one by one into Petri dishes and poking around in the tissue with tweezers to find foetal parts. She is looking for anything white. In the first lot it is mostly placenta, but then suddenly LR1 exclaims ‘Ay-yi-yi, I think that’s a hand’. She shows me down the microscope and indeed a small arm and hand is visible, sticking out of the bloody placental mass. (Field notes, 25 November 2010)
LR1’s reaction here seemed to indicate that the discursive boundary-work performed when categorising the foetus as like any other tissue was not always sustainable in practice. Furthermore, before CB first observed the dissections, LR1 was at pains to prepare her for what she was about to see:
LR1 Googles ‘embryo’ and clicks on the first image that appears – an illuminated pale white embryo floating on a black background with developing arms, legs, ear and eye in high definition. ‘This is basically what it looks like’, LR1 explains, adding that ‘I always used to think that the pictures the anti-abortionists use were exaggerated, but actually that is what they look like’. (Field notes, 16 September 2010)
LR1’s efforts to prepare CB for the materiality of the work again point to the tangible aspects of day-to-day ethics. LR3 also reported that it sometimes felt wrong to actually carry out the dissection:
[Today] we had a quite large nine-week foetus, which was larger than average … Everything is much more defined and easy to identify. So we were having discussions about that. And neither of us [LR1 & LR3] are that keen on actually opening up the head, do you know what I mean? It just seems like a wrong thing to do.
The finding that researchers in this group sometimes experienced a tangible sense of wrongness, a kind of embodied ethics-at-the-coalface, which was different from their personal moral views, adds weight to Harvey and Ehrich’s contention that, in understanding the moral landscapes of (in their example) embryo research, there is a need for greater consideration ‘of how such landscapes could be influenced by the actual topology of the material facticity of human embryos’ (2011: 6). We argue that, like embryos, the materiality of the foetus and of animals makes them actors in the constitution of ethical landscapes in these neuroscientists’ daily work, seen most vividly in the sphere of tangible ethics.
The identification of this tangible form of ethics in some ways challenges the argument that ethical boundary-work enables researchers to defer or bracket out ethical quandaries in their work (Ehrich et al. 2006, Wainwright et al. 2006a). While the researchers had well-defined regulatory, professional and personal ethical boundaries that allowed them to do the work, there were still sometimes troublesome ethical issues or experiences that manifested in a tangible way when the work was actually being done. This has some similarities with the ‘yuck factor’ that is used to describe people’s instinctual moral aversion to certain practices or entities, but here it was grounded wholly in experience rather than in a gut reaction to the issue. As opposed to a conflict of values that scientists needed to resolve prior to action (as might occur in the personal ethical sphere), the dissonance between personal and tangible ethics arose in action, and the two spheres co-existed, as depicted in Figure 1, rather than becoming incorporated. The role of tangible ethics only really emerged after extended discussions and observation. It is hidden under the other layers and has barely been discussed in other studies in the ethics-in-practice tradition of medical sociology.
Discussion and conclusion
This article examines the meanings that ethics takes in the translational culture of a conjoined neuroscience laboratory and clinic. We have demonstrated that participants often interpreted ethics in terms of externally imposed regulation, and that this kind of ethics did indeed play a significant role in the whole set-up and conduct of the group’s activities. In practice, researchers drew on a complex mix of regulatory, professional, personal and tangible ethics when deciding how to do their work. Building on prior conceptualisations of ethical boundary-work, we suggest that there was a particular order in which boundaries were drawn in this setting: regulation created an overarching ethical frame into which professional and personal ethics were then fitted, while tangible ethics only manifested itself while actually carrying out practices that had been defined as acceptable in the other spheres. The conceptual schema we have outlined (Figure 1) may offer a useful framework for considering the way ethics acts and is enacted in other similar settings.
We have also introduced the concept of tangible ethics. The moral qualms researchers sometimes feel when confronted with the materiality of particular experimental elements have been noted before but they have never been considered a specific ethical domain. We found that this fourth ethical sphere acted as a countervailing influence on the other three and meant that some researchers were not able to fully reconcile ‘doing a good job’ with ‘doing good’. This suggests that, depending on what practices staff are engaged in, ethical boundary-work is sometimes sustainable only to a certain point, and ultimately ethical problems have to be faced when it comes to actually doing the work. Ethical boundary-work is therefore only a partial explanation of how the moral order of the lab/clinic is configured.
Although we have offered a taxonomy of ethics, we have treated the terms ethical and moral as effectively interchangeable, which reflects the usage of our interviewees and much wider usage. Nonetheless, we have sought to make important differentiations around and under this umbrella category of the ethical-moral; differentiations that, to varying degrees, intersect with distinctions that are sometimes made – both in everyday discourse and in theoretical work – between ethics and morals. We have not made use of this terminological distinction in our analysis because we are conscious that the multiple, and sometimes conflicting, ways in which it is employed can obscure as much as they clarify, as Sayer (2011: 16-17) has recently argued. However, it is worth briefly indicating some of the ways in which the differentiations developed here relate to some versions of the ethics/morality distinction. The taxonomy can be seen as a map of the scientists’ moral landscape in the sense that Arthur Kleinman (1999) uses the term; that is, the lived experience of what fundamentally matters to people who are practically engaged in specific and local contexts, as opposed to some more abstract, reflective and would-be universalist conception of the normative (ethics, for Kleinman). Hence, for the same broad reason, our mapping is not about morality in the sense in which Bernard Williams (1985) uses the word, that is, to pick out a narrow rationalist focus on overriding obligations as opposed to a more historicised, plural and thicker interest in a complex of values, purposes and ideals (ethics, for Williams) – indeed, the latter is much closer to our guiding interest.
Our differentiations do have some loose correspondence with the Foucauldian distinction between morality and ethics (in which ethics is a subset of morality which, roughly speaking, picks out the self-making aspects of morality): in broad terms the first two categories in our taxonomy (regulatory and professional) illuminate aspects of what Foucault (1998) labels morality and the latter two categories (personal and tangible) illuminate aspects of ethics. The analysis offered here is not principally concerned with the Foucauldian problematic of ethics – the business of the subject’s relations to itself – but we believe that the distinction between personal and tangible ethics helps to contribute an empirically grounded insight into some of the processes and tensions encountered and experienced in producing oneself as an embodied ethical subject. In particular, the discovery and examples of tangible ethics in the data show how visceral-affective elements in ethical life exist alongside, and can ‘push against’, the cognitive-discursive elements otherwise articulated as personal ethics. More broadly, it enables us to show how, building on Harvey and Ehrich’s argument (2011), attending to the material dimensions of practice in bioscience is a fruitful way to move sociological empirical ethics forward.
Our study also has implications for sociological research on translation, showing as it does that the two cultures of the lab and clinic were represented clearly in this one research group. The group may become more united as the PD clinical trial progresses and there is a clearer connection between bench and bedside, but at the time of data collection the lab and clinic were functioning as quite separate epistemological and ethical spaces. Staff maintained carefully constructed boundaries between what was and was not of ethical relevance to each area and the few discussions of ethical issues that took place were also within rather than between the domains. Prior studies have described how ethical boundary-work enables individual staff to get on with their jobs without being plagued by moral quandaries on a daily basis. The flipside is that ethical boundary-work may actually impede the translation process by helping maintain the cultural and epistemological separation of lab and clinic.
Finally, it was striking that despite being conducted in a neuroscientific setting, this study did not uncover issues that might be classed as particularly neuro-ethical. In fact, members of this group did not see their work as raising any issues specific to neuroscience. There are several possible explanations for this. One is that the group’s eschewal of special neuroethical status is yet another form of boundary-work that enables them to avoid confronting the ethical questions being raised about neuroscience, by distinguishing themselves from the neuroscientists whose work raises ‘big’ issues. Another is that we focused on one group conducting research on two specific neurodegenerative conditions, with only part of their work involving patients, and it could be argued that such work is unrepresentative of neuroscience. This, however, raises the question of what neuroscience is. Tracing its origins genealogically, Abi-Rached and Rose (2010) describe neuroscience as a disciplinary ‘hybrid of hybrids’ (2010: 12), held together since the 1960s by a common ‘neuromolecular gaze’. Precisely what neuroscience means today and how it is evolving requires ongoing empirical investigation. In turn, the object of a specific ethics of neuroscience demands greater conceptual clarification. Our findings highlight the importance of not only examining the possible future ethical implications of neuroscience, but of understanding how both ethics and neuroscience are configured in the present.
The study forms part of the London and Brighton Translational Ethics Centre programme funded by Wellcome Trust Biomedical Ethics Strategic Award no. 086034. We are grateful to the scientific research group included in the study for granting access to their workplace, particularly those members who participated in interviews and observation. We also thank the two anonymous reviewers for their helpful feedback on an earlier draft.