Transcript, Meeting 14 Session 7


August 20, 2013


Philadelphia, PA


Deborah G. Johnson, Ph.D., M.Phil., M.A. 
Anne Shirley Carter Olsson Professor of Applied Ethics
Science, Technology, and Society Program
Department of Engineering and Society
School of Engineering and Applied Science
University of Virginia
Thomas H. Murray, Ph.D.
President Emeritus
The Hastings Center
Anjan Chatterjee, M.D., F.A.A.N.
Professor of Neurology
Center for Cognitive Neuroscience and Center for Functional Neuroimaging
University of Pennsylvania School of Medicine



DR. WAGNER: Our next session will focus on models for integrating science and ethics. And to kick us off we are joined by Doctor Deborah Johnson, the Anne Shirley Carter Olsson Professor of Applied Ethics in the Science, Technology and Society Program at the School of Engineering and Applied Science at the University of Virginia.

In her research, she examines the ethical, social and policy implications of technology and engineering. She was awarded the Sterling Olmsted Award from the Liberal Education Division of the American Society for Engineering Education in 2001. Subsequently, she received the Jon Barwise Prize from the American Philosophical Association in 2004. She's also served as a member of the executive board of the Association for Practical and Professional Ethics, and we're delighted to have you here.


DR. JOHNSON: Let me begin by saying that I'm pleased to be here. I've followed the Commission since its inception. I've read several reports and I'm really impressed with the ability of the Commission to craft those reports to be substantive on really tough issues.

So as I understand my task here it's to help the Commission go through the challenges of addressing ethical issues early on in technological development and throughout the development.

Although brain research is not my domain -- I've got to say that upfront -- there's no doubt in my mind that it's important to think about the ethical issues now and to continue to think about them as research and development progresses. And to think boldly about them with an eye, in my mind, to influencing the direction of development and the design of the technologies that will be developed. That will be a theme throughout.

As I understand it, the President's initiative is rightly framed without a sharp distinction between science and technology, or science and engineering, and without a sharp distinction between pure and applied research.

The absence of these distinctions will, I think, make it easier to create dialogue on the social, ethical and legal issues, since I think that ideology sometimes gets in the way. If ethical issues aren't addressed intentionally and explicitly and thoughtfully, and they aren't addressed early on, then I think the ethical aspects of the new technologies will be framed and shaped and possibly solidified implicitly by engineers and scientists, and probably through the press and public reaction. In other words, the ethical issues are going to be there whether explicitly addressed or not.

In my own recent research, I've been working on issues having to do with responsibility for artificial agents. That's just a general term for robots and bots and other computational devices. In focusing on the issues of responsibility, my work has largely been a reaction to some implications in the early literature that these technologies were being framed, at least by some actors involved, as so complex and so autonomous that the standard ascriptions of responsibility wouldn't work. I thought, and I still think, we might have a situation in which the public is told that no one -- no humans, that is -- are responsible for the behavior of devices that make these powerful and important decisions that affect our lives.

My approach has been premised on the idea that design makes a difference. In my case -- the artificial agents case -- we can design artificial agents to facilitate responsibility, or we can design them to make it harder and harder for humans to control and be accountable for their behavior. In other words, ethical issues, or at least the makings of ethical issues, are, I would argue, constituted in the design of the technologies and can be constituted with or without particular problems. And I would assume the same will be the case for the design of brain technologies, though I don't know enough to give you the examples and the details.

I mention my own work here because I think it's important, no matter what the technology, to recognize that although nature -- the brain, in my case computation, materiality -- powerfully shapes what is technologically possible, it doesn't entirely dictate the character or design of the technologies that are produced. My work on artificial agents also illustrates the importance of framing technology not just as artifacts or material objects but as socio-technical systems. Whether it's brain technologies or computational devices, technologies are systems in which results or effects are produced by combinations of human and nonhuman activity. Machines perform certain tasks and humans perform certain tasks. The design of the technologies constitutes social relationships and social arrangements. It facilitates and constrains various modes of behavior, shapes ways of thinking and materializes social values.

In the case of brain research, obviously, the technologies developed and used will provide access to the brain and in so doing will shape the nature of knowledge about the brain. The technologies will also constitute relationships between those who use the technologies and those on whom the technologies are used. And will also affect how all of us think about ourselves and others and our relationships to one another. All of this you probably know already. I'm sure you do.

Although I suspect you're aware of this, let me mention how important several programs have been in focusing attention on social, ethical and legal issues in technology, and especially emerging technologies.

To my mind, the legislation creating the National Nanotechnology Initiative was a seminal moment insofar as it specified that research under the initiative would include activities that ensured that ethical, legal, environmental and other appropriate social concerns would be addressed, and insofar as possible that these considerations would be integrated with nanotechnology R&D.

I don't know how much this has influenced the development of nanotechnology, but the legislative mandate coupled with funding has produced a significant body of knowledge. And that body of knowledge has generated a broad conversation about anticipatory governance and anticipatory ethics.

The NNI legislation created a new group of ethics researchers, but it also drew on a group of scholars who had been supported by a National Science Foundation program which -- if any of you are familiar with it -- has been variously named; its name changed many times. It was EVST and now it's STS. The NNI and the NSF program have been critically important in giving support as well as legitimacy to scholars and researchers focused on technology ethics. These researchers often work in environments in which funded research and the imprimatur of NSF have enormous cachet. This is important to keep in mind.

Nevertheless, as we've learned from nanotechnology and from synthetic biology and other so-called emerging technologies, the challenges of doing ethics early on and alongside the development of technology are significant. The most daunting challenge, which has been mentioned already a number of times, has to do with the fact that the work involves identifying and analyzing issues related to things that aren't yet particular things -- specific things.

Ironically, sometimes the reason a developing technology doesn't materialize as predicted is precisely because those involved recognize a social or ethical or legal issue that can be avoided by taking another direction in the research or the design of the technology.

A second challenge of doing ethics early on is using moral concepts that are malleable in certain ways. In my own research, the malleability of such concepts comes into play: privacy, responsibility, fairness, autonomy.

Some resist this idea that moral concepts are malleable because they think I'm asserting some sort of relativism. However, the reality is that when moral concepts are used in particular contexts and in relation to new technologies, they have to be interpreted or extended or modified in ways that are complicated. I actually have an example here that we can use. Rawls makes a distinction between concepts and conceptions. I think most of the work that applies to technologies -- and that would, I think, apply to the BRAIN initiative -- has to do with developing conceptions of sort of generic concepts.

In any case, in addition to the fluidity of technological development and the malleability of moral concepts is the challenge of getting ethical ideas explicitly into the activity of design and the direction of new technologies in useful ways.

In part, the problem is the same as for any interdisciplinary endeavor. As you all know, and was mentioned earlier, research scientists and engineers and ethicists tend to speak different languages and operate in somewhat different cultures. Here the challenge is to create space, forums, languages, legitimacy for reflection and discussion of ethical issues and in such a way that it penetrates the design and development activity.

A number of approaches with diverse labels are used for this: anticipatory governance, anticipatory ethics, values in design, responsible innovation, real-time technology assessment, discourse analysis. The overarching idea in all of these -- and I think this is probably the most useful thing to say -- is that they all involve a kind of reflexivity, that is, bringing to the research and development activity a form of reflection on itself. This stepping back to reflect can be done by those who are inside doing the research, scientists and engineers, or by those outside. And most of the people who have done this in the past have said that the best thing is to engage these two groups. Again, I'm sure you know this from bioethics.

So let me end just by saying that if I had more time, I would talk a little bit about the kinds of activities that have been undertaken in the case of other technologies. And let me emphasize the importance of ongoing monitoring of this activity not just in the early stages but ongoing. And let me stop there.

DR. WAGNER: Deborah, thank you.

Thank you for demonstrating that engineers can think ethically.

Our next speaker is Doctor Thomas Murray. He is President Emeritus of The Hastings Center where he served as President and CEO from 1999 until 2012. He's also former Director of the Center for Biomedical Ethics in the School of Medicine at Case Western Reserve University.

He currently serves as principal investigator of The Hastings Center project on ethics and synthetic biology. He received the Henry Knowles Beecher Award from The Hastings Center in 2012 and the Patricia Price Brown Prize in 2013.

Welcome and welcome back.

DR. MURRAY: Thanks very much, thanks, Jim. And hello to old friends and to other folks that I've just more recently met. It's really a pleasure to be here. So I want to thank you for the invitation and I want to thank the staff of the Commission for their help in tracking down a variety of articles that looked at the Genome ELSI Program. I've been asked, as I understand it, to reflect on the genome ethical, legal and social issues, a/k/a ELSI program. And what lessons there might be for whatever we might undertake to sort of help think about ethical, legal and social issues in a neuroscience initiative.

Well, let me do that. My history with this goes back 25 years to a congressional hearing, where I'd been called by a congressional staffer, a scientist, who said, you know, we're planning to map and sequence all the human DNA. We think there might be some ethical issues involved in that. We'll give you five minutes to talk about it. And I laughed and I said I can't do that in five minutes. And she said, you can have ten minutes -- as you have given me. If the panel is interested they'll ask questions and there will be a conversation. So I arrived at this hearing not knowing who else was going to be there. And two of my fellow panelists were Jim Watson and Victor McKusick. So I gave my testimony, and in that testimony I sort of tried to deal with what I thought were phony issues and set them aside, and then talk about the ones that I thought were likely to be real issues and divide them into three categories. So I will close my remarks this morning with those three categories.

And I said it would be really wonderful if, for the first time in human history, an effort to think about the ethical, legal and social issues actually ran alongside the science itself. If only a tiny proportion of the resources that poured into the science could be given to some such program, it would be a wonderful thing.

And as I recall the meeting, Congress talked about two things. They wanted to know -- remember, this is about '87, '88 -- they wanted to know if Japan would steal the results of the genome project. This was after the VCRs and such. And the second was ethical issues.

So Jim Watson is a savvy person, and I'm not sure to what extent that had an impact on what ultimately he proposed, but he announced that there would be a three percent set-aside for the ELSI program in the NIH portion, and it was ultimately increased to five percent.

Now, the important thing to recognize is that the ELSI genome program was never one thing. It was at least three different things. There was an ELSI working group. For the first number of years I was a member -- for five years; that was the full term you could serve. That was a group that was advising the genome -- at that point it was the Genome Center. So there was a working group. There was an investigator-initiated research operation, a typical sort of NIH investigator-initiated research, peer reviewed, etc. And then ultimately there became a staff-driven policy office at the Genome Institute. So there were at least those three components. Now, I'll mention each of those in a moment. But think in terms of what the right model is for an ELSI program. And I don't claim to have the right answer. I'll just mention three possibilities. One -- the metaphor I've often used is thinking of the people who do ethics work as the fellow who follows the elephant in the circus parade with a shovel, and the job is to go in and clean up. It would be nice if you could at least walk alongside the elephant. It's still an elephant and you're still a lot smaller. You might try to nudge it or lead it. So that's one model. The second model -- and I think this is the model that at least some scientists would prefer -- is to be a kind of handmaiden, or, if you know the sport of curling, the person who goes in front and sweeps the obstructions away so that the curling stone can go where you want it to go. They liked to be able to go where they wanted to go, and they liked the ethics types to make sure that it got there safely.

Those both have their merits. A model that I have proposed -- I borrowed the concept from Michael Walzer -- is the prophetic model, and here I don't mean wondrous predictions. I mean, rather, Old Testament prophet work, which is figuring out when your community is falling short of its own values and telling it so. So the work of an ethics panel within an ELSI program can be quite critical, including critical of the activities or at least the direction they see the science headed in. And some combination of those is probably right, but you may have better ideas for models. Those are the three that I would mention.

Now, the working group did at least three things of note. It had an insurance task force that looked at genetic information in insurance. I ended up chairing that task force. We did a report on genetic information and health insurance. I'm happy to talk about that later. I think that was a solid piece of work. NIH had no idea what to do with it in terms of policy. They weren't even sure they wanted to acknowledge it. Later on they embraced it. A second thing we did was we saw a number of different institutes funding research on cystic fibrosis testing, but they were not coordinating it. And so we sort of corralled them and said you need to collaborate in a CF consortium.

And a third accomplishment was we pushed for an administrative clarification to the Americans with Disabilities Act, so it didn't have to go through Congress. So having a genetic predisposition to disease would now be regarded as a disability under the ADA and therefore afforded its protection. And that turned out to be significant for the Equal Employment Opportunity Commission in the Burlington Northern case.

A tension all the way through, by the way, was whether genetics and genetic information should be seen as just another kind of health-related information or whether it should be seen as somehow special and exceptional. I attacked genetic exceptionalism, which did not make me popular in some circles of the genome project. I would say the main accomplishment of the genome professional staff was the Genetic Information Nondiscrimination Act. It's a statute. They're quite proud of it. I'm ambivalent about it.

Now, one of the chief criticisms directed against the investigator-initiated research component was that it was insufficiently effective in policy shaping. Here I'm reminded of the scene in Casablanca where Captain Renault goes in and says, I'm shocked, shocked to discover that gambling is going on in this place. Anyone who thinks the way to make policy is through investigator-initiated research is culpably naive. That's not what it does best. If all you want is policy, then you shouldn't do that. If, however, you're interested in fostering a dialogue and creating a creative tension among a community of scholars in science, engineering, the humanities and the social sciences, it can be a wonderful instrument. And it will expand our shared knowledge through basic research in the humanities and social sciences. Just as basic research in the neuroscience initiative is foundationally important, I would argue the same would be true of research in the humanities and social sciences here.

But when you design such a program be very mindful of the institutional and professional forces that will bear on any such research program. Over time those of us who did the conceptual normative work felt that genome ELSI had become less hospitable to that line of work and was more interested in funding empirical research which seemed more comfortable which again was out there sweeping so that the stone could get to where you wanted it.

A central question is what an ELSI program should aspire to accomplish. And the second question is how can we tell if it's successful? And here I list four categories. One is public policy. What I found is astonishingly often people like me were called upon to be wet blankets. And the comments about hype earlier are very relevant here. We would get calls, sometimes from Congress, sometimes from journalists. They would say, well, what's going to happen with A, B or C, and we would say, well, it's not here yet, or don't expect immediate wondrous effects here.

A second would be science policy, and here it might be about priorities. It might be about the conduct of research, human subject protection, fairness and avoiding exploitation. There are a number of things there.

The third would be public engagement and understanding. And a fourth would be sort of mapping the terrain: identifying issues and deepening understanding of those issues.

Now, I see that my time is just about up. So let me -- I'll hold my recommendations for later, if you want them. And mention the three categories that seemed relevant to me 25 years ago about the genome project, because they still seem relevant.

One will be the avalanche of information. There was genetic information back then; at that point you had merely a few pebbles careening down the hill, but it's really picked up volume now and we're near the $1,000 genome.

In the brain sciences, I think we're way beyond where we were 25 years ago in genome sciences in terms of this avalanche of information -- understanding it, assessing it, using it in ways that help, not hurt, people.

A second major category is enhancement. Will these technologies be used to affect human physiology, anatomy, interactions in ways that may be wise or unwise?

And the third was genetic apologia. That is, the use of scientific information sometimes to explain away things like enduring differences among populations, sometimes to explain away behavior, sometimes to rationalize social policies or practices. And we need to be mindful of all three of those.

The one that I did not anticipate, but that has turned out to be quite important and was mentioned earlier, is issues of intellectual property and commercialization, and that ultimately got added to the genome agenda.

Thank you very much.

DR. WAGNER: Thank you, Tom. And we will, in fact, want to get back to your recommendations on this.

Our final speaker is Doctor Anjan Chatterjee. He is Professor of Neurology and a member of the Center for Cognitive Neuroscience and the Center for Neuroscience and Society here at Penn -- the University of Pennsylvania. He's on the editorial boards of numerous professional journals and was awarded the 2002 Norman Geschwind Prize in Behavioral and Cognitive Neurology by the American Academy of Neurology. He's a founding member of the Board of Governors of the Neuroethics Society and is the President of the Behavioral and Cognitive Neurology Society.

His research focuses on spatial cognition and language, attention, neuroethics and aesthetics, and we're delighted to have you here. Welcome, the floor is yours.

DR. CHATTERJEE: Thank you very much. Traveling here was grueling.

So when we think about translational medicine, typically this is the axis that we think about: advances in basic science at the bench get translated to the bedside, and questions that arise at the bedside guide the bench research.

And advances along this axis are providing deliverables and I'll be very concrete about the kind of deliverables we're talking about.

A month ago I was the neurology attending on the ward here, and literally about a hundred yards from here I walked into a patient's room. She was in her mid 50's. She was disheveled, she was manic, she was delusional. There were papers strewn all over her bed and across the floor. And she had even scribbled on her own body with a ballpoint pen all sorts of messages. And here she was on a neurology ward. What she had was a particular immune-mediated encephalitis where she was producing antibodies to the NMDA receptor. And this is a disorder that even 10 years ago we knew virtually nothing about. And now we have a fairly good sense of the mechanisms involved and how to treat it. Okay, so this is an example of what I would suggest the advances in biomedical research and translational medicine have the potential to deliver.

But what I would like to do in this talk is to expand the notion of translation -- that it is not simply between bench and bedside, and that there is a kind of translation of information that extends beyond that. And much of what I talk about has been touched on at various points in this morning's talks. So one is the translation of this information to commerce. And in that context we can think about the ways in which there are synergies. Certainly every research university right now is trying to partner with industry. There are certain synergies, but we shouldn't be blind to the kinds of conflicts that arise. And one is the way in which neuroscience can be sold. We've heard about hype. And so a couple of examples of the way neuroscience is being sold right now.

The first category, I would suggest, is things being sold where there is a plausible hypothesis for why they might be effective. And brain games are a good example of this. Probably everybody around this table has some little concern, as the gray hairs increase on your head, about the possibility of developing a dementing illness. Brain games are a big business right now. And in fact, there are applications for mobile devices being used. And while there is a plausible scientific rationale for this, the evidence in support of their efficacy is thin at best.

The second category might perhaps be regarded as particularly egregious, which is the use of certain neuroimaging technologies -- in particular, I'll talk about SPECT -- as they're used in the diagnosis and management of neuropsychiatric illnesses.

Now, there are clinics across the Country that are using this and presenting it with a very compelling logic. And the logic is the following. If you had a broken bone and went to see an orthopedic surgeon, you wouldn't trust them if they didn't get a picture of your bone before they operated on you. So how could you possibly go to someone who is trying to fix your dysfunctional brain if they don't do functional brain imaging? It just doesn't make sense. And of course, the logic of this is completely flawed, but this, again, is big business. There's a whole category of clinics around the Country that are using this. These are not paid for by insurance companies, as they should not be. But people are willing to pay several thousand dollars for this kind of technology. So that's one way in which we can expand the notion of translation into commerce.

The second, which has also been touched on, is how this information, as it grows -- and it will grow -- gets translated into cultural norms. And one example of this -- there are many, but the one I will touch on, which has been brought up several times today, is this notion of enhancement. And we have ways in which we can enhance our motor systems, motor learning, attention to some extent, memory to some extent. We can modify our emotional reactions to situations. And much of this discussion has been in the context of pharmaceutical interventions. But what is coming down the pipe is the way in which noninvasive brain stimulation will be used in exactly the same way. In the academic literature, there have been an increasing number of papers talking about the ways that you can manipulate our abilities using noninvasive brain stimulation. These are cheap. These are easy. And in fact, even in the last couple of months there has been a company marketing devices that you can buy, completely bypassing any kind of medical or institutional venue, and it is being framed as wearable technology as opposed to a medical device, right. So you can go out and get your Google glasses and your little TDCS machine, wear it on your head and do what you want to. And I'd point out, for those of you that might be concerned about, for example, doping in sports: it is not at all obvious right now how you would monitor this, right. It's not the same as actually doing a drug test. It's not clear how this would be monitored if you think that there is a problem with this and it should be monitored.

And the kinds of ethical issues that arise -- these have been discussed certainly in the context of pharmacology, but I think they apply to these other ways as well -- have to do with safety. There are questions of justice and fairness: who has access to this. There are questions of potential erosions of character, if the link between effort and achievement is undermined. And there are questions of autonomy: could it be the case that under some circumstances we actually require people to use enhancements, with the argument that it is for the greater good?

So the question I think you can ask is what, if anything, should we do about this, given that the translational information in the narrow sense is going to advance, and what do we do about these more broadly conceived notions of translation.

The standard response would be regulation, and this is a question of exactly what we regulate. Regulations around the development and implementation of drugs are reasonably well established. But things like devices, less so. And if devices are regarded as a fashion accessory, such as wearable technology, then it's not clear who actually does the regulating.

With diagnostics such as the SPECT scans, it is also not really clear how the regulation happens. And I think, in general, the issue for regulation always is getting it just right -- the kind of Goldilocks principle, which is you don't want so much that you stifle innovation, but you don't want a situation where there are no safeguards.

But the other way to do this, I think, is something that at least I feel more comfortable with, and perhaps many people on the panel do as well, which is education -- most of us are educators of some sort. And I would suggest that there are several ways that we think about education. 30 years ago when I was a medical student here at Penn there was no ethics education. It just wasn't built into the system. Since then there has been greater access to ethics education for postgraduate education, for people who are studying neuroscience, for people who are in medical schools. But I suggest it's still very clunky. If you get on a website and you answer a bunch of questions, there isn't the kind of deliberative engagement with ethical issues that you would think would be desirable. And I think part of the reason is that we do not think of ethical deliberation as a core competency for what it means to be a scientist and what it means to be a clinician. And until we establish that, I think we will continue to have these sort of clunky ways of doing things.

And then finally, the question is how do we get the public engaged and the public educated? Earlier Doctor Gutmann mentioned public assurance. I think one way to get public assurance is to have public engagement. And the only way to have reasonable public engagement is to have the public educated, so that when they're given this logic of the orthopedic surgeon and the psychiatrist, they look at it and say, this doesn't make sense. And how do we do that? I would suggest that at least one piece of it is that people who are working neuroscientists, working clinicians, need to engage with the public directly. And at this point there is just no incentive to do so.

In my own case as an academic physician here, there's no incentive for me to engage the public. And so I would suggest that, as far as policy matters, that would be an important way forward, because a lot of this will play out based on societal and cultural norms, and the only way we can get to a good place with that is to have an educated populace.

DR. WAGNER: Doctor Chatterjee, thank you.

The three of you have given us a lot to squeeze into the little bit of time that's left for this preliminary discussion before we go to the larger panel. But I do want to open the floor for questions directed specifically to one or more of our three.

Yes, Anita.

DR. ALLEN: Tom, I'm your Forrest Gump, I think, because I was sort of hovering around ELSI and the national advisory committee on human genome research. I was for a time the panel's ex officio member of the ELSI working group, which is a subcommittee of the larger group that was funding all the better and faster sequencing.

And I have a memory, maybe it's a false memory, but I do remember your insurance group and how excited we were to be able to address the implications of the BRCA1 and 2 discovery, and that was great. And I don't recall the cystic fibrosis episode or component, but here is what I do remember. It goes to the larger question about how we integrate ethics and science or fail to do so.

But what I seem to remember is that the reason the ELSI working group was ultimately disbanded in favor of a staff-driven policy group was that the working group was pushing to discuss hot button questions about genetics, such as behavioral genetics, the genetic foundations of intelligence, the genetic foundations of criminality. And because we're talking about the Federal Government, these weren't topics that the Government was eager to have grappled with. But to some members of the ELSI working group these were really important questions, just as important as insurance and disability rights.

Do you have any memory of this and if you do remember this, what is the lesson in that for whether or not Government sponsored science can include meaningful, important, broad ethics?

DR. MURRAY: Terrific question, Anita.

So the working group was founded about '90. And Nancy Wexler was the Chair. Nancy was famous for her work on Huntington's disease.

DR. MURRAY: And by '95 we were gone, and we were replaced by Lori Andrews as the new chair and a new group. So that's ultimately when it was disbanded, yeah.

But I was very much in contact. And your memory, I think, is correct. It was a desire to press some controversial issues that made people uncomfortable. So I'll give you one of my recommendations now. If you are interested in keeping an ELSI program vibrantly engaged with issues that might be controversial and might be more policy relevant, you want to create a variety of venues. You shouldn't attempt to rely upon the investigator-initiated research program; that's not what it does best. You should have a variety of venues: commissions, panels, ad hoc panels, advisory bodies, executive offices. And they should emphasize inclusiveness, multiple voices, transparency and policy savvy. They ought to have those things. And be aware that there is going to be tension and there is going to be conflict, and the institutional pressures of the Government are always going to push against that, to bury the controversial stuff. And you have to have different ways of funding these different things. As an ad hoc task force we could do controversial stuff, the insurance task force, right. I had to beg for money for every meeting and usually got it. But we managed to do a report.

So I would just say we need many voices and many venues if you want to stay engaged with controversial issues.

DR. JOHNSON: So I agree with what Tom said, but in the nanotechnology area and in my own corner of technology studies, there's been a push to get much closer to the point of contact. That is, to embed people in the research labs, where you both develop their expertise with the technologies but also where they can simply ask questions. So I just wanted to add that.

DR. WAGNER: Point well made and I think it's one we want to remember. There may be many fronts on which to address the issue.

DR. GRADY: Thank you all three of you.

My question kind of builds on that a little bit. I was interested, Doctor Chatterjee, that you mentioned regulation and education, but you didn't mention research. We live in a world, of course, where it's hard to get foundational bioethics research funded, and I think it's actually getting increasingly harder because money is tight. And yet this seems to me an area where that kind of research might be very valuable. So I wonder, both from your perspective on translation and from Tom's perspective on what happened in ELSI, and from the two initiatives that you described, the NNI and the NSF project, what you think about foundational ethical research and where it should be supported, who should support it.

DR. CHATTERJEE: I think it should be supported. You know, there are different ways of thinking through what that research might entail. One is the kind of conceptual clarification and laying out of ideas, scholarly but not empirical work, right. But I also think there is a role for empirical work here, particularly to understand what the prevailing societal norms even are, how people are thinking about this. And I think this ties back into the notion that public engagement is absolutely critical, because many of these issues will be decided by societal norms. And until there is some consensus, or at least clarity, about what those societal norms are, it's not clear how we'll proceed.

So I'll give you an example from sports. Most people don't like doping in professional sports, right. There's all this stuff about baseball going on right now. On the other hand, if you look at bodybuilding competitions, everybody dopes. It's completely accepted. There are separate categories for people who do bodybuilding without doping. But doping is accepted. So how do we arrive at those places, what are people thinking, and how do you educate them? There's certainly a place for that kind of empirical research, and not enough of it happens.

DR. MURRAY: I was avoiding the sports thing because I'm the chair of the ethics panel for the World Anti-Doping Agency. A small thing, but one I'm deeply embedded in. And I'm writing a book about this. Ultimately the difference between bodybuilding or power lifting and Olympic sports and major league baseball has to do with the meaning of the activity. And that's a very complex and socially embedded thing that one has to work out. So I think that's important work to be done. It doesn't happen in a flash; it needs research. It needs thought. It needs that kind of scholarly dialogue where you put an idea out there and somebody else knocks it down, and you reformulate it or you give it up. That's why I think it's essential to have a research program: to create a scholarly community that will be well-informed about each other's ways of thinking. This is what The Hastings Center has been masterful at for decades. You have scientists, engineers, legal scholars, practitioners, philosophers, theologians, all engaging with one another.

DR. GUTMANN: I wonder if this would be a friendly addition. I would urge us not to make a sharp distinction between empirical and basic ethical research. Because, for example, you can't make sense of the empirical distinction between bodybuilding and doping unless you entertain the idea, which I will just throw out, that another distinction here is that doping is against the rules of some sports. And maybe it should be and maybe it shouldn't be, but athletes who dope are trying surreptitiously to violate the rules. And I'm not sure, as an ethicist, as somebody who has studied moral and ethical philosophy, that there's a platonic ideal out there saying you should have a rule that you can't use these drugs but you can use a lot of other drugs. But I am sure, as somebody who has studied moral, legal and political philosophy, that it is really bad for organized sports to have seemingly great athletes, who are role models for young children and older people alike, violating rules and winning by cheating. And the reason I go on about this is that when we talk about basic ethical research we are often funding research that doesn't use empirical examples. And when we talk about funding empirical research we're often funding research that doesn't entertain ethical questions. And that's the same really destructive dichotomy as the extreme dichotomy between brain and mind, which is the very subject of our ethical investigation.

So I just felt called upon to make that statement, which I hope you agree with. But if we're going to call for, which I think we should, the funding of some kind of ELSI project, it should do two things. One, I would very much take Anjan's point that it has to be an educational ELSI, not just scooping up the manure or clearing the way, but having the actual researchers be educated and feel that they're ethical engineers and ethical scientists.

And secondly, that it not have this stark dichotomy, which is a false dichotomy, a destructive dichotomy, between basic ethical research and empirical research.

DR. MURRAY: Well said, Amy. At its best, the kind of interdisciplinary research which The Hastings Center helped pioneer, and which often occurred in the ELSI project, did just that. It brought people from all these different domains, the empirical and the conceptual, together in a very creative way. There was always a tension, because we go at the world in different ways. We have different ways of thinking about the world. We have different methodologies. Coming to appreciate the insights that those different frames of mind and methodologies can contribute to the large practical question is wonderful. It's a joyous occasion when that happens.

But the institutional pressures of a peer review program run by a scientific agency push towards the empirical. So if you want to keep that tension alive, and if I leave you with nothing else, keeping that creative tension alive is the point I want to drive home. It's going to require continual work and maintenance.

DR. CHATTERJEE: If I can just address that. Of course I agree. And I think this is similar to doing cognitive neuroscience research, for example, where the data just doesn't appear, right, regardless of our faith in big science. It's not going to just deliver understanding. It's predicated on having a pretty decent theory about what you're testing, and I would say this is analogous in the sense that you really need to have a firm conceptual base to even know what to look at empirically. And the kind of dialogue that Tom talked about is absolutely critical.

DR. WAGNER: Just a quick clarification. Amy slipped in something that I thought was pretty significant when she was talking about the kind of ELSI we might entertain as a Commission. Originally, Tom, you had talked about it serving a prophetic role. But what she slipped in is that it should also serve an educational role, along the lines, it seems to me, Deborah, of what you had been talking about, to ensure that even at the basic science or engineering lab bench there is awareness and commitment.

DR. JOHNSON: I'm not sure I understand the discussion that's going on, frankly, because I'm worried about the ELSI model, which I don't in any way want to critique. And this goes back to your question: it depends on what we're trying to do here. If what you want is a group of people who keep the issues alive, write about them and think about them, and develop expertise in neuroethics, that's one thing. But if what you want is to actually impact the initiative, you've got to get inside. You have to have people who are in an enclave of expertise but who are willing to go in. My mission in life has been to impact engineers, and so you have to get in there with them.

Now, there is the problem of being co-opted once you're in there. So you need both.

DR. GUTMANN: I think there is a misunderstanding here. I'm going to call it an educational ELSI because I just think it's important that it educate the public and educate practitioners. And while it is important that it exist alongside individual experts and ethicists embedded in enterprises, those individual experts embedded in enterprises will be not only possibly co-opted but just isolated if there isn't a collaborative group that is moving forward interesting and accurate practical research in neuroethics. Otherwise, the whole understanding of what an ethical enterprise in neuroscience is will stagnate. Although the basic high-level principles don't change over millennia, the actual application, as you said, the conceptions that you use to move it forward, needs to keep up with the context and the science. So I don't think these ought to be seen as opposed. One, in fact, even more so, is dependent on the other. There's a symbiosis here.

DR. WAGNER: Tom, we'll give you the last comment in this session.

DR. MURRAY: I think this is all very clarifying. And in fact, an ELSI program -- I tried to make this clear, but maybe I didn't make it as clear as I wanted to.

You will have multiple goals. You may want to affect public policy. You may want to build a body of research and create a community of scholars. You may want to do public engagement and education. Those can be synergistic, but they require different strategies to achieve them individually. So I really want to lay out a challenge. If there is going to be a kind of ELSI program for this neuroscience initiative, it's 25 years later. Take advantage of the changes in new media and social media. Find ways to engage and educate the public that we just did not have the possibility of doing when the genome ELSI program began. It seems like the possibilities are so rich. It's probably going to require a 20-year-old to figure out how to do it, but it seems like a wonderful thing. So don't fall prey to the deficit model. The deficit model says: if we can only explain it better, if they would only listen to us better, then they will understand what's going on and why they should let us do it. That's not the way to go about it. We have to really engage, and listen as well.

DR. WAGNER: I want the three of you to stay right where you are even as you accept our thanks for your presentations.

This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.