Transcript, Meeting 17, Session 8 and Concluding Remarks

Date

June 10, 2014

Location

Atlanta, GA

Presenters

Round Table Discussion

Transcript

DR. GUTMANN: Welcome back. Thank you again. We have a roundtable now, and, as is our tradition, we want to get input from you as to what you think is the single most important issue that we should deal with, whether it be a finding or recommendation in our report. You understand we're going to deal with more than one issue, and I'm sure you have multiple issues you care about, but just to get us started, what is the single issue most in need of the Bioethics Commission's attention as we consider the ethics of neuroscience research on the one hand and on the other the potential applications of the results of that research? To put it in shorthand, and it is just a shorthand, we are thinking of the ethics of neuroscience and also the neuroscience of ethics. I'll begin at this end with Dr. Mele.

You didn't make eye contact and I called on you anyway.

DR. MELE: I'm so handicapped in answering this question because I'm not an ethicist. I don't even ever think in a theoretical way about what's right and wrong. I just go by what my parents taught me when it comes to ethics. So can I take a pass? Is that allowed?

DR. GUTMANN: Sure.

DR. MELE: Okay. I'm doing it.

DR. McGINN: I wanted to make a meta comment first about the question you posed to us. You said what's the single most important question we should attend to when we are thinking about brain research and the applications made thereof. I just wanted to point out that there's an assumption underlying that question, namely that the ethical issues arise from technological applications coming out of the science. I think equal attention should also be given to the technologies that are developed which make the science possible. That's part of what I thought was interesting about speaking about neurotechnology-enabled brain research. In fact, the BRAIN Initiative stands for Brain Research through Advancing Innovative Neurotechnologies. At least that acronym in the Obama initiative seems to recognize that there's an important bidirectional relationship between the neural technologies and the brain research.

DR. GUTMANN: So are you making a recommendation or just making a meta observation?

DR. McGINN: Well, I'm saying that when you do address ethical questions, you should also address questions directed at the technologies that make the science possible, because in the era of big science, or expensive science, some science is only possible when the technologies make it possible. And if there are serious issues raised by those technologies, constraints can be put on them out of concern with what kinds of scientific uses might be made of them.

As far as my own personal opinion about what's the most important, I guess I would nominate two things, but they are both obvious. One is consent, given all the people with vested interests in getting on with the work who are tempted to be perfunctory about consent scripts and voluntariness. It's got to be rigorous, and maybe done by third parties.

And then the other one is the notion of harm again. Harm should not be understood as simply physiological; we should also think seriously about the potential effects on psychic wellbeing.

DR. GUTMANN: Thank you. Dr. Lin.

DR. LIN: On the question of what would be most important, what I'm about to say is personal; it's not a view of the Academies. It should be, but it's not. This actually came up in a sidebar conversation with Christine. Mostly, the consideration of ethical issues in science is seen by scientists as a block, as a bad thing: somebody is going to stop me from doing what I want to do. In fact, as we tried to grapple with the history of ELSI in science, we couldn't find really important examples where important research was motivated by ethical concerns. There are probably some. You could argue, for example, that research on the atomic bomb was in fact driven by positive ethical concerns: we didn't want Hitler to get it first and win the war.

But mostly when people think about ethics in science, and certainly the practicing scientists -- this was the reason why we emphasized lightweight processes -- scientists see consideration of ethical issues in some formal sense as an impediment to their work. If you can find a way of turning it around to where ethics can support their work, that's really an important thing to do. In the language of economics, the incentives right now are misaligned for thinking about ethics in any serious way. If you can change, quote, the market, I think you are in much better shape.

DR. GUTMANN: Thank you. Dr. Greene.

DR. GREENE: I'm not qualified, I think, to put my finger on a particular policy question. But speaking as an experimental psychologist and a neuroscientist, the take-home message I would give is one that I spent a lot of time on in my original remarks, although that's not where our discussion went. It comes back to the camera analogy and the idea that we have automatic settings and manual mode; we have gut reactions and we have the ability to reflect.

And I think that we are too ready to trust our automatic settings, too ready to trust our gut reactions about things. As the camera analogy suggests, most of the time our gut reactions are pretty good; it's not that we should get rid of them. But when it comes to bioethical issues, which really force us to confront things that our brains were not designed by biology or culture or personal experience to handle well, or at least that we can't count on them handling well, we really have to be willing to put our gut reactions aside and think about things in a more reflective way. And I would urge you to think specifically about the consequences of different policy choices.

DR. GUTMANN: Thank you for coming back to that, because that is a very positive point. It feeds into something we've taken great pains to do: not simply focus on the thou-shalt-nots of ethics, but on what the potential of science actually is. Is it the case that what you're saying, as you also said at the beginning, is of a piece with the point that it should not be just neuroscience but behavioral science more generally? So Daniel Kahneman's Thinking, Fast and Slow is of a piece with this: there's evidence of how well our fast thinking works in many situations but not in many other important ones.

DR. GREENE: Right. And to add to that, I would say that it's really the behavioral research that's primary and I view the neuroscientific research as just a piece of that.

DR. GUTMANN: But I can say, tooting your horn and that of other neuroscientists, that it's an important piece, and it may become an increasingly important piece, but it is a piece of behavioral research.

DR. McGINN: May I make a brief comment?

DR. GUTMANN: Please. And then I'm going to recognize Nita, who was next on the queue earlier.

DR. FARAHANY: I got my question answered.

DR. GUTMANN: Okay. Christine then. Please.

DR. McGINN: Just a brief comment in response to Professor Lin's comment about ethics being perceived as a barrier or something that blocks. There are several sources of that. I've had concrete experience of it: when I gave a presentation on ethical issues related to nanotechnology to the heads of 13 nanotechnology laboratories at an NSF meeting, there was incredible initial resistance, because they perceive ethics to be (a) religiously based, believe it or not, or (b) equivalent to moralism. They put it into the camp of the subjective; it's not data driven, this kind of thing. So there are really some stereotypes that scientists have about it.

Then the third confusion is the belief that in doing ethics you're somehow trying deliberately to change people's behavior, as opposed to letting people know what some of the issues are so that they can't plead ignorance of those things downstream. So there are a lot of stereotypes about ethics that scientists and engineers have. They can be combated, but they rarely are.

DR. GUTMANN: Steve.

DR. HAUSER: Thank you. Maybe I'll go back to Dr. McGinn's comment that clearly alerted a number of us: could we use imaging to rank and grade our brains? Yet I do this every day in tracking disease. And we always judge each other in many ways; we rank each other formally through other means, and we rank our cognitive performance formally and make judgments about that. So my question is: how are emerging imaging technologies more than incrementally changing this landscape?

DR. McGINN: First of all, if you're asking for a technical answer, you're asking the wrong person. But I have several responses to your question. One is that old ethical wine in new technological bottles can still be very important. So it may not be a fundamentally different ethical question than the question of how people should relate to information about our genetic natures. But until we passed the House bill providing that employers could not utilize genetic information in making employment decisions, it was open to people to use that now-accessible information about genes to make such discriminations between people.

By the way, I have found it really interesting that many, many people, the majority of the people that I surveyed, were of the opinion that unless an issue was unique to a particular technology, it was not worth paying attention to -- tell me what's different about this, as if to say that if there's nothing different, then it's not really worth paying attention to. That's where the metaphor came from: old ethical wine in new technological bottles could still be worth savoring. Or, you know, whatever you do to wine.

So that's sort of my response to you.

DR. HAUSER: Maybe I can just follow up a little bit, because you've written about the obligation to limit distortion as we communicate the value of information.         

DR. McGINN: For example, one of the questions on our most recent survey had to do with whether researchers have an ethical responsibility if they know that a journalist or media person with whom they've cooperated is about to engage in what you might call hype or exaggeration or distortion of their most recent finding. Even if it's to their benefit, do they have an ethical responsibility to do the best they can? Obviously they don't have the ability to control it, but should they try to do something to rectify it? Or is it ethically permissible to acquiesce in that distortion? I think there is an ethical responsibility to act. One of the reasons is that if people acquiesce cumulatively in distortion, and the people who are getting the money and whose results are being trumpeted don't deliver the goods, it can really challenge the credibility of the enterprise.

That's the other reason, by the way, speaking to your point about incentives not being aligned: I think ethics can be a really important and good thing for the scientific enterprise, if for no other reason -- this is the reason why I think NSF has insisted on it. They don't want it to be said about them downstream, if something were to go wrong, that they did not attend upstream to the ethical and social issues. So it's a form of rational insurance policy.

DR. GUTMANN: It actually flows from Josh's point: beware of gut reactions rather than the things you would do if you were thinking more slowly. The immediate incentives may be to avoid ethical issues and just get on with the narrow science, but one major ethical lapse, especially but not only in an emerging science, can derail its progress for a long time. So being proactive, as we've called for, and integrative in ethics makes practical and not just ethical sense if you're committed, as we are as a society, to moving scientific progress forward. That doesn't speak to the ranking of brains.

I think Steve's point on that is that we do hire people in part on the basis of their cognitive capacities, and that has an implicit ranking in it. And the more we know about people's cognitive capacities -- we shouldn't do it falsely, based on irrelevant or false data, but I think you're worried about a kind of rise of the meritocracy. Do you know Michael Young's The Rise of the Meritocracy, where the whole society is ranked solely on the basis of some meritocratic ranking?

DR. McGINN: Insofar as you are suggesting, and I'm not sure whether you are, that this is déjà vu all over again --

DR. GUTMANN: No, I'm not suggesting that.

DR. McGINN: I think the fundamental difference here is that these technologies, and the brain research that they might make possible, might allow us to have information about brains which is more than simply guesses or hunches or observation, and that can be seized upon by individuals to make discriminations when it comes to allocating resources, for example. And that's a different deal than relying upon hunches, or giving tests, or judging how clever somebody is in conversation.

I'm also concerned about the social constructions that might be made of the intelligence that we are allegedly able to measure with these new technologies -- I see that as opening a can of worms.

DR. GUTMANN: Let me just follow up on that, then, for the record. A lot of science has potential good as well as potential bad. Finding out more about the human brain certainly falls into that category. There's some science -- the development of the atomic bomb was one example -- in which many of the scientists engaged in the project said it was only because we had an enemy of the sort that Hitler represented that they were willing to do it. They said that. But that was because they were developing a weapon of mass destruction that they could not imagine building if there weren't an enemy of the sort of Hitler out there.

Everything that we're talking about in neuroscience, I think, has potentially some very good applications as well as potentially some bad ones. And I'm wondering whether we're correct in our primary concern that we enable the progress of this science with minimal constraints, the minimum being set by what it would be ethically wrong to do. Go ahead.

DR. McGINN: One brief comment about your atomic bomb example. After we knew that we had basically defeated Germany, so that was no longer the original animating motive for the development of the atomic bomb, only one scientist resigned from the Manhattan Project: Joseph Rotblat. That's well to keep in mind, as is Oppenheimer's famous phrase about the "technically sweet" problem.

The notion of momentum that can attach to a scientific endeavor, a scientific cum technological endeavor, is not to be forgotten, it seems to me.

And then I just had one brief response to your comments.

DR. GUTMANN: Right, but you are not suggesting anything that's now being done in neuroscience. I mean, if you are -- I'm not asking this as a rhetorical question -- tell us, because I don't know of anything being done in neuroscience of that sort --

DR. McGINN: No, I do not. But one little observation I did have, suggested by your comment: instead of thinking about neuroscience en bloc, it might be useful to break that up and think about the neurotechnologies that make possible various kinds of brain research. So rather than talking about brain research, good or bad, think about different subdomains of neurotechnologies: those that make possible this kind of inquiry -- for example, alleviating an aneurysm or combating epilepsy -- versus other ones which might be more diagnostic.

DR. GUTMANN: Got it.

DR. McGINN: So breaking it up in terms of technologies can help one avoid the usual treatment of it as a block.

DR. GUTMANN: We are not treating it as a block. We are just speaking in shorthand. But there could be funded research on that, so that would be important.

Dan. Okay. Go ahead.

DR. LIN: You perhaps sought an example. There is some neuroscience research now that seeks to understand the neuroscience of trust, of interpersonal trust. It's certainly an interesting problem, and you can certainly imagine ways of manipulating perceptions of trust as a result of it. Can I think of good ways of doing this? Sure. I can think of really bad ways of doing it also. And I'm sure there are many, many other examples of that kind of thing. I think the sort of thing that Bob is referring to, and I don't want to put words in your mouth, is that to the extent possible we should think about what the downside is and try to establish blocks against it -- not to suppress the research in any way, but at least to anticipate that this is a possibility.

DR. GUTMANN: We've got it and I appreciate that. One of our colleagues who is a senior adviser to the Commission, Jonathan Moreno, has written a lot on the topic of potentially threatening uses as well as the good uses. Dan.

DR. SULMASY: In some previous work we've talked about having a sort of fact check for claims made, for instance, in synthetic biology, et cetera. I'm wondering whether there would be room for a sort of exaggerated-neuroscience-implications-check dot com, or something. And I wonder, beyond free will versus mechanism and free will versus determinism, which we spent a lot of time talking about, whether there aren't other things that we might want to think about. If you have other suggestions, I'd like to hear them. For instance: mechanism versus supervenient mind, Aristotle versus Kant and Mill, hardwired for altruism, hardwired for determinism. Are those the kinds of claims that in all of your views need to be debunked, or are they reasonable claims that ought to be out there and respected?

DR. FARAHANY: Let me say one thing to that, which is that the Society for Neuroscience has started a blog that does fact checks on a number of things. They haven't gotten to a free will one, but they've done neuroscience in the courtroom and quite a few other over-claimed topics, which has been terrific. They reach out to other experts, have them do additional fact checking, and update it. It's really a great resource and one that I think is a really welcome addition.

DR. MELE: I don't think I can answer that question directly, but I've done a lot of interviews with journalists because of the big grants and other things. What happens is they'll keep asking until you say the kind of thing they want to hear. A lot of people don't know that, so it might be nice if we had information for scientists about how to conduct an interview -- just tips and advice. I actually helped an alcohol marketer do a responsible drinking campaign; I did TV and so on, and I had to go through a little bit of training for it. It actually helped me later on when I was doing interviews about the grants. If you don't say certain things, then they won't report them, and the general news will be more accurate than it would otherwise be.

DR. GUTMANN: This actually also goes back to the main point that Josh underlined for us, which I hope we can incorporate in our report, because it is directly relevant to journalism. If people are too ready to trust their gut reactions, this poses a very big challenge to education, because we have to try to educate people to think about more than just what their gut reaction is. And journalists are very aware that they need readers. If they can get to people's gut reaction with exactly the kind of example that Josh began with, one human interest story will beat out whatever number you put out there -- a thousand victims of a hurricane, ten thousand victims of a terrorist attack. One story that just gets people's emotional juices going beats that out.

The answer to that is not restrictions on the free press, nor, I think, a lot of education of journalists; the only antidote is the education of the public. And while this isn't bioethics, it's a subset of ethics. If our society invests very little in educating the public, the downstream effects on the appreciation of science, on the appreciation of thinking slow versus thinking fast, are going to be bad, not good. So while that's a very general statement, I think it's an accurate one, and it flows from something that we are learning from research: that people are wired to think fast rather than slow. And that's a serious, serious problem.

Herbert, you wanted to say something.

DR. LIN: I have a lot of sympathy with what you said a minute ago about educating the public to pay less attention to their gut and more attention to their analytic processes. I note with a certain regret that the advice we get from people who study the process of science communication to the public goes in exactly the opposite direction: the right antidote is not to try to persuade the public to think more slowly, but rather to get them to think fast in ways that are more favorable.

DR. GUTMANN: So you have to get people's attention first. This is, again, self-correcting what I just said: it's not that you can just assume everyone has an analytic intelligence. We don't ourselves; even those of us who pride ourselves on it don't. So you have to get people's attention first, telling stories and using examples -- as this Commission knows, I harp on this: we have to use examples. But then you have to move on to the data, the evidence, and so on, right?

DR. LIN: No disagreement.

DR. GUTMANN: It's been very helpful to have this because it's an important part of what we need to do.

Jim has a question from a member of the audience.

DR. WAGNER: Actually, we have several submissions from the audience, and it gives me an opportunity, first of all, to thank the audience and the many very thoughtful people and professionals that we have here: faculty and physicians and students. Some of these submissions are in the nature of "I would really like to communicate with you on my own research in free will," et cetera. I don't know that we announced it today, but we want that input. We don't use these sessions for that, but we want that input, and we ask for it to be submitted to info@bioethics.gov for our staff to look at.

But this question, from a Ph.D. student, Stephanie Hain or Hair, asks right on this subject about the sense that public scholarship and public attention seem to prey on the impulsive, whereas academic health sciences research is supposed to be more reflective. She's wondering about not just educating the public. She writes, "Can we or should we focus on strategies to change the structure of the Academy through policy and curricular change? Many, especially young researchers, feel pressure to over-interpret the results or overstate claims related to neuroscience and psychology research, especially in the realm of moral decision making." She comments that this may even help you get published in the bigger-name journals. One wonders what's happening with Nature and Science at times. So she is asking: is there something not just about educating the public, but about educating our students in this area?

You already mentioned giving them a little media prep. But there's also the question of understanding how we communicate most productively within the Academy and more carefully outside it.

DR. McGINN: Just a brief statement of fact: at Stanford there are two courses in the entire School of Engineering that have the word ethics in them. In the School of Humanities and Sciences there are none, and there is one in the School of Medicine. So I've always found it really interesting -- I don't mean this to be nasty -- that the natural science community seems to be very reluctant to introduce ethics courses as something required and an integral part of education at the undergraduate or graduate level for their students. At the undergraduate level at Stanford, every undergraduate engineering student must take a course dealing with issues of responsibility in engineering; it's a requirement. I don't know why that is, but, per Lederman's statement, I think the scientific community just wants to think that it's somehow a different beast, and that you really shouldn't traffic in those kinds of things other than fabrication, falsification, and plagiarism.

DR. GUTMANN: Dr. Mele, I'm going to give you the last word before I wrap up.

DR. MELE: Okay. I have just a brief thought on that. We have lots of studies, for example, showing that priming people with the belief that there's no free will increases misbehavior, and there are data like that in other domains too: if you tell people they don't have self-control, they'll get worse. So if people who are going to be talking to the press, or doing these studies, or thinking about exaggerating are exposed to hard data indicating that this kind of news has a bad effect, that might just make them think a little harder about being more accurate and not exaggerating. I think that could help. It's not even coercive. It's just informing.

DR. GUTMANN: Thank you. And I want to just say on the behalf of the whole Commission and our terrific staff that this has been very edifying, interesting, and it will help us in thinking through our report, and I want to thank the four of you for joining us and taking the time and thoughtfulness to give us good advice. So thank you very much and we are adjourned.

(Meeting Adjourned)

This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.