Transcript, Meeting 16 Opening Remarks and Session 6

Date

February 11, 2014

Location

Washington, D.C.

Presenters

Pamela Sankar, Ph.D.
 
Associate Professor 
Department of Medical Ethics and Health Policy 
Senior Fellow, Leonard Davis Institute of Health Economics
University of Pennsylvania
 
Barbara Herr Harthorn, Ph.D.
 
Director, NSF Center for Nanotechnology in Society 
Professor, Department of Anthropology
University of California, Santa Barbara
 
Mildred Cho, Ph.D.
 
Associate Director
Professor of Pediatrics
Stanford Center for Biomedical Ethics
Stanford University
 
Erik Fisher, Ph.D.
 
Associate Director for Integration 
Center for Nanotechnology in Society
Assistant Professor, School of Politics and Global Studies and the Consortium for Science, Policy and Outcomes
Arizona State University

Transcript

CHAIR GUTMANN:  Good morning, everybody.  I'm Amy Gutmann.  I'm President of the University of Pennsylvania and Chair of President Obama's Presidential Commission for the Study of Bioethical Issues.

On behalf of myself and our wonderful Vice Chair, Jim Wagner, who is the President of Emory University, we welcome everybody to the second day of our 16th meeting deliberations.

Before we continue, let me recognize the presence of our Designated Federal Official, who is also Bioethics Commission Executive Director Lisa M. Lee.

Lisa, will you please stand so everybody can see you?  Thank you.

What we're going to do today is continue our discussion of the ethical issues in neuroscience, focusing in particular on the integration of science and ethics from a project's earliest stages.  Let me underline the fact that we are focusing on neuroscience not because anyone, ourselves certainly included, believes that the ethical issues of neuroscience are completely unique or greater than in any other field, but because neuroscience is a relatively young science, because we have a national focus now, from the presidential level and in many foundations, on moving neuroscience forward, and because we believe as a Commission that it's important to integrate ethics early on in the stages of science, and this gives us an opportunity to advise the President and the government and other institutions on how best to do it, and that's why we've gathered experts to give us advice on this this morning.

For those of you in the audience who were not with us yesterday, I'd like to take a moment to explain how we take public comments.  At the registration table out front and with all of our staff members, there are cards, and we ask that you write down any comments you have on a card, and, as we did yesterday, staff members will pass cards up to Jim or me, and we will read the questions and address them.

It also gives me an opportunity to ask our wonderful staff to stand up so you can see them, and they have cards.

Thank you.

With that, we're going to start our first session, which is the implementation strategy for ethics integration, and I'm going to turn it over to our Vice Chair, Jim Wagner.

VICE CHAIR WAGNER:  Well, good morning, everybody.  Good morning to fellow Commissioners.

Oh, sorry.

CHAIR GUTMANN:  You know what I just thought?  Let's go around and have Commission members introduce themselves.

VICE CHAIR WAGNER:  That's fine.  Nita.

DR. FARAHANY:  I am Nita Farahany, Professor of Law and Philosophy and Professor of Genome Sciences and Policy at Duke University.

DR. KUCHERLAPATI:  Raju Kucherlapati, Department of Genetics and Medicine at Harvard Medical School.

DR. MICHAEL:  Nelson Michael, an HIV researcher at the Walter Reed Army Institute of Research.

DR. ALLEN:  Anita Allen, Vice Provost for Faculty at University of Pennsylvania, and Professor of Law and Philosophy at Penn.

DR. HAUSER:  Stephen Hauser, Professor and Chair of the Department of Neurology at UC San Francisco.

DR. GRADY:  I'm Christine Grady at the Department of Bioethics at the National Institutes of Health Clinical Center.

DR. SULMASY:  Dan Sulmasy, University of Chicago, Department of Medicine and Divinity School.

 

SESSION 6

 

VICE CHAIR WAGNER:  Thank you all.

And this is our session exploring the implementation of strategies for ethics integration.  Welcome to our panelists.  Thank you so much.

My role, initially anyway, will be to introduce each of you and to read those introductions into the record, and then we'll move one right after the other before we open for discussion.  Actually it will be as much a response to your presentations as a roundtable discussion.  We're combining all of that this morning.

So our first speaker is Dr. Pamela Sankar.  She will start things off.  She is an Associate Professor of Bioethics in the Department of Medical Ethics and Health Policy at the Perelman School of Medicine; Senior Fellow at the Leonard Davis Institute of Health Economics; and faculty in the Master of Public Health Program at the University of Pennsylvania.

She is also Chair of the Genomics and Society Working Group of the National Human Genome Research Institute.  Her major areas of interest include the history of DNA typing, social and ethical issues in genetic research, and operationalization and teaching of social responsibility in science.

Dr. Sankar, welcome.

DR. SANKAR:  Thank you.

It's an honor to be here.  I look forward to the discussion and hearing from my colleagues.  I think this is a very timely and very exciting topic, and I'm glad to be part of this conversation.

I've been asked today to speak briefly about the origins of the ELSI Program at NHGRI, some of its early history, and some of the early attempts that were made to deal with these issues within the genomics world.

So I'm going to speak briefly about that early history and about two conflicts that I think were inevitable given the way ELSI was constructed in the beginning, and then about two kinds of initiatives that they put in place and what I think is the relative success of each sort of as models going forward.

Okay.  So ideas or proposals for mapping the human genome emerged during the mid-1980s and very soon after the science was proposed, questions were raised about the possible social and ethical implications of this work, and two reports in particular, one by the National Academy of Sciences, the other by the OTA, Office of Technology Assessment, highlighted what would be the social and ethical issues, and this was covered in the press.  There was a lot of talk about whether or not this was a good idea from the social and ethical perspective.

The science went forward and was funded, and at a press conference James Watson, who was the Director of the Human Genome Project, was pressed by a reporter.  So they said, you know, "You have done all this work.  You have moved the science forward.  What have you done about everybody's concerns about the social and ethical issues?"

And Watson off the cuff invented ELSI, and he said, "We are going to have a program within the Human Genome Project, and it's going to be funded by three to five percent."  He said five or he said three and they changed it to five.

And so there you have it.  That's the origin of ELSI, all right, which is sort of a wonderful story and I think also part of the reason that there are some problems with ELSI.

So what you have then is a program created within NIH, within the very office doing the research that it is supposed to be watching over, and this is unprecedented, and it's unprecedented enough simply to have an enduring ethics program within NIH, let alone one within its own office.

They chose philosophers as their first two Directors, and they came up with a mission statement.  In the mission statement they came up with a list of research areas that they would look at, and they included topics like individual psychological responses to knowledge of genetic variation, uses and misuses of genetics in the past, and conceptual and philosophical implications of the human genome.

So this is NIH.  This is research.  Watson is presented with the issue of how do you deal with social and ethical issues.  He says you invent a research project.  That's what you do.  It's a research office.  It's like any other research office.

That said, he then is talking to Congress again, and Congress says, "You created this office, but how really are you actually going to deal with some of these issues?"

And Watson says, "Well, we're going to create policy."

So here you have a research office and now it's going to create policies.  There are no guidelines for this.  There's no statement about how they're supposed to do it, and the gap between what's the expertise that's required for basic research and the expertise required for policy is never really addressed, and that remains one of the fundamental problems within the ELSI project.

The other issue that never gets resolved is where are the ethical issues.  Where do they live?  What are we really talking about here?

And Watson's idea, and I think this reflects many other people at the time, is that the ethics are focused on where there are mistakes and misunderstandings and misconduct, and so his idea is that we have to ensure that society learns how to use the information only in beneficial ways. 

So there's nothing wrong with the science.  It's how the science gets applied or it's scientists who hype their findings or insurance companies that misunderstand the significance of carrier status.

So the solutions also are then external to the science.  They are technical.  They're practical.  They're targeted.  They're rule-bound.  They're essentially RCR.  They're, you know, responsible conduct of research kinds of solutions.  That's what he thinks the issues are, and that's where he would imagine the problems would be solved.

On the other side, you have the philosophers, who are the Directors of the program, and you also have all of the scientists who have been brought in by the promise of being funded to do basic science, basic research on very exciting and very interesting topics.

And many of those people are coming out of philosophy or they're coming out of sociology or anthropology, and they're asking questions of ethics.  They're framing ethics as the relationship between science and the common good.  So they're interested in normative inquiries.  So not only the RCR questions of is it true and is it fair, but also is it wise.  Ought it to be done?  And I don't mean the Human Genome Project writ large ought it to be done, but specific projects, and sociological inquiry.  How are topics chosen?  How is research designed?  For whom and in what ways will the results be useful?  Who benefits?  Is any of this research going to help with the question of health disparities, for example?

And so those are two very different concepts of where do the ethics live in looking at these massive, these big projects.  How do you approach the projects?  How do you look at them?

And over the years of ELSI I think that these two contradictions have maintained themselves.  I think in the early years, and continuing today, the contradiction of policy versus research has not been very productive in the format of ELSI.  I think that the latter one, the second one, though, has actually in some ways been very productive, and I'm going to talk just a little bit about different programs that they've tried to implement to deal with some of these issues.

So early on Watson once again is before Congress and pressed once again to come up with something, and he wants very much for there to be policies, and so the ELSI program itself doesn't seem to be moving quite at the pace he would like, and so he establishes the NIH/DOE Working Group on ELSI Issues.

And DOE at this time also had some ELSI money.  So that's why they're in it, and some of the policy work then gets shifted over to this group, and in some ways it's a solution because it relieves the tension or the pressure on the ELSI Group. 

In another way, though, what it does is divide the two groups, and what could be very fruitful discussions and fruitful debate is set aside because two organizations are dealing with these things separately.

And the working group also has -- this is a bureaucratic issue specific to NIH, but I think something to think about -- and that is that it has no real place in the bureaucracy of NIH.  It doesn't exist except on the days that it meets.  It has no enduring existence, literally doesn't, because that's the way that the bureaucracy works.

So that I think didn't work terribly well.  I think the other thing that they tried early on was research consortia, which they still have, and I think these were brilliant.  I think these are fantastic.

So they bring all of the grantees together who have proposed topics on similar issues, and put them in a room together twice a year, and they bring the scientists in as well who are working on those same topics, and sometimes now, in fact, they have to apply together on projects.

When I was on them in the early years, you didn't apply together, but you were in these consortia together, and I was on one about race and genomics.  They were fora where people could, indeed had to, learn to talk to each other.  They had to sort out what words meant.  They had to sort out where their priorities were, what their assumptions were, and some of those conversations were some of the most intellectually interesting conversations I've ever seen and ever participated in.  I also think that they resulted in the creation of a new community of people who can really talk together about how you can make science socially responsible, and they're informed from the ethics perspective and they're informed from the sociological perspective, and they're informed from the scientific perspective.

So I think that in the early years ELSI gave us some very good models that we could look to going forward, and I think that the policy versus research issue is something it would be nice if they were willing to deal with better, but I think the conflict that happens when you bring people together who have very different ideas of what ethics is, is actually very productive, and I think it makes both sides think harder about what they want and what they mean.

So I will close there.  Thank you.

VICE CHAIR WAGNER:  Very good.  Thank you.

We move on next to Dr. Barbara Herr Harthorn.  She is Professor of Anthropology and Director and lead principal investigator of the National Science Foundation's Center for Nanotechnology in Science at the University of California, Santa Barbara.

Dr. Harthorn leads an international and interdisciplinary team of researchers who study risk and benefit perceptions regarding new technological development among diverse expert, industry and public stakeholders.  Her group also studies modes of effective public engagement and deliberation.

She was a founding board member of the International Society for the Study of Nanoscience and Emerging Technologies, and is a Fellow of the American Association for the Advancement of Science.

Thank you for joining us this morning.

DR. HARTHORN:  Thank you.

Good morning.  One clarification.  It is the Center for Nanotechnology in Society, not in Science, and that is a crucial point or I wouldn't intervene.

VICE CHAIR WAGNER:  Yes, it is.  Thank you for that correction.

DR. HARTHORN:  I've been asked to speak this morning about how the National Nanotechnology Initiative employs advisory committees to inform ethics integration into nanotechnology research and development, and I'm going to read my comments to you.

I'll respond in three parts:  first, how the advisory committees are structured; second, how they inform ethics integration on nanotech research and development; and, thirdly, other processes by which knowledge about social and ethical issues is being incorporated into the nanotech R&D enterprise.

The NNI has had many successes and, as an early effort at integrating research on science in society at this scale, provides lessons that could enable us to do even better at achieving the BRAIN Initiative's goal of "maintaining our highest ethical standards."

The NNI is a sprawling interagency initiative with primary funding flowing to its many participating agencies.  The two main advisory committees established for the NNI are PCAST, the President's Council of Advisors on Science and Technology, which has served since 2004 as the National Nanotechnology Advisory Panel, as stipulated in the authorization bill, and the National Academies' National Research Council, which since 2002 has provided a series of independent reviews of the NNI.

Reviews by both advisory groups have been conducted primarily as closed door sessions where the committees gather information and invite expert testimony from many sectors, which is then synthesized into reports.

As a Societal Issues Center Director, I have provided oral and extensive written testimony for both PCAST and NAS.  PCAST has also incorporated our research results in their reports.

Both committees are composed primarily of scientists and engineers, and, in the case of PCAST, industry representatives.  So societal ethics issues are not a strong priority.  Both PCAST and NRC reports have included praise for accomplishments and, particularly in the case of NRC reports, highlighted areas of concern, including those impinging on ethics and responsible development.

Societal and ethical implications and public participation were firmly stipulated in the NNI authorization bill in 2003, and within a number of inter and intra-agency initiatives.  The National Nanotechnology Coordinating Office has taken the lead in conjunction with NSET and its working groups in organizing stakeholder workshops, and NSF has organized events like the Nano-2 Conference in 2010 to assess progress toward goals, including societal implications.

The 2006 NRC report described nanotechnology as "a potentially disruptive emerging technology" that would require a different approach to handling risks, benefits and uncertainty.  Responsible development of nanotechnology they stated requires "collaborations between chemists and toxicologists, as well as social scientists who desire to address the ethical and policy issues related to use of nanotechnology.  This new approach entails taking an integrated approach to ethical issues that will also involve the public in thinking through the implications of nanotechnology."

The report asserted the value of "informed outside review and societal participation in decision making about the introduction of significant new technologies into our environment."

PCAST's 2010 report lauded "the NNI's strong and growing portfolio of research on the societal implications of nanotechnology, nanotechnology education, and public outreach."

The NNI and particularly the NSF has been unprecedentedly forward looking in its attention to upstream societal ethics concerns.  In 2003, the NSF funded a series of societal research and education projects and in 2005 awarded two national centers at UC Santa Barbara and Arizona State University to address societal ethical issues.

Though modest in funding by science and engineering center standards, they jointly represent the largest such investment in the world.  Now in their ninth of ten years of funding, these centers and related societal research have produced an international network of scholars and a robust body of scholarly and policy-relevant research that touches on many of the issues discussed in this room in the last day.

Our center at UCSB has produced a range of published work on modes of incorporating diverse voices into public deliberation, on multiple party risk and benefit perceptions, worker safety, social, political and economic analyses of the global nano innovation system, particularly in China and the U.S., and comparative historical analyses of other emerging technologies.

Both centers have developed pioneering research and education programs for integrating responsible development research with their nanoscale science and engineering colleagues, and both are co-generating knowledge and engaging with a large range of stakeholders.

In spite of this output and the production of a rising generation of responsible development scholars -- and I would signal my colleague, Erik Fisher, down the table here -- I believe that the integration of societal ethics research into the NNI has primarily taken place through individual, informal, and bottom-up channels rather than top-down structures or formal processes for incorporation.

I provide below examples of ways we and others across these many agencies, committees, and networks have sought to integrate societal ethics research with nanotech R&D in the absence of formal mechanisms.  Safety is one place where the integration of advisory concerns about potential hazards of engineered nanomaterials has produced extensive research.

Environmental health and safety have been flagged as a critical area of interagency and, I would add, international coordination, with noticeable effects.  For example, two large ecotoxicology centers at UCLA and Duke were jointly funded in 2008 by NSF and EPA.  Though primarily focused on technical hazards of ENMs in the environment, they include societal issues.

For example, I lead a team of societal researchers in the UCLA Center for Environmental Implications of Nanotechnology working to integrate public and expert risk perception research findings with the scientific risk assessment enterprise.  This has involved extensive social science/nanoscience collaboration over the past five years, including the co-production of two international surveys of nanomaterials companies' workplace safety practices, and my ongoing work on the center's Executive Committee.

And I would note center structures are particularly conducive to such integration.

The prominence of research on societal implications of nanoscale technologies has already peaked in priority in the NNI, in my view, in the absence of emergent controversy.  Nano social science researchers have engaged the science and engineering community, toxicologists, policy makers, industry partners, NGOs, and diverse publics in the U.S. and abroad on issues of risk and benefit, governance, innovation, and the future.

A very partial list of such activities includes organizing societal components in regional, national, and international workshops, conferences and meetings about the global nanotechnology innovation system; reporting on public deliberations on imagined nano futures, hopes and concerns, including human enhancement; providing empirical evidence about public risk perceptions for risk management and communications efforts; sharing science and society news via blogs, news clippings, clearinghouses, and science cafes, museum nano days, NGO conferences, et cetera; collaborating as full partners, i.e., co-PIs, on research, education and outreach initiatives with nanoscale science and engineering colleagues; and finally, training a new generation of scientists and engineers who think beyond the bench by imbedding, in our center's case, nanoscale scientists and engineers in training into our social science enterprise.

The societal ethics component of the NNI is relatively small and just now reaching maturity.  The BRAIN Initiative, launched ten months ago, conveyed a strong desire to capture the public's imagination about the possibilities for science and technology development to solve critical medical problems and to advance understanding in vital new directions.  It acknowledges the critical anticipated role of ethical, responsible development in achieving this goal.

Determining the nature and extent of societal concerns that need to be addressed by the BRAIN Initiative in a systematic, ethical and scientifically valid way will require significant investment in social and behavioral research and in the infrastructure for its coordination and dissemination.

Thank you.

VICE CHAIR WAGNER:  Dr. Harthorn, thank you very much.

Our next speaker is Dr. Mildred Cho.  Welcome.

She's a professor in the Division of Medical Genetics of the Department of Pediatrics at Stanford University; Associate Director of the Stanford Center for Biomedical Ethics; and Director of the Center for Integration of Research on Genetics and Ethics.

She's a member of international and national advisory boards, including those for Genome Canada, the March of Dimes, and the Board of Reviewing Editors of Science magazine.

Dr. Cho established the Benchside Ethics Consultation Service at Stanford in 2005 and is Chair of the Working Group to develop a national collaborative Research Ethics Consultation Service.

We are delighted to have you here.  Thank you.

DR. CHO:  Thanks.

I'm here to tell you a little bit more about this Research Ethics Consultation Service, and I think that you might have this publication already that I sent previously that describes our concept of it.  We borrowed shamelessly from David Rothman's book in this title, but to sort of convey the idea that we were thinking of it as an analog to clinical ethics consultation, in the sense that it was bringing a wide range of perspectives and voices to the benchside.

And in this we described that our thinking was that the overall goal of our Research Ethics Consultation Service was to maximize the benefits and minimize the potential harms of research to society and, as my colleague here described, to serve as an alternative to more top-down approaches, seeking a sort of bottom-up approach, but one that might be scalable, and also as an adjunct to policy approaches that depend on scientists' awareness of and initiative about social responsibility in science.

And we sought to do this by providing a forum for consideration of issues beyond misconduct or responsible conduct of research issues and those that are required by regulation, such as human subjects regulation and, therefore, a forum for discussing broader issues of social responsibility.

And I will mention that Chris Grady is a member of the Research Ethics Consultation Service at NIH, which is probably the largest, oldest, most well established one in the country.

(Laughter.)

DR. CHO: But ours was actually a bit different in that I think the NIH one combines clinical ethics and research ethics services.  Ours was really focused on the research ethics separately from the clinical ethics.

And so I will tell you a little bit about our specific experiences at Stanford.  Our Benchside Ethics Consultation Service has to date provided over 100, probably about 110, consultations over the last eight years to researchers at Stanford and outside of Stanford, to biotech and pharma companies, to IRBs, hospital ethics committees, funding agencies, including at NIH, journal editors, and research participants, on a wide variety of consultation topics.

These were not limited to human subjects or clinical research ethical issues, and we were surprised that very few of these were actually about misconduct and RCR issues.  So we were happy about that.

And there has been a focus on early stages of research, so the planning stages or even what we call the sort of pre-planning stages, when people would just sort of have an idea about something, which we were also pleased about.

So in the first couple of years 28 percent of our consults were what we called in the planning or pre-planning stages.  Overall it's 36 percent.  So it's quite a significant chunk.

Just to give you an idea of what sorts of things researchers bring to this, and these are initiated by researchers, it includes sort of what we call project-specific consultation topics, which include things that you are very familiar with, such as incidental findings, which was the single most popular topic so far; questions regarding the sort of distinction between research and clinical practice; and questions about clinical research design and biobanking.

But we also had a number of researchers come to us, and with the project-specific ones, the researchers come and say, "I think there's an ethical issue here of this type; can you help me with it?"

In the thing that we're calling field-based topics, these are areas where a researcher doesn't know if there's an ethical or societal issue and says, "Can you just tell me?  We're thinking about moving into this area.  Can you just talk through with me whether there's anything I should be thinking about in terms of ethical or societal issues?"

And I have a little asterisk next to the examples that I have there up on the slide to indicate that these were consultations that resulted in a publication, a joint publication between us and the researchers about newly identified or interesting new ethical topics.

So Stanford is not unique in having a Research Ethics Consultation Service, and one of the fellows who was working at Stanford during the research I showed you previously has now studied this phenomenon more broadly across the country.  She did a survey in 2010 and surveyed all the institutions that had a Clinical and Translational Science Award from NIH.  There were only 46 at the time, but 33 of them already had a Research Ethics Consultation Service, and I can talk more offline about why that is.

But nine of those had been started prior to the CTSA funding, and only two of them had been in existence for six or more years.  So this was a fairly new thing at the time, in 2010, because of the CTSA funding.

Some of them provided consultations outside the institution.  The majority were supported by CTSA funding, although a quarter had no funding at all.  There were several services that were very active, in that they conducted more than ten consultations in the prior year, but most had conducted fewer than five in the prior year, so again an indication that this is sort of nascent activity.

The future of research ethics consultation services in the CTSA is a bit uncertain right now because the nature and extent of the support for it through the CTSA mechanism is a bit uncertain.  However, as Dr. Wagner mentioned, I am leading a national effort to create a collaborative service that goes across institutions and would provide consultation to anyone even if they were not part of the CTSA network.

So one of the issues that comes up in these, and that is faced by the CTSA Research Ethics Consultation Services and others alike, is sort of the question of whether we need such a thing.  Do scientists think they need such a thing, and what do they mean by ethical and social implications?

So, trying to answer this question, we did a survey of U.S. life scientists.  Eight hundred and fifty-six of them responded, faculty, staff, postdocs, and graduate students included.  About half of them reported that they would find a Research Ethics Consultation Service moderately, very, or extremely useful.  Thirty-six percent reported that they would find it personally useful to themselves, while 51 percent reported that they would find it useful for the institution, and 36 percent also reported that an ethical or societal question arose in the course of their own research at least once, where we left the definition of ethical or societal question open, and that was their own interpretation.

So, of course, that issue of what they think that means is very important, and potentially a barrier to utilizing these types of services.  So we did do some studies drawing on focus groups, interviews, and, again, surveys for this, and some of the potential barriers that we identified will not surprise anyone who knows scientists: that there may be a lack of awareness about the relevance of ethical and societal issues to their own daily practice in the laboratory.  So this quote illustrates that.

I think a major issue is that many scientists, myself included, don't have a clear idea of what an ethical issue necessarily is and whether it applies to them, and also a perceived lack of need for consultation, illustrated by this quote: "I feel I'm able to determine and resolve ethical and societal concerns pertaining to my research on my own."

And although those are not surprising, I think they are also things that we can address through education, in part.  They don't trouble me as much as this last quote, which is, I think, more serious: a perceived incongruence between the act of integrating ethics and societal concerns and their own scientific practices and goals, which is more of a climate issue.

So this quote says, "When you decide on different experiments that you want to do, the first consideration is, of course, to make sure you have the appropriate techniques to be able to do it, but secondly, is it an interesting question just scientifically speaking academically, aside from any beneficial contributions it could possibly make?"

And once you've established that, it's up to you if you want to also emphasize therapeutic applications.  So to this scientist, I think this illustrates this idea, which is common in the scientific community, that really doing science is about being good academically, being rigorous, and then it's optional if you want to sort of address these larger societal issues, and addressing societal issues means therapeutic applications.  So it's a very narrow concept of what that is.  So this is clearly a potential barrier to sort of this type of integration that requires initiative on the part of the scientist.

So another important aspect of this for us has been trying to think about what it means to have a successful program in research ethics consultation and what is a successful consultation service, which obviously requires a definition of what the goals are, which is not necessarily that trivial.  You have to really think about what the goals are.  Is it engagement in and of itself, or are there other products that you would like to see come out of that?

So one thing that we have looked at is whether uptake of the service has increased and whether the stage of utilization has moved earlier in the research process, which is something that we desired as a goal.  We think we've seen that trend, so that researchers approach the service earlier in their research process.

We have identified new and emerging ethical issues through the consultation, and it creates opportunities for researcher and bioethicist collaboration, as well as products such as publications, modifications to research projects, and policy changes.

So, for example, we've had I think at least three institutional policy changes as a result of the consultation and at least half a dozen publications which were joint researcher and bioethics collaborations.

One can also measure researcher satisfaction and attitudes, as well as something I think would be important in the long run, which is to try to get a handle on climate change, so the change in institutional and professional climate in terms of giving permission, encouragement or even expecting that scientists engage with ethical and societal issues.

And I just put up a quote, which was from an unsolicited email that I got from a researcher, which can be an indication of researcher satisfaction.

So, finally, to sum up.  I think that research ethics consultation is becoming a more established institutional mechanism in the U.S. for integrating ethical and societal considerations into biomedical research, one that could potentially be broadened outside that scope.  It does and has been able to address a broad range of topics and provides services to a range of clients.

It has provided opportunities for addressing not only project specific ethical issues, but also those facing whole fields going beyond responsible conduct of research and misconduct and also provides opportunities for interdisciplinary collaboration and hopefully climate change.

And I think that the services may benefit from institutional support, not just financial support, but support for the whole concept and leadership to encourage its use.

Thanks.

VICE CHAIR WAGNER:  Thank you, Mildred.  Indeed, "cool," as it said.

(Laughter.)

VICE CHAIR WAGNER:  Let's move to our final presentation for this panel anyway, Dr. Erik Fisher.  He is the Director of the Sociotechnical Integration Research Project, which coordinates a series of studies by imbedded humanists and social scientists in laboratories across a dozen nations.

He's also Associate Director for Integration at the Center for Nanotechnology in Society at Arizona State University, and an Assistant Professor with a joint appointment in the School of Politics and Global Studies and the Consortium for Science, Policy and Outcomes.

A founding editor of the Journal of Responsible Innovation, Dr. Fisher has also served as a guest editor for Science and Engineering Ethics, and co-edited the inaugural volume of the Yearbook of Nanotechnology in Society.

Welcome to you.  Good to have you here.

DR. FISHER:  Thank you very much.

VICE CHAIR WAGNER:  Let's have you push your button here.

DR. FISHER:  I'll be reading my remarks, but I also have slides to accompany them.

I'm grateful for this opportunity to address the Commission.  In my remarks this morning I want to suggest that ethical deliberation and scientific creativity can be mutually informative and even synergistic.

My talk derives from the NSF project called STIR, or Socio-Technical Integration Research.  After describing the project's methodology, I'll look at some of the typical outcomes, then offer reflections on conditions for effective sociotechnical integration at the lab scale.

STIR consists of a coordinated set of 30 studies, each one of which imbeds a Ph.D.-level humanist or social scientist into a research laboratory for 12 weeks to engage in a process of semi-structured collaborative inquiry.

This inquiry across the cultures attempts to explicate and ideally address the socioethical context of scientific research as an integral part of that research rather than as an add-on, a bureaucratic burden, or an act of compliance.

Explicitly attending to the socioethical context of their research can, however, represent a formidable challenge for many practicing scientists who not only lack the training, but more often than not are also disincentivized from engaging in ethical reflection while in the midst of their daily research activities.

Lab researchers also by and large operate within a cultural paradigm that posits that it is not possible and certainly not desirable to try to integrate socioethical inquiry into science.  Consequently the STIR project empirically investigates the possibility and utility of collaboratively working towards this form of integration.

In order to make sense of the process by which ongoing dialogue with an imbedded humanist can effect change in scientific research, I developed a framework for midstream modulation, which I can explain offline, that consists of the following three stages.

Stage one, de facto modulation: socioethical dimensions are pervasive in scientific research but are largely invisible to scientific practitioners.

Stage two, feeding back observations and descriptions of this context can render it more visible to practitioners and heighten their reflexive awareness.

Stage three, often this can lead scientific practitioners to engage in ethical deliberation and make voluntary changes in material research practices.

The interdisciplinary dialogues are structured by a decision protocol that is meant to be minimally invasive.  We treat the protocol as a generic model for laboratory decisions, using it to facilitate collaborative description of the research process as it unfolds.  Once imbedded into the research process, it helps to elucidate choice, expanding both the values and concerns that factor into research decisions and the perceived technical options for responding to them.

Now, initially what happens?  This process tends to produce a pattern of dissonance or paradigm misalignment or incongruence.  For instance, participating scientists will make statements such as, "We don't make decisions," or, "my research doesn't involve any ethics."  There's a number of different ways in which this can play out.

Such statements seemingly undermine the very premise of the study.  However, rather than confronting or disagreeing with their interlocutors or attempting to educate them by formal acts of pedagogy or retreating, the imbedded humanist simply proceeds with the protocol exercise of eliciting and feeding back descriptions of the research context.

This can, days or weeks later, lead to a eureka moment, a reversal in which the participating scientist undoes the previous statements and now sees his or her research through a more social scientific or ethical lens, as in the statement, "I guess this really is a decision."

I'd like to turn now to five brief examples.  In one case, previously tacit environmental issues surfaced and emerged over time.  This led to reflections and experimental changes which, in turn, helped the researchers resurrect a promising but previously abandoned project.

Eighteen months after the study's conclusion, a return to the lab revealed that the protocol was still being used, and that the changes that arose from the conversations with the humanist had been central to a successful dissertation defense.

In another case, researchers came to see that they held conflicting views on the role of science in society and that these views effectively deflected their own ethical reflections.

Also, after the philosopher completed his study, the lab group continued to debate an ethical issue he had sparked.  This led the group as a whole to make a behavioral change in their collective safety practices.

Learning can also be bi-directional.  One political science doctoral student became an adept experimentalist and ended up helping a biomedical researcher in another lab improve his experimentation processes.  More importantly, she also catalyzed a new form of public communication and patient engagement in outreach on the part of one of her labs.

Interactions and dialogues can also flag ethical issues and dilemmas that cannot be resolved at the level of scientific teams and that require more extended and systematic deliberation, for instance, by ethical experts or members of the public.

Finally, I want to emphasize the confluence of hands-on reflective practice and professional identity.  Only one of the five Ph.D.-trained scientists, when asked at the outset of this study, stated that integration was part of their work; the others stated that it was an add-on.

Ten weeks later after weekly meetings that led to numerous changes in research practice and communication, all five stated that integration of socioethical aspects was "part of the job."

This table displays preliminary results from a survey of the 30 studies.  The survey is in the field as I speak.  We anticipate 30 responses.  It shows that studies that imbedded the STIR protocol for a 12-week or shorter period of time produce relevant changes in both socioethical awareness and behavior.

I would like to emphasize three things that this approach does.  First, it builds and exercises ethical capacity by which practitioners identify, reflect on and respond to ethical issues.  Such capacity supports what the Commission has elsewhere termed prudent vigilance both at the individual investigator level and I believe more collectively in the scientific enterprise. 

I believe it is also relevant to the Commission's interest in education and in the abilities of neuroscientists to engage in accurate and effective public communication.

Second, it helps to bring to light material adjustments in the science.  This is important from the standpoint of downstream implications, disruptive technological developments and path dependencies, and more generally the social shaping and governance of technology.

Finally, it can identify ethical issues for broader analysis and deliberation.

In closing, I would like to identify four conditions for effective collaboration-based integration, and by "integration" I don't mean all forms of integration.  I know the Commission is considering numerous forms.  I'm speaking about lab level, small group, sociotechnical integration.

Number one, integration should be imbedded.  Any dialogues and exercises should take place alongside regular scientific research routines.  This will not take time away from the research, and it will help scientists perceive what they are already doing through a different lens.  It will also allow for transformative insights and learning.

Number two, integration takes time.  The process is organic and tacit and requires incubation.  I'm concerned that institutionalization could fail if it were embodied in a potentially alienating form of moral authority.  It makes more sense, I believe, to institute temporary visits or rotations that facilitate ongoing productively disruptive dialogues and that serve as a dynamic bridge between the cultures.

Integration also needs to be voluntary.  I believe this is true because one cannot compel virtuous behavior.  It must be chosen for its own sake, as a present means to a valued end or interest.  As a corollary, I think it's important for moral responsibility not to be vested solely in a moral expert.  I think it needs to be shared, and the moral capacity needs to be built and exercised throughout the scientific enterprise.

Finally, framing matters.  I would encourage the Commission in their recommendations to focus on building moral capacity rather than on applying moral expertise, to emphasize collaboration rather than compliance, and to recognize that even though this type of approach can add value to science, that's not the reason to do it.  Rather, added value is a spillover effect, which is important because it gives the lie to the myth that ethical inquiry is somehow anathema to science or will slow down R&D.

Thank you.

VICE CHAIR WAGNER:  Erik, thank you very, very much.

We've got plenty of time to work with you folks.  I think what I'd suggest is since this constitutes our only panel for the day, we might approach our questions to them as we would in a roundtable.

We've heard several suggestions about integrating ethics into the practice of neuroscience whether it's at the bench through consultation or through imbedded experts and protocols.  Bringing researchers together in consortia we heard; establishing focused centers to address integration.

You know our task, and we'd like you to help us with your top thoughts, your priorities, if you will.  What would you say is the single -- this is a repeat of the question we asked our panelists yesterday -- what is the single most important initiative or step that we as a Commission can recommend for effectively integrating ethics in the practice of neuroscience?

Why don't we just run down the table?  And, Pamela, we'll start with you.  The single most.

DR. SANKAR:   (Pause.)

VICE CHAIR WAGNER:  Do you want to take a pass?

DR. SANKAR:  I'm going to pass on this one.

VICE CHAIR WAGNER:  Okay.  But we're going to come back to you.

DR. SANKAR:   You're going to come back to me.

VICE CHAIR WAGNER:  Barbara.

DR. HARTHORN:  Thank you.

Well, I think I stated it at the end of my comments.  We have built research and there will continue to be research, but I think unless there is effective feedback -- and I know you alluded to this yesterday when you were talking about ELSI and the problems of siloing -- having stand-alone centers is not siloing, and it does elevate the inquiry in a way that would not happen if there were not national centers directed to it.  So that is a method.

But the structural integration through which the research and the policy work get integrated is something that the research community can't really produce on its own.

So I think there has to be both.  I think there has to be a structural mechanism whereby the societal implications work can be integrated into the enterprise.

VICE CHAIR WAGNER:  Okay.  Actually I heard two things from you.  One is to go ahead and have centers who focus on these issues but also to have mechanisms to bridge those back to the bench.

DR. HARTHORN:  Yeah, I smuggled two.

VICE CHAIR WAGNER:  Yeah, that's okay.  Pam was having trouble with the first one.  We'll come back.

DR. CHO:  Well, I think it would be really useful if the Commission could just stress that there's a need to have integration of any sort; that would be, I think, even a step forward from where we are because, as you know, I think there is just a reluctance to unsilo.  So --

VICE CHAIR WAGNER:  So to build a case, present the case and insist on it.

DR. CHO:  Yes, yes, and also that in order for existing policies to work, you know, the ones that were generated in sort of the more top-down types of approaches, they can't really work alone, and they require this integration, and so there's this sort of dependence relationship that hasn't been addressed on the back end.

VICE CHAIR WAGNER:  Good point.  As Erik pointed out, it seems to be a learned dependence though.

DR. FISHER:  Yes.  May I make two?

VICE CHAIR WAGNER:  Please, but only if you push your button.  Okay.  It's a deal.

DR. FISHER:  I would say that, regardless of what you end up recommending, you frame it in a way that is sensitive to and appealing to the climate that you've heard discussed so many times by members who have testified today and yesterday, the climate of research that is, I think, for some good reasons, skeptical of interventions.  It's a tradition in science and it's not going away.

So whatever you recommend, I think it needs to take that into account, ideally in a way that's inviting or challenging, and then the other recommendation I would make, or point, would be that, as the Commission has already suggested, you recommend a diversity of approaches.  Don't insist on one mechanism, and allow these to be taken up and experimented with and organically evolve.

VICE CHAIR WAGNER:  We'll come back to that one, I think.

All right.  Pamela.

DR. SANKAR:  Yes.  Now I can answer.  I hesitated because what I want to say is so fundamental that I can't imagine you'd actually put this in the report, but I think that science education has to be fundamentally restructured from the get-go, and that that is what is ultimately going to address these issues, and I think that scientists think the way they think because their education leads them to think that way.  They're siloed because that's how they're trained.  It should be no surprise that they exist within a particular world when that is how science education is organized, and I think that that is something that has to be dealt with not just because of the advances in neuroscience, but the advances in science across the board.

And nanotechnology, genomics, all of these things are raising such fundamental, essentially life threatening, globally threatening and also promising issues that the day has passed where we can have commissions or consortia and actually feel like we're fundamentally dealing with it.

VICE CHAIR WAGNER:  I think that blends well with what Mildred was saying.

Nita, and then I've got Dan and Christine.

DR. FARAHANY:  I heard Jim use a phrase which I thought was really helpful, building the case, and, trying to underscore what you all have said, Erik had a quote up on one of his slides where a scientist said something like, "But why should I want to talk to the public?"  And it seems to me like those two things go hand in hand, which is we have to build the case in order to be able to fundamentally restructure science education.

So I think it's clear that ethics integration is essential, but how do we get there?  What do you think the best arguments are for building the case in order to fundamentally restructure science education, in order to provide what I think is really essential, the ethical rationale to transition from where we are today to a model of ethics integration?

VICE CHAIR WAGNER:  Any of you, sure.

DR. SANKAR:  I think that one of the things that we found, and I think Erik's work really supports this, is that a lot of the most interesting motivations end up coming from the scientists themselves.  For instance, Mildred Cho and I worked with the Human Microbiome Project, and what's been fascinating with that group is that when you bring together people trained in ecology with people working in clinical sciences, the ecologists are able to formulate arguments about the fundamental importance of ethics in a way that's so different from the clinicians, and the clinicians back to the ecologists, that they come up themselves with the justifications for how these things should happen.

And so I think that there really is a sort of synergy that can happen, that they can create the justifications because they see the flaws in the reasoning and the dangers and the excitement or the potential benefits in each other's work.

VICE CHAIR WAGNER:  Others?  Others to that response, to my question?

DR. FISHER:  I'll agree that motivation tends to come from the scientists.  One of the interesting contrasts, you know, one of the few contrasts, I think, between the consultation model and the imbedded humanist model is, as we were discussing before the panel, that consultation is initiated by the scientists, while the imbedded experience is initiated by the social scientist or humanist.

However, it's interesting that the very first pilot study in the STIR project was an invitation from the lab director to me to spend time in his lab.

So I think that's a powerful engine to tap into.  I'm a little concerned about using approaches that are future based, which I think have been suggested in the past to motivate ethical behavior.  I'm also aware of sort of institutional attempts to frame scientific responsibility as, you know, "We've got this covered.  Rest assured, don't worry."

That might be good in terms of, you know, maintaining social calm and order, but I don't think it taps into the powerful creativity, motivations and insights that are really needed to drive this forward.

So one way -- it's hard to really give you a direct answer -- but one way to frame this would be in terms of an invitation to conduct an experiment.  Let's see what we can do to close this gap.  All right?  If you can get buy-in on any level, you know, that, well, we should do more or, you know, we might need to look into this, then the second step is to say, well, we really admit that we don't have mechanisms.

Admitting it, or, you know, one's ignorance, is a resource for invention.  I think those are two powerful first steps, and then the third can be the pursuit and invention of mechanisms.

DR. SULMASY:  Well, thanks for giving us some good examples about integration which I very much think is a positive step forward and a way to proceed, and I think we're as a Commission probably very enthusiastic about it.

But nothing is perfect, right?  And I was wondering whether you would help reflect with us on what you see as potential downsides to this method, and in particular the question of whether too much integration and cooperation leads to losing one's critical edge and becoming sort of co-opted, right?  If you're the integrated scientific ethicist at Myriad Genetics, you're not going to be against gene patenting, for instance.

So that's a potential downside, and I wonder if you could reflect for us on how to guard against being co-opted as we move towards an integrated, cooperative model.

VICE CHAIR WAGNER:  Dan, in addition, your question fits so well with one we have from the audience.  May I blend those two together and make sure we get them answered?

Peggy Mason is a professor at the University of Chicago.

DR. MASON:  Which happens also to be from University of Chicago.

VICE CHAIR WAGNER:  Oh, Peggy, welcome.  I can't believe I looked at that.  Welcome back, Peggy.

And she also is cautioning, and so that's why I thought we'd put Peggy's question in here.

"I'm alarmed by the idea of ethical prescreening. The idea that some science is wise and some is not forms a slippery slope toward thought police even.  All scientific questions should be asked," she says.  "As long as the execution is ethical, the plan is rigorous, and the question of fundamental interest, then the work is valuable."

So in sum she says, "I agree with the researcher's views that alarmed Dr. Cho.  The most exciting scientific progress will require open intellectual inquiry, curiosity that is not constrained by someone else's view of the greater good."

So constraint is one of the concerns on the negative side here.  So can we throw the negative and the co-opted question -- I'm being co-opted -- out to the panel?  Who wants to start?

DR. HARTHORN:  I'll take a stab here.  So at CNS UCSB we've used a different and, I would say, complementary model to the one at ASU, which is, instead of imbedding social scientists and humanists into the research enterprise, we have imbedded nanoscale science and engineering students into the social science enterprise.  So we have materials scientists traveling to China to co-interview industry developers and government laboratory people.

So what that does is engage them with a wider issue about what is beyond the bench, and a lot of them already have questions and doubts about their commitment to the bench, and we have found that we actually get a crop of very, very top students who are interested in this, which also speaks to Erik's point about this being voluntary.  That has to be there.

But I would say from our experience -- we have run this as an experiment for eight and a half years now -- the people who show up are the ones who want to do this and whose advisors are supportive, because they continue their bench science on track, so this is something they do in addition.  If you put the resources there and you have a context in which it can take place, this is not hard work.  This is really a pleasure.

VICE CHAIR WAGNER:  Mildred, yes, I would hope to hear from you.

DR. CHO:  I'll try to take on both of those topics.  The co-opting issue, I think, is a very important one, and we've considered that in the setting of research ethics consultation, which started out as an institutional phenomenon.  So one of the reasons why we have been wanting to try to do this at a sort of national, collaborative level is to take it out of individual institutions, to mitigate what would basically be a conflict of interest, especially when consulting for your own researchers.  So that was part of that motivation.

And then this sort of concern about constraint:  I sympathize with that concern, but I would also argue that there isn't really such a thing as open, free science without constraints.  There are all kinds of constraints on science:  where the funding comes from, where it doesn't come from, all sorts of things.  So I wouldn't look at it as a constraint but as a sort of taking into consideration a larger framework and framing of scientific questions.

So maybe considering that the way that you as an individual scientist are seeing the purpose of a study may not be given the same priority as other stakeholder communities.

VICE CHAIR WAGNER:  I think, as I understand the constraint question, it's more about the genesis, the coming up with an idea.  The concern -- or let me not say concern -- the notion that there is something pure about curiosity that could be blunted by imagining that even my curiosity to discover more is somehow going to be regulated by, you know, an ethics team in prescreening or some other entity.

I think for academics that's a concern about the freedom they have to imagine where they want to take the research.  Again, this may be back to building the case and making the point that these things should work in synergy as opposed to being regulatory.

But it sounds like Amy has got a thought on that.

CHAIR GUTMANN:  Well, following up on that, I think that is building the case for the integration of ethics and science, because fundamental to building the case is understanding that being an ethical scientist doesn't mean that ethics is all about constraining what you do.  Ethics is about understanding.

So let's begin with the fundamental ethical issue of neuroscience, as I understand it.  It's understanding that the enterprise of neuroscience has a social meaning and social purpose; that neuroscientists as scientists are contributing to the public good in a way.

That's an ethical statement.  It's not a scientific statement.  It's an ethical statement, but it's an ethical statement about neuroscience.

Now, if that's true, if the reason we are commissioned as a Bioethics Commission to speak on the ethics in and of neuroscience is because there's a public good here, multiple public goods, then understanding why ethics and neuroscience need to be integrated is understanding neuroscience to begin with.  It's understanding the part that isn't narrowly scientific that propels President Obama to have a national initiative called the BRAIN Initiative.

And I think if we can begin there and make the case, as Nita said, we will at least win half the battle.  The other half of the battle, I think, as Raju said, is coming up with an understanding of what models can work -- not a single model, unless you can think of a model of models, you know, a pluralistic one.  You might say any model that works is going to have to be capacious in the different ways in which you integrate, because, as I would like to -- and I may actually.

I was talking to our Executive Director yesterday.  I'd like to write, to co-author, an article on this about bioethics and neuroscience, "Never too Early or too Late," right?  It's never too early to learn why it's important -- and it's the case that everything that's good science has good ethics integrated -- and it's never too late to have a consultation about a complex project that you're engaged in.

So I think for people like me who have taught ethics and public policy and science and ethics, it seems so fundamental, but there are so many misconceptions out there, as you could see from your quotes, among very highly educated, excellent people in both ethics and science.

VICE CHAIR WAGNER:  I think we can talk about this one for quite a -- oh, Erik would like to as a matter of fact.  Go ahead.

DR. FISHER:  If I could sort of offer some remarks that dovetail with the last comments and also respond to the question that's still on the table, I believe.

So in response to this idea of constraining the science, I think that as the Chair has suggested, it's possible to unpack the imbedded values that are already part of the enterprise.

Now, of course, logically that makes perfect sense.  The difficulty is that socially these values might have simply been promised by scientists who were seeking funding.  However, this is the gap.  This is the exciting gap:  how to bridge it.

And I think by asking fundamental questions -- why are you doing this?  Why do you think your lab director wants to get this grant in particular?  Why does your institution promote it in this way?  Why did the funding agency write the solicitation in the following way? -- by asking these questions you can really unpack what's already there, and in the process the scientists can take ownership, rather than the moral expert insisting on what the logical case is.

I think it's a subtle move, but it allows for a co-responsible approach.  So I agree that we don't want to unnecessarily constrain scientific creativity, and I would suggest that the word "curiosity" which has been used a few times is potentially an engine for both ethical care and scientific creativity.

Co-optation was a concern.  I think that's a very real concern.  Maintaining a critical edge, avoiding conflicts of interest -- that's one reason that I think, whether it's a consultation or imbedded or another model, it's important that there's an end to the interactions, and that the interactions aren't permanently housed in some sort of officer-type capacity, which would take the excitement out of them.

I think it's also important for there to be a community, so that the interacting social scientist or ethicist can come back to his or her critical community and reflect on what's going on, particularly if there are a number of people who are doing this at the same time, and obviously there are networks in both of our areas that are emerging to try to maintain that community.

And I think maybe the last point I'll make is that there's another concern I have, and that is advocacy, both sort of unwarranted advocacy and also the perception of advocacy.  So that's partly a barrier to entry, but it's also a potential abuse of authority.

I don't think there's any question of the science being slowed down.  I think we know that.  But what about the social science?  There's a way in which the social scientist has to constrain him or herself, and on some level this is a sort of rear-guard action.  I don't know that it's front and center to what the Commission's concerns are now, but if you're successful, you don't want to set up a situation in which scientists are sort of running for cover because they feel that they're going to be judged and they have no mechanism to argue back.

So I would put advocacy as a third sort of concern.

VICE CHAIR WAGNER:  No, I do appreciate that.

I do want to move to our next question, but I think we need to face the fact that in this room, with this crowd, ethics is so much in our forebrains that it may be difficult for us to understand, and we need to pay attention to, a scientific audience whose quality of science we hope to help improve.

But there is a notion that knowledge in and of itself is amoral, and that it is in how we pursue knowledge and how we use knowledge that ethics needs to be applied.  And it's a hop, skip, and a jump from saying knowledge is amoral -- that's a freeing thought -- to the notion that we are going to be bound somehow, and I think for a particular audience that is something we need to be sensitive to as we advance and, as I say, make the case that we're actually helping to amplify -- not necessarily slow down, but actually accelerate -- the value of knowledge that is obtained by ethical means for ethical purposes.

Christine, you're next on my list.

DR. GRADY:  First I want to thank everybody for being here and for your comments.

I have a very specific question, but I want to respond to this issue that has just been discussed, because I think perhaps it is a matter of framing a little bit.  I mean, I understood your question, Jim, in reiterating Peggy's question, to be about constraint and prescreening, and I've never heard anybody say prescreening.  I think prescreening is the wrong notion of what we're talking about.

And so we have to be really careful about how we think about and talk about what's happening --

VICE CHAIR WAGNER:  How the question is --

DR. GRADY:  Absolutely.  And along that line, I've noticed in several of the presentations not an absence of the word "ethics" but a move towards "genomics and society" and "science in society" -- you know, characterizing these activities without using the word "ethics."  I would love to hear whether that's an intentional move or, you know, just sort of how things are evolving, but I'd love to hear some thoughts on that.

But my specific question has to do with funding.  I heard, you know, that setting aside three to five percent was a move that was sort of off the cuff, but it allowed for ELSI.  We wouldn't have done it without the funding.

Mildred said 24 percent of the CTSA research ethics consultation services have no funding at all.  The projects that you both described sound like they're NSF grants, which are hopefully renewable.  I mean, I don't know.

So if we're going to say -- and, I mean, obviously none of this can happen without funding, right?  So if we're going to say anything about funding, what should we say?

So those are my two easy questions.

VICE CHAIR WAGNER:  Please, Erik and then Barbara.

DR. FISHER:  I think it is a conscious choice to use the word "society" or "socioethics" or "societal and ethical issues," and that's perhaps a meta-ethical decision.  It's a recognition that there are pluralistic ethical frameworks and values, and that the last thing we want to do is have the very first conversation be an argument about which ethical framework to use.

VICE CHAIR WAGNER:  Barbara had a comment, and then Peggy or Pamela.

DR. HARTHORN:  So, yes, what he said.

(Laughter.)

VICE CHAIR WAGNER:  Okay.

DR. HARTHORN:  So really it's a move to see ethical frameworks as values, and I think we tend to use value language more than ethics language, and particularly in the international context, where the cross-cultural values that pertain are not necessarily all coming out of the Western philosophical tradition, it's a particularly critical issue.

Also, part of what we are doing is trying to understand, without preconception, what all the different stakeholders in the enterprise have as their conceptions of ethical practices.  We are open about that, and we actually see that part of the benefit of having an enterprise like this is that you can operate from empirical data rather than intuitions, and the intuitions that a lot of the people involved in the enterprise have don't necessarily align with the empirical data that we collect.

And on the funding:  NSF funds national centers on a ten-year horizon.  So we are approaching sunset, and we are thinking about what comes next, and as a Director I would say that one of the reasons I would come here and talk is that I think we really are looking to pass the baton, if that's the term we would like.  We've built up this whole body of work, and we would like to find a way to maintain it and convey it to the next generation of researchers in neuro and synthetic biology and the other emerging technologies around.

Thank you.

VICE CHAIR WAGNER:  Pamela.

DR. SANKAR:  Well, I'll start backwards, because I was going to address that at the end, but now that you've raised it:  I think it's important that the funding come from the government agencies that are funding the science, and although there might have been something a little uncomfortable about putting ELSI directly in NHGRI, I still think that the responsibility lies with the organizations that are funding the science.

And just to follow up on the comment that you're going to sunset at ten years:  I know Mildred's program, which is funded through NHGRI, is also going to sunset at ten years, and it's very interesting, because the Genome Centers that have been funded on five-year cycles aren't sunsetting.  The ethics programs are sunsetting, right?

And so the question is really why it is treated differently.  Now, there are practical issues and there are budgetary issues, and I understand that, but I think it's something to look at:  why is it that certain kinds of things should presumably take on their own institutional support?  I mean, the idea is that the activities that they've encouraged through the CEER Programs at NHGRI should now be supported by the institutions, which I think is a good idea, but then arguably you could say that about the science as well.

I do think that one of the things that circles through this, in terms of people not using the word "ethics," is that when you use the word "ethics," we find that they assume you mean informed consent, and so we don't use the word "ethics" at all.  I mean, I don't when I talk to people.

But I think that that speaks to a broader issue, which circles back to the comment I wanted to make in the first place, which is that the reframing is not simply a rhetorical strategy.  I also think that social responsibility is not just a phrase but a concept that needs to be made robust.  It needs to be understood; it needs to be detailed; it needs to be operationalized, and it is not at this point.

Social responsibility, although I think people think about it and take it seriously, is in many ways just a phrase.  Try to look for a definition of the social responsibility of science, and you will not find very many, and I think that that's important, particularly when you get into these arguments about scientific freedom versus social responsibility.  I would hope that eventually that becomes a false distinction.

And I think that the idea of scientific freedom is very often thrown up in the face of social responsibility, and the problem is that because social responsibility is such an easily collapsible and easily fought-against concept -- that somehow it's constraining your curiosity -- there's no traction there to make the argument back.

And the point would be there's never pure science.  There's never pure knowledge.  There's always a reason that you've done what you've done, and the point is that if you're training people to integrate social responsibility from the beginning, then what they're pursuing is going to be something that serves the common good.

And I think that is very idealistic, I know, but I think that's the direction that this should be going.

VICE CHAIR WAGNER:  Well, we might get some arguments on the purity of knowledge, but I think your observation that ethics programs are sunsetting on a different timeline from their associated science makes the case that we are not yet fully integrated, right?

Anita.

DR. ALLEN:  Wow, this has been such a rich conversation, and a lot of what I believe has been well stated by Dr. Cho and Dr. Gutmann and others about the relationship between science and ethics, so I just want to say something, which kind of makes it sound like a manifesto or something, and then I want to specifically address something that Dr. Sankar said.

So I guess the manifesto thing I want to say is that, you know, it seems to me that the enterprise of normative ethics applies to all human activities equally, including science, and the notion that we heard from Dr. Peggy -- I've forgotten your last name.

DR. MASON:  Mason.

DR. ALLEN:  Mason.  Thank you, Dr. Mason.

-- that all questions should be asked, I agree with that, but to say that all questions should be asked does not entail that all questions should be scientifically researched in every context, without regard to consequences or without regard to funding sources.

And similarly, valuing the free play of ideas, which I do passionately, does not entail that no researcher is ethically responsible for the questions asked or the issues raised by his or her research, or that scientists have no obligation to speak out about misuse or misinterpretation of their research.

So, you know, having strong libertarian values about ideas and thinking and expression does not take us all the way toward answering the question of what kinds of reasonable ethical constraints there must be, and those constraints need not be in the form of prescreening or anything even close to that -- just recognizing that there are going to be some limitations on science, as there are limitations on every human activity that we might engage in.

So this manifesto comment relates somewhat to what I want to say to Dr. Sankar, which is that I really loved your historical memory of what happened with ELSI in the context of genetics, because I was actually a member of the National Advisory Committee for Human Genome Research way back in the early '90s and mid-'90s, and, as one of the few humanists on that committee, I was the liaison between the advisory group and the DOE-NIH ELSI Working Group.

And one of the reasons why that group did not work -- and I think you're right; it didn't work -- was because that group of ELSI people -- lawyers, social scientists, ethicists -- wanted more of scientists and more of the government than they were willing to deliver.  Here's just one example.

The ELSI Working Group wanted the scientific community whose research was being funded by the federal government to say something about the terrible problem of misunderstanding of what genetics was teaching us about intelligence and race, and the scientists did not want to touch that issue.  The government didn't want to touch that issue, and yet it was one of the main ELSI problems that the ELSI Working Group saw as being a product of the emphasis on human genome research and human genome discoveries.

So that kind of timidity, I think, is a problem, and the ELSI Working Group, as you say, had no bureaucratic home.  They were powerless to do anything about what they thought were important things to do.

That said, I want to praise that working group for being out in the forefront on the question of insurance discrimination, because when the BRCA discoveries were made, that was the group that convened a conference to discuss what should be the limits on insurance companies' use of genetic information, and that was a very productive meeting which years later led to the GINA statute being enacted.

And also I think that that group did a good thing in raising the problem of how do we use genetic information.  So applied to neuroscience, I hope that the scientists and the government will not be overly timid in allowing us to have a conversation about the hard questions that will arise from what we learn about brains, and if we find brain differences between men and women or this group or that group, will we have the courage to talk about that and to deal with that as a nation?

VICE CHAIR WAGNER:  More of a statement I think.

DR. ALLEN:  Yeah.

VICE CHAIR WAGNER:  Thank you for that.

Raju, I have you next.

DR. KUCHERLAPATI:  Thank you very much.

I have a pragmatic question, but to lead into the question:  you know, listening to the four of you making the presentations, I sort of classify them into two different approaches.  One approach is the top-down approach, where the NIH or the NSF would say that we will support these programs and give support to people who would go and investigate and then try to publish papers and so on, and the second approach, which Mildred and Erik talked about, is the bottom-up approach:  locally trying to develop programs, trying to help investigators, and trying to integrate that.

We heard, you know, good and bad about all of those different things, but here, just now, you know, the goal of this effort is to try to bring ethics to neuroscience, and the neuroscience community is very large.  You know, the annual neuroscience meeting attracts more than 20,000 people, and that's probably half of the investigators in this country or even less.  So it's a very large community.

So the goal is to try to reach, you know, these 50,000 or 75,000, you know, investigators, students and so on and so forth.

So Pamela suggested one approach is to try to educate, you know, everybody, and long term, I think, you know, incorporating these ethical principles in educational programs and so on would be of benefit in the future.

But we have an issue today.  How do we reach this very large group of individuals and community?  Obviously what the President's Commission does addresses the issue, but I don't know how many people read, you know, Presidential Commission reports.

What are your ideas?  How do you think that we would be -- we should try to reach this very large population of, you know, scientists and get them into the fold?

VICE CHAIR WAGNER:  Go ahead.

DR. SANKAR:  I think, as other people have said today, and it's true, that it has to be multiple approaches.  There is no single approach.  I know that one of the things that has influenced people beyond the ELSI community, in terms of reaching out to genomics people, is imbedding the request for ethics within funding proposals, so that if the genomics people want research funding, they have to have an ethics piece.

Now, there are many reasons for this.  It has not always worked very well, but it is certainly a way to reach out beyond the ELSI community itself, because what it leads the scientists to do is to find people in their universities or close by that they can collaborate with, and it leads to some very interesting interactions.

So I think there is the education piece, and then you have to think of what combination of strategies you're going to use to sort of get this message out, and one certainly would be through funding strategies.

VICE CHAIR WAGNER:  Mildred, did you have a comment?

DR. CHO:  I completely agree with that, and to somewhat get back to the question that Chris raised about the funding, I think it is important that the funding for ethics be in some way maybe not necessarily dependent on the funding for the science parts, but I do think that what you said is important in terms of making it part of the science to think about the ethics, as opposed to having them separate, and I think that can be done through the funding.

Of course, that does lead to potential co-opting issues and so forth, but I do think that having the support be seen as coming from the agencies that fund the science elevates it in a way that's very important, institutionally as well as through other sorts of linchpins in the scientific process, such as journals and journal editors and so forth.

And another way that neuroscientists might be engaged specifically is through their very large professional societies, and I know that there are  some people trying to do that from within those professional groups, but that's another way that the genomics community has, for example, tried to do some of that.

CHAIR GUTMANN:  Oh, no, I think this is very helpful, and I think we have to be both theoretically grounded and practically astute.  So it is true that the potential for co-optation is there when ethics is integrated, but the alternative is having ethics totally unintegrated.  There are going to be philosophers and theologians of bioethics who can work without grants, sitting and doing really important and interesting work without government funding.

But if we want, as we do and as we've been charged, to have the vast neuroscience community take the ethics of their profession seriously, then it seems to me that requiring at least the large grants to neuroscientists -- which are collaborative in nature, as most of them are -- to ensure the ethics of what they're doing is a way of incentivizing both those working in ethics and those working in neuroscience to come together and address real, real issues.

Why is this important not only for science generally but for neuroscience, as we'll talk about in the next session when we discuss a preliminary report?  It is hard to find, in the vast array of neuroscience research, research that doesn't raise really important ethical issues.  It's just the nature of working in the area of the human brain and human neurology that it raises a lot of ethical issues.

So I think this is an important and productive way forward.

VICE CHAIR WAGNER:  Nelson.

DR. MICHAEL:  So my question to you is really one of approach.  Typically, I think, historically, ethics training evokes responses in those who are taking that training, and it's not necessarily a positive response, because it's essentially an approach for risk mitigation:  you need to take this training just because we tell you you have to -- that never works well -- or because if you don't know this body of evidence, something bad could happen to you.

I mean, these are just things that humans don't like to hear.  How do we approach research integration with ethics as an opportunity for research scientists versus risk mitigation?

And for funders, how do you approach the implication that this is really a requirement, versus the idea that integration really does strengthen the research enterprise and finds efficiencies in the process, by its very nature developing an interdisciplinary approach, which I think greatly strengthens science?

If I'm a researcher and you lead me to understand that by having a research consultation I am going to be talking not just to ethicists but thinking about novel ways that there may be second- or third-order effects of my work -- not, again, for risk mitigation, but because there may be effects that have societal implications -- then this could allow me to think of a novel synthesis that gives me a novel approach, one that asks questions that funders will be interested in, and, therefore, I'm putting myself in a more competitive position.

I think if we take that kind of approach, you're going to have a lot better uptake than to say, one, you need to take this body of work because it's a requirement, it's a duty, and, two, that it will keep you out of trouble.

VICE CHAIR WAGNER:  You know, if you don't mind, I'm going to do the same thing that I did to Dan because we have a question that I think amplifies what you're saying.

Is Lyric Jorgensen out there?  Well, the question, though, is your point:  "Many panelists have suggested there is value in adding an ethicist to a research team.  Could you describe some of the principles or guidelines for determining when this sort of addition is value added?"

And I think that's what you're talking about, and it goes on really to ask your question again.

So what about that point?  How do we -- again, almost making the case -- how is it that this is not rejected as something that is supposed to keep us out of trouble, but rather accepted, or at least in addition, as something that helps increase the value of the work we want to do?  Fair?

Erik.

DR. FISHER:  It's a wonderful question, and I want to spin off of what I think the previous round of discussions alluded to, and that is the funding mechanisms at the research solicitation level.

So there are obviously a number of targets and challenges, but assuming that there is buy-in by the agency, and assuming that there's a multi-pronged approach where there's certainly formal education but there's also the sort of research-in-motion-with-ethics idea, I think it's very important to have program officers and their reviewing panels have discussions about why we're doing this, because otherwise, no matter how well stated the language is in the proposals, it will just be ignored.

So how do you have those discussions?  Where do you have those discussions?  There are multiple fora for that, but I think if those program officers were championing this issue, or at least aware of that potential bottleneck, they could creatively address it.

DR. SANKAR:  I think that one of the ways to think about this is to move away from an RCR concept, which tends to be rules and checklists, and move toward social responsibility, which is a much more positive engagement.  It's what we can do to make the science better, not what do I have to do to not get in trouble.

VICE CHAIR WAGNER:  With that, Pamela, that's our final word.  To all of you, thank you so much for your contributions this morning.

(Applause.)

This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.