Transcript, Meeting 16 Session 3

Date

February 10, 2014

Location

Washington, D.C.

Presenters

Nicholas Steneck, Ph.D.
 
Director, Research Ethics and Integrity Program
Michigan Institute for Clinical and Health Research
Professor Emeritus of History
University of Michigan
 
David E. Wright, Ph.D.
 
Director
Office of Research Integrity
U.S. Department of Health and Human Services
 
Peggy Mason, Ph.D.
 
Chair, Ethics Committee
Society for Neuroscience
Professor, Department of Neurobiology 
University of Chicago


Transcript

CHAIR GUTMANN:  May I please ask everybody, especially our presenters and Commission members, to take seats so we can reconvene?

It has been a very stimulating session so far, and I know it will continue as long as we can continue.  So I will begin now with our next session, which focuses, appropriately enough, on professional ethics and professionalism in neuroscience, and we'll hear first from Dr. Nicholas Steneck, who is the Director of the Research Ethics and Integrity Program of the Michigan Institute for Clinical and Health Research and Professor Emeritus of History at the University of Michigan.

Dr. Steneck chaired the University of Michigan's Task Force on Integrity and Scholarship and the Public Health Service Advisory Committee on Research Integrity.  He helped establish and recently directed the Office of Research Integrity and National Institutes of Health Research on Research Integrity Program and has published many articles on important topics, including the history of research misconduct policy, responsible conduct of research instruction, and the role of values in university research.

Dr. Steneck is also a Fellow of the American Association for the Advancement of Science.

Welcome.

DR. STENECK:  Thank you very much.

I'm going to read my remarks so that I will get through hopefully in the ten minutes that I've got.

I appreciate the opportunity to address you this morning to discuss what characterizes a virtuous scientist and the factors that have been shown to influence researchers' ethical behavior.

Before turning to this topic, let me clarify the focus of my remarks.  Scientific research raises difficult ethical questions.  The issues I am interested in and have been working on for over 30 years, however, are not matters of moral disagreement.  There is widespread agreement that scientists should be honest and avoid practices such as fabrication, falsification, and plagiarism.

The fact remains that some scientists are not honest and do misbehave, raising two questions: why do scientists still misbehave? And what can be done to foster higher standards for the responsible conduct of research?

The principles set out in codes of ethics identify the virtues that should guide scientists.  The international code that I'm most familiar with and helped to develop, the Singapore Statement on Research Integrity, begins with four principles:  honesty, accountability, professional courtesy and fairness, and good stewardship of research.

There are more fundamental virtues underlying each of these principles, but as a basic starting point, statements of principles provide a reasonable summary of the attributes of a virtuous scientist.

For the day‑to‑day administration of science, however, scientists work more in a duty-based than a virtue-based world.  The Singapore Statement summarizes these duties under 14 responsibilities.  In other contexts, the duties of scientists are formalized as guidelines, policies, rules, and regulations.

Professional scientists have a duty to meet the standards set by their profession or field of study.

Research on research integrity confirms that scientists do not always live up to their responsibilities.  The frequency of misbehavior varies with perceived seriousness.  Studies suggest that at least one in every 100 scientists has, over the last few years, engaged in behaviors their colleagues regarded as seriously wrong.

The frequency of lesser misbehavior ranges anywhere from a few percent to a high of 50 percent, or one in every two researchers.  Putting these two categories of misbehavior together, the overall behavior curve for science follows a normal distribution, with low but not insignificant numbers at the extremes and unacceptably high numbers in the middle.

The costs of misbehavior in science are significant.  One study put the price of a single misconduct investigation at over $500,000.  Cleaning up after misconduct is also expensive, such as the cost of retracting the dozens of articles involved in some misconduct cases.

Two researchers in Europe recently alleged that the use of fraudulent research and clinical recommendations for surgery could have led to 800,000 unnecessary deaths over the last decade.

Another study of equipoise in cancer clinical trials suggested that overly optimistic pre‑trial estimates of efficacy could render nearly a third of all trials worthless, money down the drain, so to speak.  Misbehavior in research is an economic as well as a moral issue.

Why do researchers misbehave in these ways?  Thirty years ago when it was widely believed that research misconduct was extremely rare, the few bad apples theory was widely offered as the best explanation.  In response, policy makers established a definition of bad apples:  fabrication, falsification, and plagiarism, and put in place procedures for bringing allegations, conducting investigations, and reaching conclusions.

A decade later, when the evolving misconduct policies failed to rein in the problems of research misconduct, researchers and policy makers turned to a lack of proper training as the next best explanation.  This led to new requirements for training and a dramatic increase in responsible conduct of research courses and materials.

A decade after that, when better training failed to stem the continued trickle of new cases, attention turned to research on research behavior, encouraged by journal editors and a small grant program established by the Office of Research Integrity.

Some of the new findings about factors that contribute to misconduct and misbehavior include the following.  Workplace conditions make a difference.  Researchers who work in strained and unjust environments are more likely to engage in irresponsible practices.

Researchers are not good at evaluating their own behavior.  They tend to believe that they are more faithful to the accepted norms of science than their colleagues.

Some researchers are not good mentors.  In a review of confirmed cases of research misconduct, ORI reported that 71 percent of mentors had not reviewed the raw data and 47 percent had not set standards for responsible research.

Self-regulation in science has significant shortcomings.  Scientists seldom report suspected misconduct.  Large numbers of flawed publications slip through the peer review process.  Some are not retracted when the flaws are discovered.

These are examples, not an exhaustive list of findings. They are designed to suggest that there are many explanations for irresponsible behavior in science and considerable room for improvement.

Most of the solutions to the problems identified are generic to science and, therefore, difficult to solve within the context of a specific funding program such as the BRAIN Initiative.  But programs such as this can set examples.

I will end with two suggestions and one final caution.  Training:  U.S. requirements for RCR training are largely unfunded mandates.  PHS and NSF have supported RCR resource development and a few training-related research projects, but the fact remains that the cost of delivering integrity training rests on research institutions and faculty.

This has resulted in heavy reliance on general, low-cost, modest-effort, online training programs and the volunteered or required effort of faculty, many of whom are not up to date on responsible research practices.  It would be instructive to see what could be accomplished if the BRAIN Initiative could at least include an opportunity, and perhaps a requirement, to develop field-specific efforts to foster responsible conduct of research as a part of the overall research program and not simply as the general RCR training now required.

Second, best practices:  providing field-specific RCR training is hindered by the fact that many areas of research, including the neurosciences, have not developed detailed guidelines for responsible practice.  Some effort to improve standards and best practices as part of the BRAIN Initiative could promote sharing, improve peer review, reduce waste, and perhaps even prevent some irresponsible practices.

In both suggestions, my primary interest in raising awareness and promoting understanding is focused on duty.  I am aware of the fact that additional ethics training might help scientists understand why virtue in and of itself is important and act more responsibly in the process.

Personally, I feel, however, that the clarification of duty is more important.

The caution I would like to end with stems from the troubling fact that much of science today seems to be driven not by duty or virtue, but by a utilitarian ethic, or what some have called market science.  This is true of all science, but there is evidence that some special programs, such as the BRAIN Initiative, create unique market forces that negatively impact the integrity of science.

Scientists sometimes resort to unrealistic expectations to get their programs moved to the head of the funding queue.  Irrelevant research can be spun in ways that make it seem relevant.  There is evidence that project‑based science can encourage mediocre science and a loss of sense of truth, creating what one research team called, with particular reference to the neurosciences, bubble science.

None of this bodes well for the integrity of the research.  Research misconduct does seem to follow the slippery slope principle.  Stepping onto the slope of minor misconduct is a significant risk factor for going on to commit major misconduct.  If focused initiatives increase the pressures that incline some researchers to engage in irresponsible practices, then it would seem these initiatives should include additional measures to promote integrity in research.

I have suggested two such measures, but there are many more that could be explored during the discussion.

Thank you for the invitation to present and for your attention.

CHAIR GUTMANN:  Thank you very much, Dr. Steneck.

We'll hear next from Dr. David Wright.  Dr. Wright is Director of the Office of Research Integrity at the U.S. Department of Health and Human Services.

He previously served as Michigan State University's Assistant Vice President for Research Ethics and Standards, as well as its Intellectual Integrity Officer, where he oversaw most of the university's research regulatory compliance activity.  So he has real insight into on-the-ground operations at the university level.

Dr. Wright is currently Professor and Chairperson of Michigan State University's Department of Community, Agriculture, Recreation and Resource Studies ‑‑

DR. WRIGHT:  Not anymore.

CHAIR GUTMANN:  Oh, not anymore.  Okay.

Thank you for joining us this morning.

(Laughter.)

DR. WRIGHT:  I have been at ORI for two years, but that's where I was previously.

Thank you for the invitation.  I'm going to offer just a few bullet points of what we know from the perspective of ORI about research integrity in emerging sciences, and first I'm going to give you just a little overview of what ORI is and does, and I'm going to follow very closely on some of Nick's comments.

This is where we live out in Rockville, Maryland.  We're in the same building as the Office of Human Research Protections, from whom you're going to hear, I suspect.  Our mission is to promote the integrity of PHS supported extramural and intramural research programs by administering the PHS research misconduct regulations, 42 CFR Part 93, which require responding effectively to allegations of research misconduct and promoting integrity in research.

What research misconduct means specifically in the regulations is fabrication, that is, making up data or results and recording or reporting them; falsification, which is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record; or plagiarism, which is the appropriation of another person's ideas, processes, results, or words without giving appropriate credit; and research misconduct, of course, does not include honest error or honest differences of opinion.

So this is the amount of business that we've done since 1992.  We've made 224 misconduct findings.  There appears to be a growing number of findings in clinical research, which may have some bearing on what you are interested in.

The total number of allegations we received per year before 2007 was 225.  In 2012 and '13, that number went to 425, and it appears to be stable at that number.  It's almost a doubling.  There are a number of very interesting possibilities about why that may be occurring, but it is a huge jump in business.

As technology related to the production and dissemination of scientific data has changed, particularly programs like Photoshop and the use of the Internet, science has become increasingly visual and, therefore, the number of cases we get that involve images has grown pretty dramatically, and there's a short chart of that.

So with that background, research misconduct issues appear to us to be mostly the same across disciplines and fields of research regardless of that field's maturity, with some nuances.  In newly emerging fields, there may be techniques that grant and journal reviewers are not yet familiar with that could make detecting fabrication and falsification somewhat more difficult.

On the other hand, in hot new fields more and more people are watching developments closely, making it more likely that misconduct might be discovered, and so those factors are offsetting.

Hot new fields attract lots of new people in pursuit of funds and fame and create intense competition.  That combination may make misconduct more likely.  I think that's a reference to the market science that Nick Steneck alluded to.

As money pours into emerging fields where new discoveries can have near term applications in medicine or industry, conflict of interest issues may become more prominent, and conflict of interest issues can, we believe, relate to the commission of research misconduct and can sometimes impede institutions from handling it well.

In clinical neuroscience, publications ‑‑ that should say "research results" ‑‑ have evolved from case reports to results of hypothesis driven research.  The incidence of misconduct is relatively rare in the former and may be increasing in the latter.

In terms of preventing research misconduct, there are two major approaches.  One is training in the responsible conduct of research, which Dr. Steneck alluded to.  The other, which we also endorse but believe has received less attention than it should, is rigorous data management:  policies enforced by PIs who review all grant proposals and manuscripts, along with side-by-side evaluation of the raw data supporting those proposals and manuscripts, before they leave the lab.  We believe that would be crucial in preventing misconduct.

Thank you.

CHAIR GUTMANN:  Thank you very much.

Our final speaker is Dr. Peggy Mason.  She is Professor of Neurobiology at the University of Chicago.  She's taught medical students, undergraduates and graduate students, and won awards for her teaching and mentoring.

 Dr. Mason has served on the American Physiological Society's Publications Committee, and was a member of the Society for Neuroscience's Responsible Conduct Working Group, which revised the society's responsible conduct of research guidelines.

Dr. Mason now chairs the recently formed Ethics Committee of the Society for Neuroscience.

Thank you for joining us.

DR. MASON:  Thank you for the opportunity to speak with you.

My name is Peggy Mason.  I'm Professor of Neurobiology at the University of Chicago.  I've also been an active member of the Society for Neuroscience for more than 30 years, and I've served on a number of SFN committees in the more recent years.

Since late 2012, I have served as the Inaugural Chair of the Society for Neuroscience's Ethics Committee.

At the outset, I want to make clear that my remarks have not been vetted by the Ethics Committee, the Society for Neuroscience, or the University of Chicago, nor are my remarks endorsed by any of those entities.  Of course, my perspective is informed by my various professional experiences.  Nonetheless, the views that I express here are solely my own.

I will start by briefly telling you about the goals and operations of the Ethics Committee.  I will then turn to what I view as the major challenges that we face in ensuring that scientists hold to the highest ethical standards in pursuing understanding and discovery.

Finally, I will consider approaches to increasing engagement and enthusiastic buy‑in to ethical scientific standards.

So first, a brief overview of the Society for Neuroscience's Ethics Committee ‑‑ very hard to say, by the way.  The Ethics Committee's charge is to ensure consistency in the treatment of ethical issues across the spectrum of Society activities.  In reality, most of what we currently do is to evaluate allegations regarding scientific communications with the Society for Neuroscience, which publishes the Journal of Neuroscience.

These communications come in the form of submitted or published abstracts, manuscripts or published articles.  Details of our process for evaluating complaints are laid out in the first of the two Neuroscience Quarterly articles that I sent to you.

Importantly, we have two goals.  The first and foremost goal is to maintain the hygiene of the scientific literature.  In essence we provide a defense for the scientific literature which, like a child, cannot defend itself.

To that end, we recommend the rejection of manuscripts and retraction of papers with serious errors, actions designed to protect the accuracy of the scientific record and ensure forward scientific progress.

Our second goal is to reprimand those who have failed to adhere to Society for Neuroscience's policies on responsible conduct.  At present, reprimands take the form of punitive sanctions which prevent individuals from participation in Society for Neuroscience activities or communications.

In pursuing our two goals of scientific hygiene and punishment of offenders, we follow several principles.  First, as elaborated upon in the second Neuroscience Quarterly article that I sent to you, the Ethics Committee does not consider the backstory behind misrepresentation of data when considering whether and how to rectify the scientific literature.  Intent is immaterial when it comes to our deliberations regarding retractions and corrections.

Second, all identifying information about complainants, alleged offenders, and bystanders, such as co‑authors, is restricted to the fewest number of people possible.  For example, the identity of complainants is known to me so that I can inform them of the outcome of their complaint, but it is not known to the Ethics Committee members.

Related to this, sanctions are not made public.

Third, we are committed to due process for individuals accused of wrongdoing, who are always afforded right off the bat a chance to explain themselves.

Fourth, even in situations where junior scientists' hands have been those that produced an ethical violation, for example, using one image to illustrate two different conditions, we will always consider that the senior scientist shares culpability.  The senior scientist's job is to train and provide oversight of scientists in training.  The buck stops with the senior scientist.

Finally, the Ethics Committee is aware of the profound uncertainty regarding the motive behind any particular violation.  To paraphrase the Shadow, who knows what intentions lurk in the hearts, or actually the brains, of neuroscientists?

We, therefore, act with commensurate modesty.  We aim to motivate scientists to educate themselves and correct ethical missteps.  We do not aim to end scientific careers.

With that background, I now turn to the challenges that keep laboratory scientists from seriously and openly engaging with ethical issues.

First, engaging scientists who face myriad concerns with respect to funding, experimental progress, regulatory compliance, education, administration, and personnel is not easy.  Pressing tasks overwhelm most working scientists on a day‑to‑day basis.  It would be profoundly unrealistic to expect scientists to facilely and voluntarily spend time on ethics training over preparing for a class, finishing a manuscript, or sending in a grant application.

In this era of shrinking resources and with all of the things in the status quo, dedicating time to ethics training is simply not going to win out over publications or grants.

A second challenge is the great fear and panic engendered by the thought of being accused of an ethics violation.  Because of the huge emotional meaning assigned to ethics violations, most scientists have constructed a view of those who violate ethics as "other," as different.  They imagine a person who has secretly and intentionally fabricated data late at night over the course of many years, thereby polluting multiple published articles.  In my view, such egregious cases do occur, but they occur rarely, and they are likely to continue to occur rarely.

The vast majority of our cases do not involve wholesale data fabrication or multiple articles.  Most would never have happened if more attention were paid to ethics and scientific rigor within the laboratory environment: discussion of relevant papers in journal clubs and laboratory meetings.

I have also heard American scientists distance themselves from ethical violators by asserting that responsible conduct is really only a problem in other countries.  An informal analysis that we've made suggests that ethics cases arise from countries in rough proportion to the origin of article submissions.

Some scientists think that only junior scientists violate ethical standards, and others think that primarily senior scientists do so.  In reality, violating ethical standards cuts across all levels of the scientific ladder.

In sum, there are diverse myths about ethics that scientists believe, but what these myths all have in common is that they serve to distance ethics as a problem of others.

Finally, neuroscientists are a community of individuals bound to each other by ties that stretch over decades and often extend into the personal.  Scientists don't want to get their colleagues or friends in trouble.  They feel sure that so‑and‑so did not mean to do anything wrong and implicitly throw the scientific record under the bus because of personal feelings and allegiances.

I can say from personal experience that informing even a perfect stranger of an ethical allegation is emotionally taxing.  I never want anyone to have acted unethically.  I always want a happy ending.  I always want to be wrong.

In closing, how can we promote virtuous scientific practices?  How can we increase engagement and enthusiastic buy‑in to thinking about ethics, talking about ethics, and rigorous scientific practice, turning that talk into a better reality?

First, I think we really want to dial down the emotions by debunking the myths.  For example, many cases are not anything approaching career ending, and they are easily resolved by education of the authors.  One of my motivations in writing articles for the members of the Society for Neuroscience is, in fact, to talk about the facts of ethics and ethics violations.  I want the sober facts to replace the myths that use Hitchcockian methods to tap into people's greatest fears.

Second, I believe that we need to replace punitive sanctions with educational opportunities.  Rather than creating an adversarial relationship with an accused, let's use a more collaborative approach that can turn past violators into advocates for ethics and scientific rigor.  Let's use the process of ethical investigations to convert scientists PI by PI to believers in the value of the deliberately ethical approach to research.

To that end I am likely to invite past violators into future discussions.  I want to show with my actions that I don't consider one ethical violation as an FFP brand to be carried forever more.  Truthfully, I would never have known of some of the more arcane violations, such as copyright ones, if I had not sat on committees that considered these issues.  Ignorance does not evil make.

Rather, ignorance is the breeding ground for education.  Let's foster open discussions that accurately portray the interesting dilemmas that working scientists face and the responsible solutions that will best serve forward progress.  That approach has a shooting chance of engaging working scientists.

CHAIR GUTMANN:  Thank you.

You all were models of staying within your time limits.  So thank you for that as well.

And it's open for questions by Commission members and discussion.  And I would like to point out, as a compliment to all of you, that you've given us the backdrop, if you will, for what we've been talking about as a recommendation about integrating ethics early on into the practice of neuroscience.  And as Raju said earlier, while the BRAIN Initiative is focused on mapping of the brain, our charge from the President is not meant to be so narrow as to apply only to that research, but to neuroscience research more broadly speaking.

So if I could ask any one of you to just follow up since you were so nicely within your time and say what you think we should be considering as specifically as you can in recommendations to integrate ethics into neuroscience research.

Peggy, maybe we should start with you because you are on the front lines of vetting some, but we're just taking a step back.  So give me some example of what you think we could recommend that isn't being done right now?

DR. MASON:  Well, first, I want to reiterate a comment that I think Nick made, that the RCR requirement is unfunded, and that is a big problem.  That's just a big problem.

CHAIR GUTMANN:  Yeah.

DR. MASON:  Secondly, the RCR requirement is educating the students.  It's not doing anything for the PIs.  So we need continuing education.  We need to address some kind of continuing education for PIs.

CHAIR GUTMANN:  We should underline that because a lot of younger scientists are pointing out that while the requirement took effect so younger people have had it, a lot of the PIs didn't have that requirement.  So it's a‑‑

DR. MASON:  I don't think I did.  If I did, it was not memorable.

CHAIR GUTMANN:  You played catch up, Peggy, very quickly.

DR. MASON:  Right.  So I learned from being on a committee, and in fact, that's a way to teach PIs, is to put them on various committees so they can realize how important this stuff is.

The other thing I'll say is that the Society for Neuroscience has the resources, has the intent, has the motivation, and has the experience to do this.  I have painted a slightly Pollyanna picture, and there are cases that are not so restricted to one publication or even to publications only in the Journal of Neuroscience, and when I communicate with other journals, they simply don't have any experience or resources to handle an allegation, and that's a big problem.

 So we're the only one really in our field that's putting a lot of energy into it.  We're willing to take a lead on it, but it's really a problem.

CHAIR GUTMANN:  Good.  Nick.

DR. STENECK:  I'm delighted to see what you're doing.  When I was working with ORI, we actually had a grant program to encourage professional societies to do more to clarify and set up procedures, and the program was ended because the societies were not interested.  They were largely not interested because it was more challenging than they anticipated, and so the small amount of money that ORI had to give out for it was not enough to really get a major society to do anything.

But the challenge I face at a very large university is how do you get a discussion in a laboratory setting where it's very busy and where your students in a classroom on RCR will tell you, "Professor Steneck, you don't know what it's like in a laboratory.  You don't know what's going on there."

So when we teach postdocs, for example, in a course like this very practical one we teach, they come in with things like, "I have five different sources of data.  How do I integrate them?"  And, you know, nobody has told them how to do that.  You'd be surprised at the number who are not even told how to keep a laboratory notebook and what goes into a laboratory notebook.

I use the example of how we reduced infections from sticking needles into people, catheterization.  You train them.  You post what's there in the clinic.  You talk about it.  You debrief afterwards, and what happens?  The infection rate drops down.  I don't know what it was, 50, 60, something like that.

We don't do that in research integrity.  Everybody is so busy doing what they do there's no time to sit and talk about these issues, and that's where the discussions have to take place.

CHAIR GUTMANN:  No, no, if you have a response, please.

DR. WRIGHT:  Two things.  One is the RCR mandate that NSF and now NIH have is a mandate of time and attention, but not content.  It hasn't been carefully thought through, nor has there been any systematic attempt to evaluate efficacy.  Both of those things are huge challenges.

I would add that institutions, in my opinion, speaking for myself as Peggy invited me to say ‑‑ thank you ‑‑ could do a lot more in terms of institutional requirements for data management and control.  We frequently see allegations of research misconduct where the respondent has no laboratory notebooks at all, sometimes after this has been going on for some period of time, and where manuscripts or grant proposals with fabricated or falsified data leave the laboratory because the PI has looked only at summary data, not the raw data underlying the experiments.

Faculty at universities, of which I was one until very recently, are understandably resistant to Draconian intrusion into their laboratories by the institution, but as a kind of peer mandated measure to increase vigilance of data integrity, that would do as much as any one thing, it seems to me, to prevent misconduct.

CHAIR GUTMANN:  Thank you very much.

I have a list of almost all of our Commission members, Christine, Anita, Jim and Raju, and Nelson.  We're getting there.

Yes, please.

DR. GRADY:  Thank you all for your comments.

I want to just push you on one thing that all three of you just alluded to, which is the challenge of engaging scientists in what some of the ethical issues are while recognizing that they have all these other pressing demands on their time, and that they see the most important thing as, you know, doing what they need to do to get their grants, to get their publications published and get promoted, whatever.

So I don't think that the answer to that ‑‑ I mean, and you all alluded to this as well ‑‑ is the online RCR courses, because people do them while they're multitasking because they don't really want to do them.  They're just required to.  They don't remember them.  They don't probably learn much from them, and so I don't think that's the solution.

So that's one question.  What is the solution?  How do you engage?  How do you build in education in a way that people feel engaged, but it doesn't take away from all their other pressing demands?

And then the second question I have is, I mean, you've mentioned the importance of keeping notebooks and monitoring data before it goes out for publications and things like that.  I'm impressed by, you know, what's happening in clinical research, where I always describe it as a team sport.

You know, we have multi‑center, multinational clinical trials where, you know, they engage 1,000 people in one trial.  How do you monitor the notebooks in that kind of case, or how would you get people engaged in responsible conduct or research in that kind of case?

CHAIR GUTMANN:  So hold those questions.  Hold your answers because I'm going to take two at a time because I just did a quick calculation and there's no way we'll get to the end otherwise.

So Anita.  I'll clump them into twos.

DR. ALLEN:  My question is very, very straightforward.  I was struck by the fact that in listing the problems, you listed fabrication, falsification, lack of accountability, plagiarism, and copyright violations, but I didn't hear conflicts of interest, as such, among the concerns.  So I just wanted to know where conflicts of interest fit into your picture of the ethical issues that neuroscience might encounter.

CHAIR GUTMANN:  Good.  And any one of you.  I'm not going to ask all three of you to respond to all of these because of time constraints.  So who would like to take it?

Nick.

DR. STENECK:  Let me respond to Christine's comment there.  I have a conflict of interest here which I have to declare because I work for an online training company.  So you need to know that.

But it has its place, and that is that there is a certain amount of basic knowledge that you have to have.  The average researcher doesn't have that basic knowledge.  So you've got to in some way impart that, and the most efficient way to do it is to do it online.

You then have to do the subsequent training in the laboratory, obviously, to do more than that.

What I think is you have to convince researchers that this isn't a diversion from their time.  The people I know who do it well, it fits in with their normal work routine.  It isn't something that all of a sudden, oops, we have to stop and talk about integrity.  It's just a part of their life that they do it.

We're doing a grant proposal.  What are we doing on conflict of interest?  What are we doing on information, and so on?  Where are your books?  I want to see them when I walk into the laboratory.

So a good laboratory, it's just incorporated into that laboratory.  It's the fact that the ones who haven't done it don't know how to do it, and they look at it as an imposition.

CHAIR GUTMANN:  What about Anita's question about conflict of interest?  Peggy.

DR. MASON:  Yes.  It's there.  I didn't mention it, but we recently just published a slew of corrections because people had not accurately declared their conflicts of interest.

DR. ALLEN:  Can you say just briefly what they were?

They had money from a drug company or from a device manufacturer, or what was their conflict of interest?

DR. MASON:  They had patents on molecules that were part of the article, and these are published articles.  So we simply added in that there was a conflict, this conflict of interest, and that the reviewers were unaware of it at the time of the acceptance.

CHAIR GUTMANN:  Jim.

VICE CHAIR WAGNER:  I had a question on research culture.  We've been talking a lot about policing and practices and policies, and maybe it's just an impression that I have, but actually it's connected with questions of conflict of interest and bias.

You mentioned hypothesis driven research, Peggy, and you all dealt with it.  It seems to me in the physical sciences more than the biological sciences, when one states a hypothesis it's also understood to be a statement of bias.  In other words, I'm exposing my bias.  This is how I hope the experiment will work out.

And so we see hypothesis shared very, very freely.  Does the Higgs boson exist?  Does it not?  Many labs go to work on that.

I wonder if that is a practice that could be employed.  Is it just my impression that it's not employed as often in biomedical research?  And if not, is it something that would help us address this bias question in the research culture?

You wanted to do two at a time, right?

CHAIR GUTMANN:  Yeah.  Raju.

DR. KUCHERLAPATI:  I would like to ask a question of clarification about the magnitude of the problem.  So one is Nick said that by one measure, one of every two investigators, you know, could be considered to have engaged in misconduct.  And, David, you know, in your presentation you said that the Office of Research Integrity looked at 200 cases until recently, and they went up to 400 cases, and considering the overall number of efforts around the country, that number seems to be very small.

So are we talking about two different things?   What is the real magnitude of the problem that all of us and the Commission has to worry about?

CHAIR GUTMANN:  So why don't we start with Jim on openness about hypotheses and research bias, and how common is research misconduct.  Who wants to take the first one? Nick?

DR. STENECK:  There are two worries about conflict of interest.  One of them is that you just didn't declare it, and somebody needs to know that, and you violated the rule by not declaring it.  So that's misconduct of the sort that's not defined.

The more troubling one is that we know from research that funding actually impacts outcomes, that if you test the same drug, the people that are funded by the drug company and the people who aren't funded by the drug company come up with different answers.

I worry more about the second one than I worry about the first one because the first one, okay, it's a mistake.  You educate the person, you do whatever you want to, and that's the end of it.

The second one, if you don't know those built‑in biases, you're going to make decisions about the use of that drug based on the research.  If you have too much of it, that's going to skew it in another direction.  So that's where I worry about those kinds of subtle things:  they're not misconduct, but in the long term they're having a much bigger impact on the research and on its value.

CHAIR GUTMANN:  They're harder to expose.

DR. STENECK:  Yes.  And I can answer the question very quickly about misconduct rates.

CHAIR GUTMANN:  Yes.

DR. STENECK:  What I said is misconduct is about 1 percent.  Okay?  So by my estimates, ORI sees one in every ten to one in every hundred cases that exist out there.  I think David will agree that they don't see all the cases.  Whether he and I agree on the ten to a hundred or not is another question.

The one in two are these questionable practices, which in the long term, as I say, may have much more impact on the actual validity of the record and the public decisions we make.  And that's what's most important.  And those are the ones that can be as high as 20, 30, 40 percent.

CHAIR GUTMANN:  David, please.

DR. WRIGHT:  Just quickly, on conflict of interest.  Another issue there is when there's an allegation of misconduct and it's handled by an institution that has a major financial commitment to the area of research and its major faculty and that sort of thing, do they pursue those allegations as diligently as they would other ones?

And, for example, do they protect whistleblowers and others who may be collateral damage?  I don't want to over‑emphasize this, but it's a nontrivial problem from our point of view.

And with relation to your question about the magnitude of misconduct, I largely agree with Nick.  We handle, because of our limited resources, only very serious cases.  So the cases ‑‑ let's say 425 that we take in for a year; those are allegations, as I say ‑‑ we might open cases only in a fraction of those and make 13 or 14 findings a year.  But that represents just the tip of the iceberg, not the magnitude of the problem.

CHAIR GUTMANN:  Thank you very much.

Nelson and Nita.

DR. MICHAEL:  So I'll revert to the training I got considering the uniform I'm wearing.  In the military, the mission defines the task, the task defines standards, and we train to standard.  And that's a mantra you can ask any private in the U.S. military.

So Nick, you opened your conversation by saying you're concerned that we haven't defined standards well.  I will tell you that since I do lots of online training, not just for research ethics but everything the Army can think of that results in online training ‑‑ probably two or three times a week, not kidding ‑‑ training in some ways has become unrestrained by any definition of standards.  And that's what I'm really concerned with.

It seems like in your initial comments, you also seemed concerned that we haven't defined the standards we're trying to train people to.  Given that individuals don't want to really uptake the training anyhow because it diverts them from what they perceive as the critical mission of executing science and developing resources to do more science, it would seem to me that if you had truly better‑defined standards, then the training wouldn't be as onerous.

CHAIR GUTMANN:  Nita?

DR. FARAHANY:  I just wanted to pick up on something that Nick had said about big science and big science projects, and about your unique concerns about big science projects, that they can create particular concerns about distortions of science, about motivations that are at odds with ethical progress in science.

And I was hoping you could speak a little bit more to that and unpack that, since one of our charges is to think about the BRAIN Initiative specifically and about unique ethical challenges that may be posed by it, and any thoughts that either of you may have about whether you agree or disagree that that's the case.

CHAIR GUTMANN:  We have a standard here.  We go Nick, David, Peggy.  So why don't we do that.

DR. STENECK:  Nelson, I'm not sure what actually the focus of your comment was at this point.  My personal view is that industry ‑‑ I haven't looked as closely at the military ‑‑ actually does a much better job of setting standards, and that universities, because of academic freedom, are very reluctant to do that same kind of standardized training.

I'm not convinced that it would suddenly stifle all science if we had standard notebooks and a few other things like that.  So I personally think in this area, at least, what industry does in training, and probably the military, is very good.  You need that basic training before you can go on and do everything else.

So I don't disagree with that.  I don't know if others want to talk about that.

CHAIR GUTMANN:  Peggy?

DR. MASON:  I want to first echo one thing that David said, which is that institutional investigations have an inherent conflict of interest, to protect their faculty members.  And what we get back from the institutions varies from ‑‑ I don't know a polite word for it ‑‑ to fabulous.

(Laughter.)

DR. MASON:  It would be just terrific if we got some help there so that there were standards for how they have to address their inherent conflict of interest when they evaluate an investigation against their own faculty member.

I also wanted to talk a little bit to the hypothesis as bias.  I actually didn't hypothesize.  I did not drink the hypothesis Kool‑Aid.  I think that, firstly, my personal hero in biology is Charles Darwin; he never had a hypothesis.

I think discovery is a big deal.  I think that the way to engage scientists is, instead of making them scared of being accused, making them scared of what happens if they don't figure out how bad irresponsible conduct, irresponsible building of an experiment, can actually be ‑‑ so I was teaching in Paris when a friend of mine sent me this article which basically showed that they tested for significance with every data point.

And between 25 and 30 data points, there was a significant difference in something on which there cannot be a significant difference:  the age of these two different cohorts that were not different.  And that scared the bejeezus out of me, and I promptly had a lab meeting.

And I actually think that that's the thing to do, to get scientists to say, look, if we don't construct this experiment correctly, if we don't get the right numbers in the right groups, we're not going to get the right answer.  And we're going to be part of this problem where science is not trustworthy, it's not reproducible, et cetera.  So I want people not to be scared of us, but scared of the alternative.
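The statistical trap Dr. Mason describes ‑‑ testing for significance after every new data point ("peeking," or optional stopping) ‑‑ can be made concrete with a minimal simulation sketch.  This sketch is illustrative only and is not drawn from the article she mentions; the group sizes, alpha level, and use of a two-sample t-test are assumptions chosen for demonstration.

# Minimal sketch (assumed parameters): repeatedly test two groups drawn from
# the SAME normal population, checking a t-test p-value after every added
# data point and stopping at the first p < alpha.  Any "significant" result
# is a false positive, yet peeking inflates the rate well above alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05            # nominal significance level (assumed)
n_max = 30              # up to 30 observations per group (assumed)
n_experiments = 2000    # number of simulated studies

false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(size=n_max)      # group A: standard normal, no true effect
    b = rng.normal(size=n_max)      # group B: same distribution as group A
    for n in range(3, n_max + 1):   # test after every added data point
        _, p = stats.ttest_ind(a[:n], b[:n])
        if p < alpha:
            false_positives += 1
            break                   # stop as soon as "significance" appears

print(f"nominal alpha: {alpha}")
print(f"false-positive rate with per-point peeking: {false_positives / n_experiments:.2f}")

Run this way, the observed false-positive rate comes out several times the nominal 5 percent, which is the point behind the spurious age difference Dr. Mason recounts.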

CHAIR GUTMANN:  Well put.  David?

DR. WRIGHT:  One of the mantras in the responsible conduct of research is that responsibly conducted research is also more productive research.  You hear people say that.  If somebody could demonstrate that experimentally, it would be huge.

It would be no longer a distraction, as many investigators think, to do that training and to design work more carefully, but it would be an inducement to productivity.

CHAIR GUTMANN:  The problem with recommending that that be tested is the default there is, go on with irresponsible research until it's proven that responsible research is more productive, which is a totally unacceptable view since we don't have research that shows that irresponsible conduct is productive.

So I think it would be great to demonstrate that.  But we know that irresponsible research is wrong.  And that is at least an important starting point for a bioethics commission or a responsible scientist.

Please.

DR. WRIGHT:  I think there may be some other spots on the continuum between irresponsible science and optimally responsible science.  I think you could take baseline science and then optimally responsible science and compare the differences and see if the latter is more productive.

CHAIR GUTMANN:  So I think what would be an important way forward, consistent with what you've just said, is to make sure that requirements, whether they be educational or training, are likely to be productive.  Right?  You do not want to saddle any researcher with requirements that don't, either on the face of them or through evidence, help in producing good science.

And our regulatory parsimony principle, which is part of doing good ethics and good science, is meant to address that.  And I think that's what you're suggesting and we can very strongly get behind.

So when surgeons now in many hospitals are required to have checklists that they go through ‑‑ that takes some time; you have a checklist ‑‑ there is evidence that that minimizes mistakes.  And those are the kinds of requirements that not only are good to institute but that you can get behind and get scaled up.

Nick, I'm going to give you the last word, and then we're going to adjourn for lunch and reconvene at 12:45.  But Nick, go ahead.

DR. STENECK:  There is minimal evidence that paying attention to integrity works.  I just want to say, if you think bioethics research is under‑funded, we probably get 1 percent of the money for research integrity that bioethics gets for funding.  So we're even more under‑funded.

But it's known, for example, that if you have an overly bureaucratic IRB, people break the rules more often.  So that's a very simple one.  With the climate survey that's now slowly starting to be adopted, the few institutions that have looked at their own results ‑‑ and MSU is one of them ‑‑ find that their better departments actually had better climates.

So if we could do more like that, I think we could actually develop the evidence that shows that if you run a good department and you have a good climate and other things, your integrity is going to be better and your scientific results are going to be better.

CHAIR GUTMANN:  On that note, I would love you to share with us that evidence.  On that note, we will adjourn for lunch and reconvene at 12:45, but not without thanking Peggy, David, and Nick for a really fabulous panel.

(Applause)

This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.