Transcript, Meeting 16 Session 4

Date

February 10, 2014

Location

Washington, D.C.

Presenters

Ushma Neill, Ph.D.
 
Director, Office of the President
Memorial Sloan-Kettering Cancer Center
Editor at Large
Journal of Clinical Investigation
 
Stephen J.A. Ward, Ph.D.
 
Professor and Director
George S. Turnbull Center
School of Journalism and Communication
University of Oregon-Portland
 
Timothy Caulfield, LL.M., F.R.S.C., F.C.A.H.S.
 
Canada Research Chair in Health Law and Policy 
Professor in the Faculty of Law and the School of Public Health University of Alberta 
 
Eric Racine, Ph.D.  
 
Director, Neuroethics Research Unit 
Associate Research Professor
Institut de Recherches Cliniques de Montréal 
Associate Research Professor, Department of Medicine 
Université de Montréal
Adjunct Professor, Department of Medicine and Department of Neurology and Neurosurgery, McGill University

Transcript

VICE CHAIR WAGNER:  And once I get my glasses on, I can do my job.  Who are we missing?  We'll wait for Raju and Chris; they'll show up soon.  So let me go ahead and do the introductions, anyway.

We're turning now in this section to the topic that was mentioned earlier, the communication of neuroscience research by scientists and journalists.  We've got quite a panel.  I've had a chance to chat with several of them.  And so we look forward to hearing from all of you.

The first is Dr. Ushma Neill.  That was close, I hope.  She will start off, and is the director of the Office of the President at Memorial Sloan‑Kettering Cancer Center, serving as chief of staff to the center's president and CEO, and editor at large for the Journal of Clinical Investigation, having previously served as the journal's executive editor and as editor at Nature Medicine.

She's also chief editorial consultant to the newly launched journal, Molecular Metabolism.  Welcome.  We're interested to hear from you.

DR. NEILL:  Thank you very much for the chance to address this very august group.  If you don't mind, I will read my comments so that I can make sure to keep to time.

So science and research are built on a foundation of integrity and truth, and over the course of my 13 years as an editor, I've found that the overwhelming proportion of scientists do operate under these principles and present only proven and replicable data.

But given the intense pressures scientists face in a harsh funding climate, there's an ever‑present need to produce results, and specifically to produce positive results that support a central hypothesis and result in a high‑profile publication.

Unfortunately, this intense pressure to publish or perish has led a few unscrupulous scientists to present doctored data, and this has called much of the enterprise into question ‑‑ Diederik Stapel, Anil Potti, and Hwang Woo-Suk are just a few of the recent scientific offenders who've made headlines.

There's also been significantly increased media attention to scientific malfeasance, with the New York Times and the websites Retraction Watch and the now‑defunct Science Fraud following every journal's correction and retraction with a somewhat breathless reporting style.  And I admit that I'm not above the rubbernecking.  I get the RSS feeds.  I read the coverage as well.

This increased attention to scientific fraud has therefore necessitated an increased level of scrutiny at the journal.  Journals can no longer act as though all data that we receive is unimpeachable; as we have to stake our journal's reputation on the quality of the articles we publish, we therefore need to verify the veracity of what we publish.

What I want to show you are some of the methods that the journal I represent here today, the Journal of Clinical Investigation, has used to verify what we publish.  Many of the policies and procedures we put into place are designed to find errors that have been introduced, some by design, some by carelessness, before they are published.  Other procedures were developed to be reactive to the different ways that people have managed to get around our earlier‑imposed roadblocks.

First let me mention that in 2013, the JCI received nearly 4,000 new submissions and subsequently published 410 articles.  We screen only those manuscripts that have been accepted for publication.

During the process of preparing manuscripts for print and online publication, we gather the high‑resolution original versions of all figures and display items.  We test the text for plagiarism.  We use tools from the Office of Research Integrity to determine which figures have been manipulated.

Just to make sure that we're all on the same page in terms of science ‑‑ I'm sorry, this is not going to project particularly well over there ‑‑ I want to start out by explaining western blots.  This is a tool that scientists use to measure protein.

In this example, what I'm showing you here, the different columns correspond to different test conditions, and the row or protein of interest is at the top, with a so‑called loading control in the lower row.  The loading control is a smaller, irrelevant protein, in this case alpha Tubulin, whose relative amount would remain consistent regardless of the experimental condition.

A loading control allows you to verify that the same amount of protein was put into each lane; if the relative amounts in that row differ, you can tell that different amounts of protein were put into the columns to begin with.
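[Illustrative aside: the loading‑control logic described above can be sketched in a few lines of code.  This is not a JCI or ORI tool; the lane names, intensity values, and the normalization helper are hypothetical.]

```python
# Illustrative sketch only: normalizing band intensities to a loading control.
# The lane names and densitometry values below are hypothetical.

def normalize_to_loading_control(target, control):
    """Divide each target-band intensity by its lane's loading-control intensity.

    If the control varies a lot across lanes, unequal loading is suspected.
    """
    return {lane: target[lane] / control[lane] for lane in target}

# Hypothetical densitometry readings (arbitrary units) for three lanes.
protein_of_interest = {"untreated": 1200.0, "drug_A": 2400.0, "drug_B": 2300.0}
alpha_tubulin       = {"untreated": 1000.0, "drug_A": 2000.0, "drug_B": 1050.0}

normalized = normalize_to_loading_control(protein_of_interest, alpha_tubulin)
print(normalized)
# Here drug_A's apparent doubling disappears after normalization (its lane was
# loaded with twice as much protein), while drug_B's increase survives.
```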

We use a set of tools from the ORI to screen these images.  These tools can detect abrupt changes in background or pixelation.  What was not appreciated in the original version at first glance is that there's a splice between two of the columns, and yet there is no splice in the loading control.  Therefore, these two rows could not have matched in terms of the experimental conditions.
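[Illustrative aside: the kind of screen described here ‑‑ flagging abrupt changes between adjacent columns of pixels ‑‑ might be sketched as follows.  This is not the ORI software; the threshold and the synthetic image are made‑up examples.]

```python
# Illustrative sketch only: flag abrupt column-to-column jumps in background
# intensity, the kind of discontinuity a splice between lanes can leave behind.
# Not the ORI screening software; the threshold is a hypothetical example.
import numpy as np

def suspicious_splices(gray_image, threshold=30.0):
    """Return column indices where mean intensity jumps sharply.

    gray_image: 2-D array of grayscale pixel values (rows x columns).
    """
    column_means = gray_image.mean(axis=0)   # average intensity per column
    jumps = np.abs(np.diff(column_means))    # change between neighboring columns
    return np.where(jumps > threshold)[0]    # columns just before a sharp jump

# Tiny synthetic "blot": uniform background with an abrupt step at column 4.
blot = np.full((10, 8), 50.0)
blot[:, 4:] = 120.0
print(suspicious_splices(blot))   # -> [3], the boundary before the step
```

A real screen would of course work on the full‑resolution image and combine several such filters, but the underlying idea is the same: a genuine blot background varies gradually, while a paste or splice leaves a sharp seam.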

On this next slide, which perhaps we can appreciate better on this monitor than that one, there are two further examples.  Even without the ORI screening software, you can see two of the columns were pasted on top of the underlying bands, and on the far right the same lane was pasted over the background three times.

In the lower example, you can see that all the lanes have been pasted together.  But what you might not appreciate at first glance is that the same lane was used three times.

Splicing lanes together is, in and of itself, not always a crime.  There are times when a set of samples that is irrelevant to the overall experiment is run in between the actual lanes of interest.

On this slide you can see columns and rows with a space in between them, or a line, indicating that there were columns that were removed.  We also ask authors to make a note in the figure legend indicating that the spliced lanes were noncontiguous but from the same blot run at the same time.

Given how much time we spent policing the splicing of these immunoblots, we introduced a policy that for every article submitted in revised form, the authors must also upload a supplementary file, for the editor's eyes only, that contains the full uncut film of the blot featured in the figure.

The supplement must make clear where the lanes originated, as you can see in this example.  This apparatus also allows an editor to investigate and make sure that what we see in the final figure is verifiable.

This practice has cut down on the number of queries we've had to make after a manuscript is accepted, and happily has reduced the amount of time between acceptance and publication.  This step discourages authors from submitting something fraudulent in the first place, given that they will have to produce the originals before a manuscript is accepted.

Recall, however, that the JCI encourages revision of around 430 articles a year, and this step necessitated the hiring of an additional full‑time employee, in addition to the non‑trivial amount of time that the editors themselves have to spend looking at these figures.  Focusing on integrity has its costs.

The ORI software is also useful in detecting edges and figure manipulation in images.  In this example, you can see how an author tried to obscure red or green areas of intensity in these cells.  The ORI filters allow you to see at a glance when someone has used the "Erase" tool; abrupt pixelation changes are immediately apparent.

Unfortunately, the ORI tools cannot catch everything, and some instances of manipulation are only caught later.  A concerned reader pointed out this issue to us after the article had been published.  The authors clearly used different portions of the exact same fields to represent different conditions.  After an extensive investigation conducted by their university, the article was retracted.

In cases like this, where the figure manipulation was deliberate and extensive, we must refer these allegations to the authors' institutions.  Journals do not have the resources or authority to conduct investigations from afar.

However, I do agree with Dr. Mason's earlier contention that some institutions provide a whitewash versus others that provide a very comprehensive report.  We take their findings into consideration when we are making our final decisions about correcting the scientific record.

I want to make sure to mention that not all misconduct or fraud we monitor is related to doctored data.  For example, there are countless disagreements about authorship, and we will only agree to publish once all authors have agreed on an author list and an order.  And when there are disagreements, we ask the authors to sign a document with the final title and author list that they themselves have negotiated.

Speaking of authorship, this manuscript was retracted when someone listed as an author indicated that the first time he saw this manuscript was when it was published on our website.  The senior author had never asked for permission to use the mice generated by the other author, nor had he filed a materials transfer agreement or made other arrangements.

Additionally, the senior author forged the other author's signature on our copyright transfer form that we were using at that time.  He assumed that the other author would be proud and happy to have a JCI article.

Since that point, we have moved to a system wherein each author's email is obtained at the time of submission, and all are contacted to indicate a manuscript has been submitted on which they are an author.  Notification also happens again at the revision phase.

And if the manuscript is accepted, each author must then electronically sign a form transferring copyright and indicating that their portion of the work was done ethically and in keeping with the JCI's editorial policies and practices.

We often use the fact that authors have agreed to our editorial policies and practices to broker other disagreements that arise after publication, disagreements that also fall under the purview of responsible conduct.

One prerequisite of publication is that authors must make available, at cost, any newly described reagents, constructs, or animals that are discussed within the publication.  If they do not, we have often had to threaten withdrawal or retraction until compliance is achieved.

We also require that the structure of any newly synthesized compounds used in the manuscript be revealed.  Clinical trials data must be deposited in ClinicalTrials.gov, and any microarray data needs to be entered into a MIAME‑compliant database.

Failure to disclose conflicts of interest has also arisen on multiple occasions, and after the fact, corrections must be made so that the readership can appropriately interpret any conclusions given in a particular study.

Problems with disclosure also led to the retraction of this article.  Soon after publication, the IACUC, or Institutional Animal Care and Use Committee, from the senior author's institution contacted me to ask for copies of the original submission of the manuscript, as the author had been on probation for mistreating his experimental animals and was not authorized to do the experiments that were listed in the published manuscript.

In fact, there was a full exposé within the Seattle Times that came out coincident with the report that the university gave to the JCI.  The author had made a declaration of IACUC approval within the methods section, and we trusted that it was in earnest.

Given this unfortunate incident, we introduced the policy that submitting authors must specifically click on a button that is otherwise set to "No" when we ask them if they had their experiments approved by an institutional review board or an IACUC, such that we specifically have their agreement.  And we also list a field for them to give a declaration.

Here on the lower portion of the screen, you can see the screen that we use at the JCI for tracking manuscripts.  You can see the indication of approval takes center stage in red, and I also put in an arrow so you can see where these authors uploaded their full, uncut gels in this revised manuscript.

When I give talks about preparing manuscripts for maximal impact, I often end with stridently worded advice to keep data safe, to keep lab notebooks updated, and to save copies of well‑labeled figures in multiple places.  Excuses of lost USB drives, laboratory moves, and labeling errors are the modern‑day equivalent of, "The dog ate my homework," and are not acceptable in today's digital age.

I also advise PIs regularly to look at lab notebooks, verify every data point themselves, and sometimes ask a second person to verify experiments independently. As scientists we’re trained to be meticulous, to use proper nomenclature, to “dot every I and cross every T” when we're designing experiments, and PIs must remain vigilant and meticulous about the data that goes out under their name once that hypothesis has been proven.

And the journals, too, must take pains to verify that what we publish is credible.  I hope you're able to appreciate what the JCI, at least, is doing to ensure that our content is trustworthy.  Thank you.

VICE CHAIR WAGNER:  Thank you, Dr. Neill.

Next we hear from Dr. Stephen Ward, professor and director of the George S. Turnbull Center, the Portland base of the University of Oregon School of Journalism and Communication.

Previously he was the first James E. Burgess Professor of Journalism Ethics and founder of the Center for Journalism Ethics at the School of Journalism and Mass Communication at the University of Wisconsin-Madison.

He also has been director of the Graduate School of Journalism at the University of British Columbia in Vancouver, Canada, and is the founding chair of the Ethics Advisory Committee of the Canadian Association of Journalists.

Dr. Ward is the author of many books and articles on journalism ethics, and serves on many editorial and advisory boards for ethics organizations and for journals on media ethics and science.  Thank you for joining us today.

DR. WARD:  Well, thank you again.  It's an honor to come here to speak before you.  I'm going to very quickly tell you where journalism ethics came from and the model that it was based on ‑‑ an old media environment that now only partially exists.

I'm going to tell you the new environment and how that has created complete turmoil in my field of journalism ethics.  And finally, I'll try to indicate implications for science journalism, hype and so on.

First of all, you can be fairly precise and aware of where journalism ethics comes from, at least in the United States ‑‑ I would say 1921/1922, with the development of a code of ethics by the now‑called Society of Professional Journalists.

I present that as the beginning of professional ethics because it was explicit ‑‑ that is, it was written down; it was collaborative, and agreed to by many journalists across the field.  And for that reason, we have ‑‑ of course, ethics in journalism goes back to the 17th century, but we have a movement towards a professional model.

And this is the model that we ended up with.  It comes from the creation of a mass commercial press, and the power that press gained in the early 1900s.  Eventually, the press became a virtual monopoly on information and advertising for the public, sitting between a public that was passively dependent upon the media and what was happening in the world.

The really good question is why at this point in time would journalists start to create professional organizations and codes of ethics.  The answer was, severe doubt, skepticism, public criticism of those mediators, that priestly class of journalists between the world and the public.

At this point in time, everyone is concerned about the commercial model of the press.  We still are today, press barons and so on.  In response, what was created were these codes of ethics in almost every state in the land, nationally, and elsewhere in Canada, and so on.

It was a very professional ethics, and the familiar properties and principles, I'm sure, are not new to you.  If you're going to be in the middle between the public and the world, you'd better be impartial.  You'd better be objective.  You'd better be independent ‑‑ editorially independent ‑‑ and conflicts of interest, hopefully avoid them.  You'd better minimize harm, and you'd better be accountable.  And you can find those in almost any code of ethics today.

Now, what's happened, of course, on all of that is that the new model, meaning a chaotic, expanding universe of media, simply means that the professional press and this code of ethics, this approach to journalism, is diminished, questioned, and so on.

I can't tell you how different it is to teach journalism ethics today, or even to do journalism and talk about journalism or science communication.  Perhaps in the old days, if you were a very bad teacher, you would take the four principles of the Society of Professional Journalists' code, have the students understand the principles, take some cases, and say, well, here's how you apply the principles.

You can't do that.  Every principle will be challenged by your students in that class, including objectivity, obviously, including impartiality, including independence.  And at the same time, we have citizens who are practicing journalism who don't come from a professional background at all ‑‑ not that that's necessarily required, I might add in there.

There are two macro trends.  We have a mixed news media, mixed in the sense of who practices it.  We have amateurs, citizens, professionals, and whatever.  Mixed in the sense of types of journalism styles and formats, as you all know ‑‑ bloggers, social media, and so on.  But the other aspect, which I won't touch on today, is that it's also global, with global implications.

So these trends that have come along have left us where?  What is the impact?  It means the model that we conceived of journalism and its ethics in the past is under severe scrutiny because the role of the journalist in that media sphere has changed.

We have, obviously, issues of identity ‑‑ who is a journalist.  Right?  We have new kinds of journalists who practice different forms of journalism, and the new practices mean new values.

If I'm going to live blog ‑‑ live blog a court session; I never had a chance to do that in journalism, and I'm not sure I wanted to, either ‑‑ the values of accuracy and speed and verification all come into play here.  How do they operate in a situation like that?  New values meet old values, of course; most of the new media online is much more partial, it's more partisan, so impartiality is questioned.

And yes, where is journalism ethics going?  Let me give you some actual concrete examples of work that I'm involved in that will tell you just how somewhat confusing this whole field is.  I'm looking down at my time here.

Let's take the venerable Society of Professional Journalists, who wanted to change it to Society of Professional Journalism, or get rid of the word "Professional" recently, because they weren't sure that would do.

But they're looking at developing a new code of ethics, revising their code of ethics.  And I'm on one of the committees that's doing this.  But the problem is that there's complete disagreement ‑‑ not complete disagreement, but extensive disagreement between the people who are doing the code writing and the people who are doing the journalism.

Some want to go back in time ‑‑ sort of, let's just stay with the old principles that we have.  Others think that a complete revolution in principles has to come into being ‑‑ to the point, for example, where one recent book on journalism ethics questioned the very need for a principle of independence, of editorial independence, or at least not the way it was understood, because many, many people who write online and do forms of journalism don't work for professional organizations which enforce conflict‑of‑interest and independence guidelines.  It doesn't apply, according to this view.

Let me give you another example.  I'm also helping the Online Newsroom Association, or News Workers Association, develop their codes also.  Listen to this approach.

There's so much dissension among journalists out there that the people who are leading this charge do not believe that there's any sort of universal content you can put into the code and tell everybody to abide by.  There's too much disagreement on that.

So what are we going to do?  We're going to be procedural.  We're going to tell people how to think about the issues and develop their own codes, so each association can develop their own codes using our so‑called building blocks, the tools.

So it's things like think about this, think about that, consider this, consider that, but staying away from any specific universal content in the code.  That's how much things have changed in this particular field.

There are, of course, types of journalism that I just want to point out.  There's now something called brand journalism.  Brand journalism is where corporations get around, skirt the mainstream media, have their own websites, and do journalism articles on their own websites.  It could be Red Bull.  It could be Cisco Systems.  It could be anything.  All right?

They hire journalists to work for them.  There are some editorial restraints, like don't trash the owner and don't trash the corporation you're writing for, and don't highlight the competition ‑‑ editorial restraints that I would never have lived with when I was a journalist, but it's part of this brand journalism.  Is this independence?  Is it any worse than the independence in other parts of the media?

Or consider another type of journalism that's going on ‑‑ agenda‑driven journalism, as best as I can call it right now.  One example is there are quite a few right‑wing groups, Libertarian groups ‑‑ and I'm not using that pejoratively ‑‑ who are using money to set up websites at statehouses around this nation to report on issues from a perspective, a perspective of small government, lower taxes, and so on, a deliberate political perspective.  But they claim they're objective, they're impartial, and they're independent because they report the facts like everyone else.

So the whole issue of what is independence in this era is really up for grabs.  And I have my own views on that, but I'm just trying to show you the problem.

With respect, in the little time I have left, I think what science has to do ‑‑ and I'm willing to give you a few more comments on what I think science can do later, because I'm running out of time ‑‑ is, first of all, that the communicators have to reimagine what science journalism can be in this completely new environment.

We are never going to, and I'm not sure I want to, control science journalism and what the people get in terms of information.  What we need to be is a counterbalancing force, a sort of coalition of the sane and the rational ‑‑ I'm being facetious ‑‑ a coalition of groups within science, mainstream, online, NGOs, whatever, where we develop a system of writers and informers who at least can provide some segment of the population a place to go where it is reasonably reliable, reasonably participatory, reasonably engaged, and so on and so forth.  I can say more about that.

In terms of hype, as far as I'm concerned, there is even more hype today, and it's certainly not going away.  It is an intrinsic part of the public sphere that I just described.  So we have to do something about that.

And in 47 seconds, I think first we need a very rigorous calling out of people who do exaggerate and hype.  And that means certain institutional media structures, I think.  I think we also need media literacy so that the public themselves can detect bogus phony‑baloney.  And we teach it across the disciplines of our universities, not just in journalism schools.

And also, we develop new media spaces that are reliable, that are not hyping, that are non‑hypeable.  And it's not everything ‑‑ the hype will still continue.  But I think it's better than what we've got now.

So hopefully that gives you some idea of the scope of the ethical issues that we're dealing with today.  Thank you.

VICE CHAIR WAGNER:  Thank you.

Next we have Professor Timothy Caulfield, who is a Canada Research Chair in Health Law and Policy and a Professor in the Faculty of Law and the School of Public Health at the University of Alberta.  He is a Fellow of the Trudeau Foundation, and a Health Senior Scholar with the Alberta Heritage Foundation for Medical Research.

Professor Caulfield has been involved with a number of national and international policy and research ethics committees, including the Canadian Biotechnology Advisory Committee, Genome Canada's Science Advisory Committee, the Ethics and Public Policy Committee for the International Society for Stem Cell Research, and the Federal Panel on Research Ethics.

He is a Fellow of the Royal Society of Canada and the Canadian Academy of Health Sciences.  Welcome, Professor Caulfield.

PROFESSOR CAULFIELD:  Well, thank you very much, and it's a real honor to have the opportunity to speak with you today on a subject that I feel very passionate about, and that is the topic of science hype.

Now, I was told that I should tell you about the sources and give some hints to solutions.  And that's exactly what I'm going to do.  I'm going to start, though, by saying that it's important to note that hype pushes in both directions.

I think people think of hype largely in relation to hyping benefits.  But, you know, in the world of ELSI, we have something that myself and some other colleagues, including Bob Cook‑Deegan, have called ELSI hype.  ELSI researchers are under the exact same pressures, and that kind of hype can also be detrimental to policy discussions.

But I am here largely to talk about the positive kind of hype that we always see.  And this is becoming a very sexy topic, as Stephen has pointed out.  You are starting to see this in the public sphere more and more.

You're starting to see more skepticism in the public press, but also amongst the general population, about what science has to tell us.  And I think that's very important so I think this is a very timely discussion.

Okay.  To the sources of hype.  Well, there are so many, it is very difficult to enumerate them all, and they all play together, as I'll say in a moment.  Myself and my colleague Celeste Condit wrote a piece where we called it a hype pipeline, and that's not entirely accurate, which I'll come back to at the end.

But really, there are so many forces that work together to create, to hype, to spin what we hear about science and what we hear about research, particularly, I think it's fair to say, in the biomedical realm, from career pressure to funding pressure to media spin, obviously, to vested interests.

Now, all of these, as I say, they build on each other.  And I'm just going to touch on a couple of them, some of the research for a few of these areas.

The research itself, obviously, is hyped.  The moment it leaves the laboratory, there's some evidence that research is hyped.  And again, there's a lot of very interesting recent research, whether you're talking about medical studies or whether you're talking about animal studies.  There's been interesting empirical research that shows the degree to which those kinds of studies are hyped.

And also, there's some interesting research that shows the more competitive a research environment, such as the United States, the more likely you're going to have hyped research.  And this was interesting work done in the context of social science research, humanities research, again showing a high degree of hype and a lack of ability to replicate the work.

So you have the research from the institutions being hyped, and then it goes, of course, to the writing up the papers.  This is interesting research that shows that 40 percent of abstracts ‑‑ now, keep in mind, who writes the abstracts?  The authors write the abstracts ‑‑ 40 percent of the abstracts have some degree of hype in them.

Then it goes to the press release, and there are lots of studies on press releases showing the degree of spin in press releases.  And these guys had a relatively conservative definition of spin.  And you see that half of the press releases have some kind of spin in them.

And other research has shown this, again showing that you have this spin that occurs, and that that spin is then subsequently picked up in the popular press.

I don't need to say a lot about the popular press.  Everyone here knows that there is a massive amount of hype here.  And it's no surprise.  Right?  Journalists have got to, as Roger Highfield has said, justify their existence, not only to the public, but to their editors.

They've got to sell the story.  Right?  And as a result of that, you get all these ridiculous headlines.  And everyone here could put up probably 100,000 of them, from justifying Justin Bieber ‑‑ there's a little Canadian call-out for you ‑‑ to genes causing virtually everything.  Right?

So you have these kinds of headlines, and that is to some degree a direct result of all of the forces that I've just described.  In addition to that, as you guys know, the headlines themselves are even further hyped.  And they're not written by the authors.

I think it's important to note not to blame the media for everything.  In fact, some of our own work and work that other people have done has shown that sometimes the media ‑‑ well, I shouldn't say "sometimes."  Our work has found that the media is actually relatively accurate in what they say.

Many of the errors, and I'm curious if my colleagues agree with me, are really what I call errors of omission.  They leave out important details ‑‑ there's not room ‑‑ whether it's about conflicts of interest or risks, et cetera.  So I think we have to be careful about the degree to which we point our finger at the media.

The other very important thing, and this dovetails very nicely on this morning's conversation, we have to remember this is a systemic problem.  There are fantastic pressures on researchers today, commercialization pressure being just one of them.  In my own country, it's become intense.

Now, this is my very favorite example.  Maybe you've heard this before.  This is President Obama's State of the Union address in 2011, when he said that this is our "Sputnik moment."  He wasn't talking about going to the moon.  He wasn't talking about even just doing good research.  He was talking about using research to drive the economy.  All right?

That is a lot of pressure.  He has said that we'll invest in biomedical research and create countless new jobs for our people.  All right?  I mean, that is real translational pressure.

And that also leads to another kind of hype, and that's the hype that's associated with partnering with industry and commercial interests.  Now, I'm not saying that that is inherently wrong.  But that invites a further kind of hype in the hype pipeline.

And then we've already heard about this a little bit, and this is the tremendous career pressure that has always existed for researchers.  And study after study has shown that that can lead to hype.

Indeed, recent research has shown again that the more competitive an environment, the more the pressure to publish exists, the more likely bias is ultimately going to find its way into the actual published research.

I'd also like to talk a little bit about some of the underplayed sources of hype that people often forget about.  I'm sure everyone here has heard the sources of hype that I've already mentioned, but here are just a couple of my favorites, something that's been called the white hat bias.

This is a tendency for journalists to publish and to report on studies that are perceived to be in the public interest or public good or noble.  All right?  So you have this tendency ‑‑ now, this has been done in the context of obesity research, but you do have this tendency to publish research about results that seem to support the public good.

You also have special interest groups ‑‑ and again, I'm not saying that in a pejorative manner ‑‑ but you have special interest groups that push particular agendas.  Again, that can result in a kind of hype.

And then finally, you have the new media and its impact on research, whether it's Twitter ‑‑ I've been tweeting about this meeting already, helping the hype situation.  You have Twitter.  You have Facebook.  You have all these other sources of communication that further fragment the knowledge market, but also serve as a source of hype.

Another one that's underplayed and seems frivolous, but it's not at all, is the role of celebrities.  Celebrities can have a tremendous impact on how the public perceives the system ‑‑ our own work on the Jolie gene, as it's come to be known, found that in 68 percent of the media reports of the Jolie gene there was no mention of the rarity of her condition, a really good, tangible example of how a celebrity can have an impact.  And of course, that can have a real impact on the science, but also on how the public perceives the value of that science.

So what are some of the ramifications associated with this?  I'm going to do this very quickly.  There's so many, you guys.  Premature implementation of technologies; perhaps the inappropriate exploitation by the market of science; poor research and funding policies; some of our own work and other research has shown that hype actually impacts how science is funded and funding decisions.

It can have an impact on the efficiencies of research.  Now, I'm a believer in science; I think that the scientific method ultimately prevails.  But hype slows down that replication process.

It can have an ill effect on policy by doing a poor job of informing ELSI issues.  In addition to that, it may result in backlash.  And this is still speculation, but as I said earlier, we're starting to see more and more headlines like this, like whatever happened to stem cells?  We've heard about these miracles; where are they?  Where are the miraculous genetic cures?  Where are the miracle drugs?  Et cetera.  And these are in the popular press.

Now, some of the solutions here.  I called it a pipeline, a hype pipeline, but it really is a cycle of hype.  All of these forces feed upon each other.  They're all complicit collaborators, and in the short term, everyone benefits.  Right?  No one is really harmed.  So that can be very problematic and make it very difficult to deal with.

So I'm just going to put up some of the hype solutions.  We've already heard about some of them, and there are too many to go through.  I do think we need to think about having real changes to the way we incentivize research, and I also think we need to face the impact of aggressive translation pressure.  In addition to that, we need to think about new ways of promoting publication, and I also fully agree with education and media training.

I'd like to end with this rather grand statement, and I believe it's very true.  I think that increasingly, in a time when science has become such a huge part of our lives, producing and protecting good science is one of the most important functions a government can perform in a liberal democracy.  And I really believe that, and I fear that we may be failing at this task.

So I'll end by thanking my wonderful team and all of the institutions that I hype my research for.  And I look forward to your questions.

VICE CHAIR WAGNER:  With no hyperbole, huh?  Thank you so much.

Dr. Eric Racine will conclude the panel.  He is the director of the Neuroethics Research Unit and Associate Research Professor at the Institut de Recherches Cliniques de Montréal.  He holds academic appointments at the University of Montréal and at McGill, and is a new investigator in the Canadian Institutes of Health Research.

Dr. Racine is currently associate editor of the Journal of Neuroethics, a member of the editorial board of the AJOB Neuroscience.  He's a member of the advisory board of the Canadian Institutes of Health Research, Institute for Neurosciences, Mental Health, and Addiction, and a member of the Dana Alliance for Brain Initiatives.  Welcome to you.

DR. RACINE:  Thank you very much.  Thank you for the invitation.  Good afternoon, and it's really a humbling moment in my academic life to be here.  Honestly, what I'll try to do is speak to the issue of public understanding of neuroscience and the media representation of neuroscience from mostly an ethics standpoint, and an ethicist who actually conducts quite a bit of empirical research.  And I think you'll see signs of that as I move ahead.

So basically, I've structured my few notes here around three questions that I consider important.  First of all, why does this issue of public understanding of neuroscience matter, actually, from an ethics standpoint?  It could be important for funding, for research; why does it matter for ethics?

Second, what are some potentially problematic aspects of media coverage and science communication in neuroscience?  I'll try to give you a glimpse at this based on research we've conducted.

And finally, what are solutions?  And I think I'll concur with many of the previous comments on this.

So first of all, sorry, this is not coming out tremendously well, but the first question, why does it matter?  And I think we have to look at both sides of this question.  And basically, one answer is, it doesn't really matter; from a descriptive and normative standpoint, there are potentially many reasons why we shouldn't be concerned too much about this because, for example, it could be viewed as outside of the purview of neuroscientists, for example.

Neuroscientists are not equipped to deal with issues that are of relevance here.  There's not enough evidence showing the existence of problems.  There's nothing impactful that could be done to remediate any problems.  Finally, one could argue that neuroscience is no different than other areas of biomedical science.  So why would this matter?

A response from a descriptive standpoint would be to stress that research now encompasses knowledge transfer, so we are not construing science as narrowly as we used to.  The public also expects a return on investment, and wants to know about and is very curious about the brain and the neurosciences.

There could be some impactful results in neuroscience, and there is some evidence, even if it's suboptimal, that there are significant challenges.  And I'll try to show that.

From a normative standpoint, one can consider that communication is actually an act like many other acts and can be analyzed from an ethics standpoint.  Whether the analysis is virtue‑based, consequence‑based, or principle‑based, it's still an act.

In response to the idea that neuroscientists would have to carry the full weight, interdisciplinary models can be developed.  Solutions can involve multiple stakeholders.  And one could argue that science can actually contribute to and enlighten the public sphere.  So again, I concur with previous comments.

So I think it's important to look at both sides of the equation.  In the background, I think it's important to keep in mind that one of the underlying issues that neuroscience stages is the conflict between the manifest and the scientific image of the world, to paraphrase Wilfrid Sellars, the philosopher.

Neuroscience, I think, stages this conflict of how we view ourselves based on our own self‑understanding versus how science and neuroscience depict us.  We talked this morning about determinism, and this shows up in media coverage of neuroscience and in the power that we think applications such as fMRI will have.

But you'll also find, in the words of neuroscientists, important claims that neuroscience will actually change how we view human nature.  It's going to provide a new metaphysical mirror, for example, Joshua Greene argues.

So that sets the stage a bit.  But what are some potentially problematic or challenging aspects of communication?  I'm pulling from work here that we've published, and I'm just featuring these references in case the commission staff want to look at the details.

But basically, if we try to boil it down to a few key observations, we've found, for example, that reporting practices were suboptimal; that a balanced tone in media coverage was not predominant ‑‑ this is all neuroscience‑related media coverage; and that there were important shortcomings in medical and scientific explanations that could be remediated, potentially.

Whenever there's an ethics controversy, multiple sources of information come in.  So it's not a matter of viewing the media as only a channel for information; it becomes very quickly a forum where different voices come in and different stakeholders.

Media coverage can lead to hype and misunderstanding.  Media coverage can have an impact on public health behavior.  And public understanding, interestingly, has been found in our research to be a key issue; when you interview or talk to researchers or clinicians, it's acknowledged to be an issue.  But when you look at the available guidance from an ethics standpoint, there's not much to guide them.

So I'll just try to flesh out very quickly some of these observations.  I've picked a couple.  This slide shows, for example, that when you look at media coverage of neuroscience and consider the need for replication and the reporting of the small Ns in many of the fMRI studies, that's not information you'll find frequently.

Another key observation was a couple of years ago when we encountered a new form of interpretation of neuroscience research.  We dubbed that neuro‑realism, and basically, to again paraphrase Sellars, it means that the scientific image overpowers the manifest image.  Whatever fMRI says will be more real than real, basically.  So that's the idea.  When it's shown by fMRI or by neuroimaging, it gains additional power and impact.

Another interesting observation was found when we analyzed media coverage of neurostimulation techniques, such as deep brain stimulation.  I can't go into much detail here, but there was increasing coverage in the decade we analyzed.

Then, when we looked at the headlines featuring these stories, there were two key messages:  these techniques are going to have broad impact from a clinical translational standpoint, and they constitute major scientific breakthroughs.

Now, that was a lot of hype.  I've never seen so much hype in all the media coverage analyses we've done.  Then, interestingly, when we did a multi‑site study of neurosurgical units in Canada ‑‑ this was an interview‑based study with clinicians ‑‑ we actually found that when dealing with patients with Parkinson's or neuropsychiatric conditions, these clinicians reported that not only the state of desperation of these patients was compounding expectations, but media coverage as well.

This is indirect evidence, but it's showing that potentially media coverage, when it's hyped, can have an impact on informed consent, and importantly, in this area of clinical practice, actually lead to disappointment and failure to meet outcomes.  That's, I think, very important to keep in mind.

Just some last slices of data, showing here, for example, data from a Canadian study of neuroimagers.  We were interested in how they were viewing REBs ‑‑ research ethics boards ‑‑ and the whole research ethics governance dealing with this range of issues.  And actually, surprisingly, knowledge transfer, public understanding, came up as an important area for these researchers.

As I already mentioned ‑‑ and this is a really detailed slide; I only want to get you to think about the overall picture ‑‑ when we look at the international research ethics guidelines in different countries, we could actually report in a single slide what was available to cover areas such as public understanding and knowledge translation.

If we looked at confidentiality or informed consent, we wouldn't have enough afternoons to cover this material.  So I think this speaks to the lack of guidance and the lack of clarity about what the researcher's responsibilities should be in dealing with these kinds of issues.

Now, what can we do?  I think we can look at this as a challenge or difficulty or as an opportunity.  And I think there are probably different interesting models and incentives that could be put forward to try to remediate different aspects of the situation.

This is a detailed slide.  I only want to pull out the three key recommendations that were featured in this paper led by my colleague, Judy Illes.  It came from a workshop that we had in Alberta, at the Banff Center, and basically, three key recommendations came up:

A paradigm shift in academic culture to value public communication; if this is not valued, like teaching or research, how can we actually get researchers to be engaged if it doesn't count ‑‑ count in different ways, but if it doesn't count?

A second recommendation was to train and support expert and knowledgeable individuals who have one foot in neuroscience and another foot in journalism or science communication.  So can we develop models of people with broader skills?  And a third recommendation ‑‑ with, again, many different ideas packaged into it ‑‑ was basically to try to develop and carry out more empirical and conceptual research on science communication and its importance from an ethics standpoint.

So in conclusion, I think public communication and public understanding do represent a potential source of harm, if you want, or challenges from an ethics standpoint, and also, interestingly, I think they constitute a terrain of ethical duties and obligations where scientists can fulfill some of their commitments to science and inform the public sphere.  And I think it's also an important area of empirical and normative research.  And I'll also thank my funders.  Thank you very much.

VICE CHAIR WAGNER:  Thank you all.  Quite a range of topics across that ‑‑ across that table.  But we do have time for some conversation, discussion.  Nelson?

ROUNDTABLE DISCUSSION

DR. MICHAEL:  This is a question for Dr. Neill.  In the previous session we were discussing the issue of standards and how you would go about turning those standards into practice.  You went through a very nice list with The Journal of Clinical Investigation, where you used ORI tools to do image analysis and text searching for plagiarism.  You insisted on seeing full uncut gel images, and I'm sure that you have them.  That's just the list of standards that you described to us.  That's really impressive.

Just the thought is what is your perception of how other journals do this?  I mean, how prevalent are those standards across the industry if I can use that word.

And the second question is probably a little more loaded, which is what fraction of that work do you think really should be done by the investigator or by the institution?  I mean, where is the responsibility to drive these standards?  Because you may insist that this standard is important for publication, but the training doesn't occur at publication, it occurs at institutions.

DR. NEILL:  As far as the first question about how pervasive this is at other journals, many of my peer journals do the same thing.  All of the Rockefeller University Press Journals, the Journal of Experimental Medicine, the Journal of General Physiology, et cetera, also pioneered many of these methods.  So I should give credit where credit is due.

I previously worked at the Nature family of journals, and while I was there we did spot checks or we followed up on things that were identified by whistleblowers.  I was at an ORI conference where their executive editor, Bernie Keimer, gave a talk about how they start from the standpoint of trusting the scientists and the authors to have integrity.  I don't mean to say that the JCI does not consider that standpoint ‑‑ that we're only looking to find errors ‑‑ but they tend to do more spot checks and sort of bubble searches here and there, whereas we are screening everything.

And as I mentioned before, it has its costs, but it is a cost that we feel is important to pay.  I don't know enough about many other journals to comment on whether or not they are doing things as much as we are, but some of these steps are low cost.  As for whether the purview of doing that screening lies with the authors and the institutions as opposed to the journals ‑‑ I wish there were more training at the institutions.  I wish there were more standards applied before things were sent in for publication.

But there are so many times when scientists are in a competitive area that it makes it incredibly difficult to go through that extra screening step, and I think that now being on the administrative side of an institution I'm not sure where we would find the resources or the manpower to do those sorts of things or the mandate for it.

VICE CHAIR WAGNER:  Amy.

CHAIR GUTMANN:  So, Timothy, at the end you self‑modified, or corrected, from the pipeline of hype to the cycle of hype.  And I really think the cycle of hype is more accurate, so I think you're right to go there.

And the reason goes back to something you said at the beginning, which we take to heart, that the hype of good science is matched with the hype of, you know, dissing science.  And we need to worry about both if for no other reason that they're ‑‑ it's a vicious circle that the more exaggerated claims there are out there, for lots of people those are meant to inspire them, but there are people who get scared by them.

So I just want to underline that I think that cycle is something that we really have to be concerned about.  I'm not sure there's anything ‑‑ and I'm pretty confident there's nothing you can legislate to prevent it.  You can just encourage more institutions, responsible journalism, responsible scholars to do more.

But the question I have is, can we distinguish between three different kinds of hype?  Now there are going to be gray areas, I always want to say that, but there are at least three different kinds of hype, and whether we're more concerned about two of them than the third.  So I think the third is ‑‑ goes with the territory of self‑promotion in journalism.

The first is false claims, inaccurate results that hype things.  The second is distorted claims, which could be from omissions or commissions.  So you distort a story if you leave out critical, qualifying facts even if every fact you say is correct.

So those are two that I think there's something you can do about.  The third is just exaggerated importance of things.  That if Angelina Jolie, you know, comes forward with her case there's going to be more attention to that than if I ‑‑ if that happened to me or an average, you know, certainly an average citizen.  That while you may think that shouldn't be the case, in an ideal world, anyone ‑‑ that's a way of calling attention to something and that's got at least as much of an upside as a downside where the other two things we can argue about it.

But you can't argue ‑‑ the other two things are just bad science, to have inaccurate results reported or to leave out critically important facts that are critically important to the understanding of your results.  I ask you are you willing to see that there's at least a difference among those, because if not I'm afraid we're chasing something that's just endemic to ‑‑ you know, we're focusing on ethics here today.  Are we really supposed to slap ourselves that we're focusing on neuroscience, that we're not focusing on everything else?

When you make ‑‑ when you have findings ‑‑ you're a scientist.  When you have findings you want them to get out there.  And I applaud scientists who can get them out there if they don't succumb to the other two things, which is false claims and distorted claims.

PROF. CAULFIELD:  Okay.  So, I mean, this is a great point and I actually do agree with you to a large degree.  I think that your category two and three are actually ‑‑ there's a lot of overlap there.  So let me give you ‑‑ let me give you an example.

I do think exaggerated claims are ‑‑ can be ‑‑ when it happens on a systematic level consistently can be very ‑‑ can be very problematic.  And you can look at areas like, you know, the areas that I've spent most of my career, stem cells and genetics and neuroscience.  You have this consistent kind of hyping of the science in order to get funding, in order to commercialize, and I do think that it can have an impact.

Now I'm a huge advocate of public ‑‑

CHAIR GUTMANN:  Yes.  Could I just ‑‑ I mean, I agree with you.  I didn't ‑‑ the third category was an exaggeration of the claims because that falls into the second.  But exaggerating the importance because of getting more publicity, exaggerated visibility in a sense ‑‑

PROF. CAULFIELD:  Yes, I have ‑‑ I think you're right.

CHAIR GUTMANN:  That's different than exaggerating the claims.

PROF. CAULFIELD:  And in fact I think that is a very important distinction because I actually think ‑‑ I believe I heard it from everyone is one of the things that I believe has to happen is that scientists need to get more engaged with the public and so often they shy away.  It's almost viewed as ‑‑ it's almost looked down upon, right.  I think we've got to reverse that trend.

Scientists have to embrace engagement with the public, and I actually think if they do that ‑‑ I think the public likes basic science.  I think they get excited about it, right, and I think it can be sold that way.

I do want to talk about your first point if I could, because I think it overlaps.  I completely agree with you about the cycle of hype.  My first piece on this point was called the cycle of hype, and the reason that Celeste and I talked about the pipeline was because so much of the policy discussion happens at the end of the pipeline, right, and that's because there's this tremendous polarization that occurs.

Stem cells is probably the best example of it, right, where you have exaggerated claims of harm and exaggerated claims of benefit.  When you think of the number of laws around the world ‑‑ the policy energy that was devoted to stopping human cloning is astounding to me, right.  And that's a real good example of that sort of polarization.

You could argue and we could have disagreements about it, that it also happened with gene patenting, genetic discrimination, the adverse impact of genetic information on people's lives.  You could go down these lists of things that were a result I think of a polarization.  You say genetics is going to change your life and it's a blueprint of life.  You know what, I agree with you, and now I'm afraid of genetic information getting out there when in fact the reality is in the middle.

So I think that goes to the exaggeration problem you talked about before and I think we have to be more nuanced and more honest about what the data says.

VICE CHAIR WAGNER:  Anyone else want to comment on that before ‑‑ I thought Ushma did.

DR. NEILL:  There was a moment that we had before the session started where we started talking about responsible press releases and the amount of hype that we feel is responsible.  As a scientist and as an editor I wrote press releases or my colleagues and I wrote press releases on nearly every article that we published.  Some we knew had no chance of ever getting picked up in the lay press or in the media and were written for fellow editors or for fellow scientists to look at as a digest of what was going on.  The ones that we felt did have perhaps more media appeal we were always very careful with our spin.  But if you didn't have some amount of spin no one would ever comment on it.

We tried to be as responsible as we possibly could.  Where we did not police anything was when the author's own institution or funding agency also developed their own press release and sometimes sent it to us.  If it went beyond the line, we did not feel it was necessarily our responsibility to rein them in, other than by giving them another copy of the PDF of the article, and it was in those cases that we got the most media attention.

CHAIR GUTMANN:  We have a great example of this from our own experience.  So our first assignment was on synthetic biology.  And Craig Venter, who was happy that people said he had created life ‑‑ you know, we took great pains to say, no, this wasn't the creation of life.  This is what it was.

Well, you know, there were a lot of responsible journalists who had heard it, who came to the Commission, and they kept using the formulation because it catches people's attention.  And that was a good example of not merely exaggerated importance but an actual distortion ‑‑ in two words you couldn't capture a greater distortion of what you could do with it ‑‑ and yet it continues to be repeated.

And, you know something, that gets ‑‑ it got Craig's discovery more publicity but it also got it hugely more pushback.

DR. WARD:  I think you have to be careful when using the words hype and spin.  Some of it is ‑‑ if you start calling everything spin, then it doesn't make much sense.  Saying that you've done one of the first research projects into a certain thing and it's got some great results ‑‑ what's wrong with that?  I don't call that spin.

You know, Beatlemania is everywhere ‑‑ 50 years old today.  That's completely out of proportion to the sort of democratic and public importance of it, but whatever.  What I'm hearing here, and I've heard for years, is the problems of science journalism.  Nothing much has changed in terms of not citing sources, not citing a lot of things, getting facts wrong, and so on and so forth.

So we're dealing it seems to me with a cultural, social, economic sort of systematic issue here, and it's very difficult and no ethics preaching is going to ‑‑ is going to stop it.  So I try to think of it differently and ‑‑

CHAIR GUTMANN:  So you would rather use the word preaching than hype?

DR. WARD:  Yes, yes, that's right.  What I think, though, is that it's a collective responsibility of all of us to communicate science, not just the scientists ‑‑ that would be an onerous burden to expect of them alone.  And in fact we have to have, as I keep saying, and as I imagine we can produce online, groups ‑‑ very different groups ‑‑ connected in a sort of web, with different expertise, different types of media.  Yes, use social media very well, but with the following extra add-ons, all right.

One, what I call the sixth estate: we create a group of people ‑‑ speaking to a specific discipline or not, you know, pick your favorite context that you want to work out of ‑‑ that actually, through feedback loops, social media alerts to mainstream media, and so on, calls out daily mistakes and exaggerations and hype and whatever in the press.

So it's the sixth estate because the fourth estate is the mainstream, the fifth estate is online criticizing the mainstream, and we've got to be criticizing both at the same time.  That takes some very fancy sort of organization.

Here's the other one that you also need to think about.  I would think that for every ‑‑ we have to impact and include in scientific communication citizens who have a chance to participate in the discussion directly and meaningfully, not simply to add a comment to an article they've read in "The New York Times."

So how do we do that?  How we engage them ‑‑ in the design of our communications ‑‑ is extremely important.  And this means also, by the way, that it would be wonderful if we had something like the Shorenstein Center's weekly resource e‑mail that goes out saying, here's all the leading research on this particular topic that's top of mind in newsrooms today, right.  That could be part of that website.

I mean, I can go on and on about this, about linking that then to schools of journalism and communication.  My point is that we've got to do this systematically.  We've got to do it in coalitions and partnering and networking.  I know those are buzzwords right now, but that's the only way we're going to do it.  And I'm tired of trying to solve it myself ‑‑ or of individual researchers trying to solve it.  I don't mean we shouldn't do individual research, that's not what I'm saying, but we've got to rethink that public demand and how we do it.

CHAIR GUTMANN:  So we did call for bioethicsfactcheck.org, and some people, including Kathleen Hall Jamieson, who helped found factcheck.org, have actually begun taking up our call for that.  I would just say that it is important ‑‑ if people who care about ethics stop taking mainstream journalists to task for some of the violations of clear, established journalistic ethics, we will have given up a very important forum that gets a lot of attention.

And a lot of the journalists did pay attention to our correction of the synthetic biology discovery.  We always note ‑‑ and it's my bad that I noted those who didn't ‑‑ but a lot did, and I think it's worthwhile to do.

I would just say, on the example we dealt with as well, which we should put out there ‑‑ you know, the exaggerated false claims on the negative side ‑‑ we were charged with doing a report on testing vaccines with children.  And if you want to see an area where there's a huge amount of misinformation that is a real obstacle to having science progress, and just reasonable discussions progress, it's that.

And I give a lot of credit to the Commission and people who worked, the members of the public who worked with us on that report, that we actually got a report out that was very well received, that cut through a lot of the negativity there without exaggerating the promise and sticking to ethics.

And so I thank you all for considering this.  And I don't ‑‑ I think skepticism is the word, but not cynicism.  I mean, we really can make some headway here.

VICE CHAIR WAGNER:  Steve, you've been holding onto a question.

DR. HAUSER:  Yes.  This is for Ushma, and others may want to weigh in also.  Another science publication issue that much has been written about is publication bias and our sort of preference for positive over negative data.  And I wondered if you could just say something about how you view the landscape of journals and editors when we receive papers that fail to replicate data published in our pages, when that work seems reasonably competent.

DR. NEILL:  Sure.  The operating guidelines that we had at the JCI were that if we received a manuscript that reported negative data on a study that we originally published, we were absolutely, positively duty bound to consider it ‑‑ and not necessarily to relax the rules.  It had to be a rigorous study, but perhaps the mechanistic bar would be lower than the one we held the original to.

We were absolutely duty bound to consider a negative-data study of something that we published.  If the original study came from another journal, we would only consider it ‑‑ just because of the purview of the Journal of Clinical Investigation ‑‑ if it had a rather large repercussion within the clinical world or the biomedical, disease-driven research world.

So for things that were in Science, Nature, Cell, Nature Medicine, et cetera, we considered many of them but not all of them.

PROF. CAULFIELD:  This touches on the comment I wanted to make.  We keep gravitating back to the media, and I think you're absolutely right.  We can't just blame the media, because so much of the hype is embedded in the research, and I don't think we can forget that.  I mean, there have been a lot of interesting proposals put forward, such as ‑‑ I don't know if you've heard this, maybe you haven't ‑‑ having papers accepted before the results are completed.

So what you do is you would submit your method, you would submit your background, you would submit your rationale, everything, and then you ‑‑ and then the publication is bound to publish it regardless of the results.

Others have suggested doing away with impact factors.  Others have suggested that the highest-impact journals should not publish studies, they should just publish reviews.  And the other, of course ‑‑ I'm a huge fan of this, and this touches on something you said ‑‑ is the idea of supporting groups like the Cochrane Collaboration, right.  They are completely independent places that do meta-analyses and systematic reviews that also consider the biases and interests that are associated with every study.

So we can't just blame the media.  We've got to look at the whole cycle of hype if we're going to address this, and I don't want to blame the media.

VICE CHAIR WAGNER:  Yes, and what I think we're talking about is cycles of ethics, nested in ethics.  And I look forward actually to our ‑‑

CHAIR GUTMANN:  We won't blame the media and we won't blame Canada either.

VICE CHAIR WAGNER:  No, we won't.  Great representation from our brothers and sisters up North.

So let's ‑‑ let's bring ‑‑ let's take a little break, a 15-minute break, and bring our panelists from this morning up to join you folks and continue this conversation.  Thank you all very, very much.

(A brief recess was taken.)

This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.