TRANSCRIPT: Meeting Six, Session 6

Staff Report on Current Scientific Studies Supported by the Federal Government

Date: August 30, 2011

Location: Washington, DC

Presenters:

Jeremy Sugarman, M.D., M.P.H., M.A., Senior Advisor

Michelle Groman, J.D., Senior Policy and Research Analyst

Transcript

            DR. WAGNER:  Let me invite Michelle Groman and

  Jeremy Sugarman to the table.  In this next session,

  the commission will be receiving a staff report.

  Michelle Groman is senior policy and research analyst

  for our commission, and has served as a lead staffer on

  this project.  But we are going to hear first from

  Jeremy.

            Jeremy Sugarman is going to report to us about

  the empirical project, as we have been calling it, that

  the staff has been conducting to collect data from

  government agencies that support research involving

  human subjects.  It's an important part of our

  information-gathering process for the human subjects

  protection review, because it's with this information

  that we will -- with some certainty, we hope -- be able

  to describe to the President the universe of human

  subjects research supported by the Federal Government

  that is being done domestically and internationally.

            Jeremy is the Harvey Meyerhoff professor of

  bioethics and medicine, professor of medicine, and

  professor of health policy and management, and deputy

  director of medicine at the Berman Institute of

  Bioethics at Johns Hopkins University.  He is an

  internationally recognized leader in the field of

  biomedical ethics, with particular expertise in the

  application of empirical methods and evidence-based

  standards for the evaluation and analysis of

  bioethical issues.

            His contributions to both medical ethics and

  policy include his work on ethics of informed consent,

  tissue banking, stem cell research, international HIV

  prevention research, and research oversight.  He has

  served as a senior policy advisor and research

  analyst for the White House Advisory Committee on Human

  Radiation Experiments, as a consultant to the National

  Bioethics Advisory Commission, and as a founding director

  of the Trent Center for Bioethics, Humanities, and

  History of Medicine at Duke, where he was also a

  professor of medicine and philosophy.  He is a faculty

  affiliate of the Kennedy Institute of Ethics at

  Georgetown University.

            We are pleased to have him here to use up all

  his spare time with the commission.  Jeremy, the floor

  is yours.

            DR. SUGARMAN:  Thanks, Jim, for the long

  introduction.  It's helped the search for my slides.

            (Laughter.)

            DR. SUGARMAN:  Thank you.  While the slides

  are being located, I appreciate the opportunity to be

  able to share with you some of the work that we have

  been doing to provide some data to help inform the

  committee's deliberations.

            And much of the credit for the day-to-day

  legwork on this project goes to Michelle Groman;

  without her amazing energy and attention to detail,

  none of this work would have been possible.  So

  Michelle is doing

  the day-to-day work of actually finding the

  presentation.  But, really, without Michelle's efforts,

  we wouldn't have anything to really share with you.

  That said, I am going to talk, and Michelle is going to

  be there with all the details.

            So, going back to the President's charge to

  the group, we have heard this letter before, but I just

  want to call out this one sentence, because this is

  what we take as our jumping-off point.  "To conduct a

  thorough review of human subjects protection to

  determine if federal regulations and international

  standards adequately guard the health and well-being of

  participants in scientific studies supported by the

  Federal Government."  A sentence, an important

  sentence, that raises a couple of issues.

            First of all, there are no systematic data

  available across federal agencies about the scientific

  studies supported by the Federal Government.  Without

  that sort of data, that information, it's hard

  to know what recommendations to make.  What's the

  purpose of the committee's deliberation?  How will you

  know you're getting it right?

            We know that the research ethics landscape

  and the popular press bring to light bad cases.  We

  don't hear about good cases.  We don't know what types

  of research are being done, where they're being done,

  and what the federal investment is doing.

            There is also limited available systematic

  information about this very central question about how

  well the regulations and standards guard the health and

  well-being of participants, the second clause in the

  President's charge.  And that, in part, is an empirical

  question, as well as a conceptual one.

            The thought going forward in conversations

  with the commission and the staff is that such data are

  needed to help inform deliberations, and that's where

  this presentation will focus.

            The commission decided, early on, to initiate

  a landscape project, which I will discuss in more

  detail, to provide basic information that is not

  otherwise available.  You have heard multiple speakers

  provide testimony in that regard -- Zeke Emanuel among

  them this morning -- saying we don't have information

  about what the landscape looks like for research

  sponsors.  And so the commission has decided to move

  forward, and I will give you an update on that project.

            To meet this second clause of that charge, we

  offer up a series of potential projects that may or may

  not be appropriate to help inform that second part,

  about how well protections are working.

            To help inform our work, we assembled an

  empirical advisory group to provide expertise, to

  guide what particular kinds of questions we would

  ask in the landscape project, and also to propose

  and evaluate other empirical projects that can inform

  the commission's response.  The goal here is to provide

  that expertise and, in a sense, add the facts, so that

  you can have the values debate.

            The empirical advisory group is a fantastic

  group of individuals that actually came together to

  work on this.  Christine and Dan, as commissioners,

  joined us.  Rob Califf, from Duke, who has conducted

  multinational trials, who gave testimony in an earlier

  meeting.  Ruth Faden at Johns Hopkins, who was the

  chair of the advisory committee on human radiation

  experiments.  Ken Getz, an affiliate of Tufts, who

  knows a great deal about working with large

  databases.  And other experts.

            All right.  So, the landscape project, as we

  set it out, was to define the landscape as we talked

  about it, and to provide these analyses.

            Now, you will see there are 18 federal

  agencies listed here, which we had reason to

  believe were conducting scientific studies supported by

  the Federal Government.  The time line for the

  project here was in March and June.  We

  identified liaisons, and alerted them to the fact that

  the Presidential Commission was interested in these

  questions.

            The data that we were going to ask for had to

  be clarified, and the tools were developed.  So,

  because this has never been done, this is some basic

  work that the commission can claim as being critically

  important for knowing how you might map something like

  this.  You had to create tools that would work across

  agencies, and then the agencies were asked to provide

  fiscal year 2010 data in August, with additional data

  due later.  The goal is to have all this information

  available for your next meeting.

            Now, this is drilling down a bit.  You can

  see on the top portion of this area the research

  project database.  This is the Excel form that was sent

  to the agencies, which were asked to fill it out.  Countless

  conversations with the help desk, with Michelle, to

  figure out exactly how these agencies could take very

  different information systems and provide this.  We

  want to thank the agencies for doing this work.

            So, this kind of question seems simple --

  what we want to know -- but it's a very difficult set

  of data to gather, given the different ways that

  agencies keep this information.  And so, coming up

  with this approach has required lots of effort on the

  part of lots of agencies, and I won't name each of them

  here.

            A quick word on the research project

  database -- our very apt name for what we have.  This

  is the database that is being put together to drive

  the analyses.  You can see here

  some of the data elements.

            So, the current status is that all the

  agencies contacted have responded.  That's a start.

  Seventeen agencies have provided some or all

  project-level data for fiscal year 2010.  So we haven't

  received all the information we need, but there have

  been good faith efforts on the part of the agencies to do so.

            The Department of Defense has provided

  aggregate fiscal year 2010 data and also back-year

  data -- but only aggregate data, feeling that

  project-level data was something that they were not

  able to provide to us, given the way that they keep

  their information systems.

            The empirical advisory group met, provided

  guidance, and we're in the process of finding a

  statistician who is properly trained to analyze the

  data.  Potential analyses are to answer those basic

  landscape questions about the scientific studies, the

  institutions, and funding.

            Now, the next steps are -- so we've now got a

  landscape.  And the question is:  What's next?  And we

  are at a branch point here to decide which, if any,

  projects we would go forward with, depending on what

  would be useful to the commission.

            One idea is to take this research project

  database which we have already assembled, and then

  compare it, if you will, to the clinicaltrials.gov

  database, which you have heard about.  So what we know

  is that not all scientific studies supported by the

  Federal Government would be expected to be in

  clinicaltrials.gov.  Those related to drug and

  device research would be.  But it

  leaves out the full range of studies.  Now I can't tell

  you what that full range is until we do the analyses of

  the research project database.  But we know in advance,

  going into this, that not all of it would be expected

  to be there.

            So, by comparing what's in our, say, more

  comprehensive database to what's in

  clinicaltrials.gov, for those studies where there is

  overlap we would have almost participant-level

  information about the studies in our database which we

  don't currently have.  We

  will also know whether there are studies that appear in

  our database that should have been listed in

  clinicaltrials.gov in the name of transparency which

  aren't there.

            And so, some type of analysis like this would

  require taking a picture of the clinicaltrials.gov

  database at a particular time, which is something that

  NLM is working on, in collaboration with Duke in a

  public-private partnership, to come up with a set

  database at a set point in time to be able to analyze

  those data.
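The overlap comparison described above is, at bottom, a set operation between two lists of studies. As a purely illustrative sketch -- every identifier, agency, and record below is invented, not drawn from the commission's actual data -- the logic might look like:

```python
# Illustrative sketch of the overlap comparison: project-level records
# from the commission's research project database versus a fixed-date
# snapshot of clinicaltrials.gov registrations.  All identifiers and
# field values here are invented for illustration.

# Federally supported studies, keyed by a hypothetical identifier
# (registry-style "NCT..." IDs for registrable trials).
research_projects = {
    "NCT0000001": {"agency": "NIH", "type": "drug trial"},
    "NCT0000002": {"agency": "VA", "type": "device trial"},
    "PROJ-0003": {"agency": "NSF", "type": "behavioral study"},
}

# Hypothetical snapshot of registry entries at one point in time.
registry_snapshot = {"NCT0000001", "NCT0000042"}

# Studies that carry a registry-style identifier and so would be
# expected to appear in clinicaltrials.gov.
registrable = {pid for pid in research_projects if pid.startswith("NCT")}

# Overlap: studies where near participant-level registry detail exists.
overlap = registrable & registry_snapshot

# Studies that arguably should have been registered but were not found.
missing_from_registry = registrable - registry_snapshot

print(sorted(overlap))
print(sorted(missing_from_registry))
```

The two set differences correspond to the two analyses described: overlapping studies gain near participant-level registry detail, and registrable studies absent from the snapshot flag possible transparency gaps.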

            A second possible step would be to review the

  abstracts that are not in clinicaltrials.gov.  So

  clinicaltrials.gov gives us that rich information.  But

  we don't have very detailed information about the

  science that is being done in the other settings.  So

  this would require some type of selection of the

  abstracts, and reviewing them for the kinds of

  information we need -- the subjects, where the research

  is located, and the like -- so that we can fill in that

  picture.

            This could be -- a sampling method could be

  used, or a comprehensive approach, but it would depend

  on what questions the commission wanted answered.  One

  approach that has been suggested that we are exploring

  is a natural language analysis of the abstracts that

  are there, so that instead of people going through and

  manually coding it, what you do is you take the full

  text and analyze the full text, and inductively come up

  with information that could inform our decisions.  We

  don't know how feasible that is, but we're in the

  process of discussing this with people who do natural

  language analysis.  There is a recent publication

  that -- last week or this week in JAMA, for

  instance -- which is beginning to use these sort of

  methods in research.
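The inductive full-text approach mentioned above can be suggested with a toy sketch: instead of hand-coding each abstract, derive the most descriptive terms from the text itself. The abstracts and stopword list here are invented, and real work would use proper natural language processing tools rather than simple word counts:

```python
from collections import Counter
import re

# Invented example abstracts, standing in for the real project abstracts.
abstracts = [
    "A randomized trial of an HIV prevention intervention in adults.",
    "Survey of informed consent comprehension among trial participants.",
    "A cohort study of cardiovascular outcomes in older adults.",
]

# A tiny stopword list; a real analysis would use a much fuller one.
stopwords = {"a", "an", "of", "in", "the", "among"}

def top_terms(texts, n):
    """Count word frequencies across all texts, ignoring stopwords,
    and return the n most common terms."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word not in stopwords:
                counts[word] += 1
    return [term for term, _ in counts.most_common(n)]

# Terms derived inductively from the text, rather than hand-coded.
print(top_terms(abstracts, 2))
```

Even this crude counting surfaces recurring themes (here, trial-related research in adults) without anyone reading and coding each abstract by hand, which is the appeal of the full-text approach.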

            The other empirical projects to

  consider -- now moving away from the research project

  database -- would be two possible projects that would

  provide more granularity to what's going on.  One would

  be a web-based survey of investigators, and the other

  would be a systematic assessment of human subjects

  protections.

            The advantage of conducting a web-based

  survey of investigators is that we would get the

  perspective of one group of key stakeholders -- not

  all.  But they

  are people who are accustomed to answering web-based

  surveys.  They may be very motivated, given the ANPRM,

  to be part of this conversation, whereas normally they

  may resist, or not really enjoy doing another survey.

            As for the potential domains, the advantage of going

  broader than the experiences of the commission or those

  who provide testimony is that we might have more

  generalizable data about impressions, about what works,

  what doesn't, what kind of barriers are faced.

            This could ask questions about whether

  community engagement occurred, what investigators'

  experience with human subjects protection has been,

  what works and what doesn't, and whether

  they believe that important research projects have been

  delayed or abandoned because of procedural concerns.

  Did they not do something because they were concerned

  about the hoops and the bureaucracy that would be

  involved?  Or what helped move things forward?

            The final one is the systematic review.

  What we could do, unlike any other group, once we have

  this research project database, is sample from the

  commission's database and do some type of

  stage-appropriate review which would mimic other

  projects that have been done in the past to do a

  centralized protocol review to see how well this

  localized system actually is working, to interview key

  stakeholders, such as IRB chairs, investigators,

  perhaps research participants, community members,

  community advisory boards, and the like, and then

  conduct site visits.

            Something like this is an enormous

  undertaking.  But if the commission decided to take on

  some type of review at some point and to pilot it, it

  might serve as the start of some kind of periodic

  program, so that a future commission, or a future

  group, or anyone wanting to look at this won't be left

  in the position of asking, "How well are the

  protections working?"  We will know how well the

  protections are working, and not be left to conjecture.

            So, thank you -- that's the overview, and I am

  happy to take questions.

            DR. WAGNER:  Thanks, Jeremy.  Could we --

            DR. GUTMANN:  Thanks very much.

            DR. WAGNER:  Are you doing this one, or am I

  doing this one?

            DR. GUTMANN:  Go ahead.

            (Laughter.)

            DR. WAGNER:  No, you're welcome to it, believe

  me.

            DR. GUTMANN:  No, no, no.

            DR. WAGNER:  Could we go back to that slide

  that had the domains on it?  Is that easy to do, go

  back about two slides?

            DR. SUGARMAN:  Sure.  I think.

            DR. WAGNER:  My understanding is when we

  initiated the empirical study that there was also this

  broader question around breadth and scope and volume,

  and how much is going on, what's involved, and, you

  know, what's the breadth of it, within which we could

  then ask these questions about community engagement and

  human subjects protection.

            Are we going to get that out of this?  Are we

  going to have a sense of -- that we've got some sort of

  a catalog that we can ask these quantitative questions

  about first, before the qualitative --

            DR. SUGARMAN:  So, currently, no.  The kinds

  of questions we can answer are the ones delineated on

  the earlier slide that talked about what we can count.

  What's the nature of the research being conducted?

  Which agencies?  What's the investment?  Where is the

  research being conducted?  Without further linking, we

  are going to have less information about those issues.

            What we could use the database for is the last

  project I mentioned, not necessarily the survey of

  investigators, would be to do a systematic sampling to

  begin to answer those questions.  The concern about

  that is it's almost September, and the Commission is on

  track to report earlier than that.

            DR. WAGNER:  So we don't have key-word

  categories or something that we're going to sort these

  by?

            DR. SUGARMAN:  Correct.

            DR. GUTMANN:  So it seems to me -- and this is

  going to be a comment for you to react to -- it seems

  to me that we have to see first how good a database we

  can get.  And until we can see that, there is -- we

  really have to see that.

            And that means that let's see the extent to

  which we can get the agencies that are doing the

  research that -- for everything we know, and there is a

  lot of expertise around this table, we know quite a bit

  about the kind of research that's been done

  historically and, you know, recently.  We need to see

  what that database yields, and compare it to

  clinicaltrials.gov.

            That is a really important -- to see where the

  overlap is and what we get that isn't in

  clinicaltrials.gov.  And once we see that, and until we

  see that, I don't think we can make other judgments

  about where to go from there.  That is my comment to

  get a reaction to.  I just think, otherwise, we invest

  a lot of time and effort into empirical work that we

  have no idea at this point whether it's going to yield

  insights that we can use as a commission -- whose job,

  as a bioethics commission, is to comment on the

  ethics.

            So, I think we need -- we really need to see

  what the first empirical project -- the landscape

  project -- yields, which seems to me extremely

  worthwhile.  And I

  would just urge, take this opportunity to urge,

  everybody in those agencies to cooperate fully with us,

  because if there is -- one thing that nobody we have

  spoken with disagrees with is the importance of this

  level of transparency, where we're not revealing -- you

  know, we're not invading anybody's privacy.  What we

  are being transparent about is what the government is

  funding.

            DR. WAGNER:  Yes, Christine?

            DR. GRADY:  I just wanted to mention that

  Jeremy pointed out the idea of the web-based survey

  of investigators, which was a proposal from the

  empirical advisory group, in addition to but

  different from the landscape project.  And the reason

  was that, to the extent that the commission is

  interested in things like community engagement and

  training, and the effect of the burdensomeness of rules

  and standards, there were members of that group

  who felt this was an opportunity in time to get

  investigators to respond to those kinds of issues and

  get data on that.

            None of those questions are being asked in the

  landscape project, so those are pieces of data that we

  might be perhaps well-positioned to try to get in a

  systematic way.  But it is a separate project, so I

  take your point very -- it is very important, so --

            DR. WAGNER:  Back to Amy's comment, Jeremy.

  Any response to that?

            DR. SUGARMAN:  So I think we have -- the first

  part is that analyzing the database we have is

  absolutely essential, and that we will know how the

  database is performing when we do the initial run of

  the fiscal year 2010 data in the research project

  database.  So we will know what we get from that.  The

  link --

            DR. GUTMANN:  Have we gotten all the

  trial-based data from agencies?  Taking Defense off the

  table for a moment.

            DR. SUGARMAN:  Not all.  Most.  Most for

  fiscal year 2010.  So we're starting with fiscal year

  2010, basically as a way to test how well the system is

  working.  We think it will work.  And so the question

   -- we are beginning to get that information in.  We

  can give you a detail of that, if it's --

            DR. GUTMANN:  Now, when you say "most," 95

  percent?  I mean what percent, approximately, have we

  gotten in?

            MS. GROMAN:  So out of the 18 we have asked,

  taking DoD off of the table, we have gotten some or

  all data -- for fiscal year 2010 -- from the other 17.

  So --

            DR. GUTMANN:  Including NIH?

            MS. GROMAN:  Including NIH.  So --

            DR. GUTMANN:  Which is a huge part of --

            MS. GROMAN:  It is.

            DR. GUTMANN:  Yeah.

            DR. SUGARMAN:  So we will have that

  information.  We'll see how well that performs.

            The next step, if we do make the

  decision -- it sounds like there is a move to doing

  this next step -- we are now going to be exploring some

  new methodologic territory of linking the database,

  which is another test, with clinicaltrials.gov.

            But if there is a move to go forward, I think

  it would be a rich experience.  We don't know what it

  is going to yield, like many empirical studies.  We

  don't know the answer going into it.  But it seems

  worthwhile, given that we will have the most

  comprehensive database available, and comparing that to

  clinicaltrials.gov would promise to be a good use of

  those -- that resource.

            But I would respond -- so Christine is

  right, the other projects were suggested, in a way, to

  inform the second part of the commission's charge,

  about how well things are working, which requires

  different perspectives on that same picture.  And so,

  if the commission wanted those data, then starting

  those might make sense, because they're not linked to

  the research project database.  But it's really -- it's

  completely agnostic about that, from the perspective of

  the empirical advisory group or staff.

            DR. WAGNER:  Well, this was an enormous

  undertaking, and has the potential to be a great

  contribution.  Everybody is looking for this kind of

  thing.  So thank you both for your work.  And I think,

  with that, we stand adjourned for a lunch break.

            Again, Michelle, Jeremy, thank you so much.

            DR. GUTMANN:  Thank you very much.

 

This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.