THURSDAY, APRIL 20, 2006
Session 3: Children and Clinical Research
Robert Levine, M.D., Professor of Internal Medicine, Yale University
CHAIRMAN PELLEGRINO: In keeping with our attempt to stay within the program time, I think we'll begin this afternoon session. We'll be moving from the organ transplantation discussion of this morning into a different topic, namely, the area of children's ethics, which the Council has been addressing for several of its meetings.
In this particular meeting we'll be emphasizing research in children and the specific problems that go with it. Our first session will be regarding children and clinical research, and our speaker will be someone who has worked in the field of research and research ethics from its beginnings literally. Dr. Robert Levine, Professor of Internal Medicine at Yale University, also a colleague and a friend.
DR. LEVINE: Thank you for your very laudable, brief introduction. That says who I am, and that's enough.
I'm going to talk about clinical research in children soon, but first I'm going to talk about component analysis and the justification of the risks of research.
Ed Pellegrino, when he called me to come here, asked me to talk about some of the stuff I've done criticizing the Declaration of Helsinki, and the central feature of that criticism that has a bearing on what you're here to do today is the idea of component analysis. This replaces the false distinction between therapeutic and nontherapeutic research.
I want to say right now that every document that relies on a distinction between therapeutic and nontherapeutic research contains errors. When we point these errors out to those who wrote the documents, they are generally embarrassed. It's particularly troubling when they're not embarrassed.
Incidentally, this component analysis is relevant to research involving children. The National Commission adopted component analysis in the course of its debates on research involving children. That's why their earlier reports, research involving pregnant women and the fetus and research involving prisoners, had a number of errors. The fetus regulations were only corrected about seven or eight years ago, and the prisoner regulations are not corrected yet, although SACHRP is reviewing them.
Now, the next slide I will present is the slide I prepared for presentation to the World Medical Association Ethics Committee on the occasion of accepting their assignment to propose a revision or to chair the committee to propose a revision to the Declaration of Helsinki.
I said the document was illogical, relying on this distinction; that it was out of touch with contemporary ethical thinking, such as thinking on placebo controls; and that finally, the document was widely disregarded and had lost much of its authority because people were violating it as a matter of routine.
Let's take a look at two articles from the Declaration of Helsinki as of 1996. All of the things labeled with the Roman numeral two are under their rubric of clinical research or therapeutic research. Number three is the nontherapeutic research.
Article 2.6: "The doctor can combine medical research with professional care only to the extent that research is justified by its potential therapeutic value for the patient."
Article 3.2, "Subjects should be volunteers, either healthy persons or patients for whom the experimental design is not related to the patient's illness."
Let's take these two statements and apply them literally. What is forbidden by Article 2.6? All research in the field of pathogenesis, all research in pathophysiology. Since you can't justify these categories of research by therapeutic benefit, then you must do this research either on normal volunteers or on patients whose illness is unrelated to the protocol.
If I wanted to pursue some of Dr. Bloom's work and study catecholamines and the pathophysiology of depression, it would be okay as long as I studied it in people with arthritis, but I'd not be allowed to study patients with depression. That's what I mean by embarrassing. It rules out most epidemiology.
What about therapeutic research? Therapeutic research is an incoherent concept. All research has some components that are not intended to be therapeutic. Research by its very definition is the pursuit of generalizable knowledge, not individual specific knowledge as we get when we're doing a diagnostic evaluation of a patient.
Therapeutic research gives us what I call the fallacy of the package deal. What happens is that people look at a proposal to do research, and if they find something therapeutic in it, they justify the entire protocol according to the standards for therapeutic research.
Nontherapeutic components are justified as therapeutic. I've surveyed the literature of randomized clinical trials, and let me show you some things that were justified as therapeutic research. Repeated coronary angiograms in patients who in the practice of medicine would have one or none.
Repeated endoscopies in patients with peptic ulcer, who in the routine practice of medicine usually would have none, but these were repeated once every week in order to satisfy an FDA requirement that you really show shrinking of the lesion. This was in the evaluation of the H2 receptor antagonists.
Liver biopsies done on the placebo group for no reason other than to maintain a double blind. Placebos administered into the coronary arteries, as part of the National Heart, Lung and Blood Institute's TIMI protocol evaluating various clot-dissolving agents in the treatment of people with myocardial infarction.
Now, component analysis is different. In component analysis we don't evaluate the entire protocol as therapeutic or nontherapeutic. We evaluate individual interventions or procedures. Here is the language taken from the regulations on research involving children:
"Interventions or procedures that do or do not hold out the prospect of direct benefit for the individual subject."
Beneficial procedures are justified as they are in medical practice. The risk is justified by the anticipated benefit to the individual subject. It's required that the relationship of anticipated benefit to risk be at least as favorable as that of any available alternatives.
I've already mentioned what we do with beneficial procedures. The only limit is the anticipation of personal benefit. For nonbeneficial procedures, however, we have prescribed limits, thresholds: what sort of justification does it take to expose child subjects to increasing levels of risk?
For example, if there is a minor increase over minimal risk, the procedures that present this minor increase must be reasonably commensurate with those inherent in the actual or expected situation of the child. The anticipated knowledge must be of vital importance to understanding the subject's disorder or condition.
I don't know how many of you have read the case study for tomorrow, but this will afford you an opportunity to figure out what it means when it says "subject's disorder or condition." Just what is a disorder or a condition?
The subtitles in Subpart D are enormously confusing. A little bit of history. When the National Commission finished its report on research involving children, it turned it over to what was then called the Department of Health, Education, and Welfare. The regulation writers produced a set of proposed regulations based upon this report.
The National Commission said, "No, you've got it all wrong," and made them go back and develop a new proposal, and then the National Commission went out of office. So the new proposal came out faithful to the National Commission's report, but the subtitles were written after the National Commission went out of business, and so the subtitles restore therapeutic research.
Research involving greater than minimal risk but presenting the prospect of direct benefit, and then the text says "interventions or procedures that do or do not hold out the prospect of direct benefit."
Unfortunately, too many people working in the field of research ethics or an IRB administration seem to have only read the subtitles. They haven't gone to the finer print that is actually the regulatory standard.
Now I'm going to shift to a more explicit discussion of research involving children. I'm going to talk about some general considerations that are problematic in designing and reviewing research involving subjects, including children, and then I'll offer some considerations that are specific to children.
General considerations. Ever since the Office of the Inspector General asserted that IRBs are overly burdened to the extent that they can't get their work done, this has been a topic near the front of all conversation on research review. I agree that they're overly burdened. That's because IRBs are required to do things that don't need doing. They're required to do periodic, usually annual, review at convened meetings.
You can't imagine the number of adverse event reports that come across the desk of the IRB Administrator. There is no reason in the world for a convened meeting to review all of these things. This is something that should be done by the Data and Safety Monitoring Board, which has the advantage of reviewing all of the data and it also knows the denominators for the data, where all the IRB gets is a lot of disconnected reports that probably belong in the numerator.
Review of — I beg your pardon. I just discussed the second bullet. The first bullet, periodic review at a convened meeting. This is also a meaningless exercise. Usually at the first anniversary of a protocol the research has not even begun yet. You're still waiting for the study section or the Council to decide whether you're going to get your money.
But even after it has begun, there's usually nothing that requires the attention of every member of the committee, and finally, documentation that all regulatory requirements were considered. If you review the list of observations that OPRR first and now OHRP came up with in those universities where it closed down the research operation until they got their IRB system straightened out, these are the three things that are most frequently mentioned.
What do I mean by documentation that regulatory requirements are considered? The way this plays out in real life is that you have to say if research involves children, we have made a determination that the risks are not minimal. They're a minor increase above minimal risk. We have made a determination, and you have to essentially repeat each of the regulations as you say, "Yes, we did that. Yes, we did that."
By analogy, it's like writing a report every time you stop for a red light saying, "I did not go through the red light." The behavior should be sufficient without all of this documentation.
What I have come up with is a proposal for modified expedited review. I would have each of these categories of activity carried out by an experienced member of the IRB not to approve or disapprove, but to see whether or not this crosses the threshold of requiring attention of the entire IRB at a convened meeting.
This person would not necessarily have the authority to say this is okay or this is not okay, but rather to say this should be reviewed by the full meeting.
We don't have much empirical information on this, but we have sort of an informal empirical study. Norman Fost, who has been involved in this business almost as long as I have at the University of Wisconsin in Madison, took 3,000 consecutive protocols that required periodic reapproval. He reviewed them according to the old method, that is, having expedited review with one expert member looking at the protocol and deciding what, if anything, ought to be changed before they issued reapproval.
He then took these 3,000 consecutive protocols and turned them over to the fully convened IRB. The number of cases in which the convened IRB made substantive further revisions was zero. That's what I mean by doing things that don't need to be done.
General considerations, two: "waiver of the requirement for consent, assent and permission." The standard that's represented in the regulations says that the research could not be practicably carried out without such a waiver. We need to have some authoritative statement on what "practicably" means. Some policy makers and some policy enforcers have seemed to believe it means that without the waiver of the requirement for assent or permission, the research would be impossible. Medical record review is customarily done without individual consent, assent or permission. Nobody seems to have their rights or interests violated by this, if you can keep things adequately confidential.
It's not impossible to get informed consent to review medical records, but imagine having the investigator go out and contact sometimes 1,000 or more patients to get consent for looking at the medical records.
The standard of minimal risk is the trigger for many regulatory requirements, particularly the trigger for waiving or altering some of the requirements of the regulations. We use this standard to waive the requirement for parental permission, and we use this standard to justify risk in terms of anticipated benefits.
Minimal risk: What does it say in the definition of minimal risk? It says two things. One is that the risks are like those in everyday life, and it says the risks are comparable to the risks of a routine medical or psychological examination.
Now let's see how these are applied to children. Many IRBs around the country say that if a child is going to be exposed to sexually explicit language, this is greater than minimal risk. I think that people who are experienced in talking with children know that they generally already know about language that would shock the investigators. I would say this sort of thing should rarely be considered more than minimal risk, given all of the additional things that the investigator would have to do.
What about the risks of the routine medical examination? The routine medical examination, as most of you know, consists of a deep probing into one's family history, a deep probing into one's social history, looking for all sorts of behaviors that might have a bearing on one's health or on making the diagnosis.
It's my belief that as we interpret whether or not the procedures in a given research protocol go beyond the threshold of minimal risk, we should keep in mind what goes on in a routine medical examination. The OHRP not too long ago reached a determination that if you asked the subject to give sensitive information about their relatives, you not only had to get consent from the subject, but also from the relatives.
This is what I call regulatory excess. The problem is not in the regulations. The problem is in how those who are responsible for implementing the regulations interpret them.
The assent form. You know, the purpose of consent is to empower the prospective subject. You give them information, and they develop the capacity to protect their own interests through consenting or withholding consent.
The purpose of documentation, by contrast, is to protect the investigator and the institution. If the subject comes back a year later and says, "You did not get informed consent from me, you didn't tell me I had a right to refuse," now you can pull a document out of your desk and say, "I not only gave you that information, but I've got a signed receipt for it."
Why then do we need assent forms? Why is so much energy put into developing assent forms? The child is not going to come back and litigate. I believe the main purpose of the assent form should be to give the child the sense of participation. Just like the parents sign a form, let the child also sign a form, but don't get so preoccupied with developing the sorts of assent forms that serve as perfect analogues for the consent form.
Final slide. Research involving adolescents. I knew that I'd be getting to the end of my time allotment about now. So what I simply said is that in recommending policy for the future, one should pay attention to the guidance of the Adolescent Treatment Network's Ethics Panel. This is a panel established within the National Institute of Child Health and Human Development.
It recognizes the emerging autonomy of adolescents. Adolescence is just one example of how we, as a society, have painted ourselves into regulatory corners. We developed rules that described how you could involve children in research, and then some years later we discovered that adolescents with HIV infection would have to go through enormous hassles in order to comply with these rules, when what you really wanted to do was give these adolescents the treatment of choice, which often included a drug on investigational status.
We've done the same thing to ourselves with prisoners. I could give you examples in many other categories, but today's assignment is children.
Now, I wanted to stop here or I proposed to stop here, but I told your leader, Ed Pellegrino, that in case anyone was not familiar with the Adolescent Treatment Network's recommendations, I have a few extra slides I can show you. I said I would ask if the group was interested. He said, "Don't ask. Do it."
DR. LEVINE: So what I'm going to try to do now — oh, my God, what have I done? Help.
As you can see, this is a talk that I really designed to give in Singapore a few years ago, but here it is.
Self-authorization by adolescents to serve as research subjects. The default position in the Code of Federal Regulations is that the adolescent's assent must be accompanied by parental or guardian permission unless either the minor is emancipated or the IRB finds that permission is not a reasonable requirement, for example, in studies of abused or neglected children.
This is an inadequate position, and without getting into all of the reasoning, I'll simply state the remedy. There must be an appropriate mechanism to protect subjects, substituted for the protection usually afforded by permission, and this is the passage in the regulations that the Adolescent Treatment Network picks up on. The waiver also may not be inconsistent with any laws.
Now, under what circumstances is permission not reasonable? First, the one that's mentioned in the Code of Federal Regulations says that the children are neglected or abused, but the National Commission mentions some other circumstances in which it would not be a reasonable requirement.
Research on diseases or conditions for which adolescents may obtain treatment without parental permission, and that's quite a number of disorders, many of which are the target of ongoing research.
Minimal risk research involving mature minors and children who are designated by their parents as in need of supervision.
Parents are legally or functionally incompetent.
Then there are the waivers permitted by Subpart A, that is, the general regulations for all research subjects. The regulations on children explicitly reference Subpart A as the standard for satisfying the regulatory justification for waiving the requirement for consent: the research presents no more than minimal risk; the waiver or alteration will not adversely affect the rights or welfare of the subjects; the research could not practicably be carried out without the waiver (I've already commented on that one); and, whenever appropriate, there will be debriefing or dehoaxing, the sorts of things that are characteristic of research in psychology when subjects are not given full information at the outset.
I've already said something about minimal risk. So I'm not going to repeat it here. Even in Singapore I talked about the medical examination, probing the most sensitive details.
Now, here are the recommendations of the Adolescent Treatment Network. First, Category 1 research consists of anonymous surveys and other research in which there is no risk whatever. They say you need not consult the parents in this type of research.
Category 2 is research with any risk of physical, psychological, or social injury.
And Category 3 is research involving investigational new drugs or other FDA regulated test articles.
In Category 2, in which there is some risk but no test article, they want to encourage and assist the minor to obtain parental involvement. If I may digress for a moment, much of the discussion until the Adolescent Treatment Network got in on this had a tendency to alienate children from their parents. You would come to the child and say, "We don't have to get your parents' permission. You can authorize this on your own."
And what ATN is saying: no, encourage them. A lot of them would be well served to discuss their issues with their parents, but don't insist.
B, the upper limit for risk of nonbeneficial procedures is minimal unless greater risk is justified according to the standards in the Code of Federal Regulations. And I mentioned those: the procedures must be commensurate, there's a minor increase above minimal risk, things of this sort.
Category 2 continued. They would like to see any project that carries out research presenting risk to children or adolescents develop a structure through which it can get consultation with the community. They would like to have a community board overseeing this research to provide advice and criticism and to let the researchers know what is and what is not acceptable.
If I went to them and said, "It's okay to use obscene language in a consent form," in some communities they would say, "Yes, we know the kids know this," and in some they would say no.
Here's a crucial point. The minor is already obtaining health care services either independently or with parental permission, and the research is being conducted in conjunction with such health care services. There are many adolescents who have the go-ahead from their parents to get medical care at a certain institution without reporting on each and every component of that medical care. This is what the Adolescent Treatment Network is talking about.
Now, thirdly, research involving investigational new drugs. A, just like before: "encourage and assist minor to obtain parental involvement. Upper limit of nonbeneficial procedures is minimal. Community consultation and continuing involvement. The minor" — this is just like Category 2. Here's where we begin to see a difference.
There should be an advisor or an advocate appointed for each individual patient subject. This is very important. One obstacle to having many adolescents get involved in trials of new treatments or diagnostic procedures is that they simply don't want to tell their parents, "I need treatment for drug abuse. I need treatment for HIV infection."
Many of the researchers say if we insisted upon parental permission or guardian permission, this would devastate our chances to do research in this population, and so I think this is a pretty good set of recommendations. I was part of the — I still am part of the Adolescent Treatment Network. So what I'm really doing is hawking my own product to some extent, but I hope you will also find it reasonable and perhaps take some action on it.
I apologize for the fact that you don't have it in your handouts, but it's in the President's Council's computer, and you can make handouts as you see fit.
Thank you very much for your attention.
DR. LEVINE: I would like to respond to your comments or questions, and I'll even sit down if somebody would shield that projector.
CHAIRMAN PELLEGRINO: Thank you very much.
Dr. Levine's presentation is open to commentary and questions. Any member of the Council who would like to begin the discussion?
DR. LEVINE: It's unprecedented that I could talk for 30 minutes and not create some confusion.
CHAIRMAN PELLEGRINO: Or to be so convincing. I don't see any body language that indicates an urge. Maybe it's the luncheon period.
Thank you very much.
Well, Robby George and Dr. Bloom.
DR. BLOOM: Bob, you talked about unnecessary documentation for the research protocols. Does the same apply to the HIPAA regulations in terms of child research?
DR. LEVINE: You have to follow HIPAA regulations when you're doing research on children as well as adults. The HIPAA regulations are, I would say, in a majority of protocols, not an undue burden for the individual researcher and not an undue burden for the IRB. They are burdensome in other respects, and I wish something could be done to tend to that.
Now, what has happened though is that there has been some what I consider inappropriate use of HIPAA regulations or HIPAA type requirements. I will give you a recent example that I was consulted on.
There's a group of epidemiologists who want to do research in the State of Connecticut looking into certain patterns of distribution and risk exposure for various types of cancer. We have in the State of Connecticut a state authorized tumor registry. That means information goes into the tumor registry on every patient who has cancer, and this is done without consultation with the patient. It just goes there. It's automatic.
And the state regulations say that researchers, bona fide researchers — they have to present some bona fides — will have access to this. However, in the HIPAA era some of the IRBs in Connecticut, indeed, a majority of the IRBs in Connecticut have said, "You can't look at that information without the authorization of the patient's personal physician." That is bringing the whole program to a halt.
The problem is not necessarily with the laws or the regulations. The problem is the way they're interpreted. Now, the thing that precipitated their consulting me is that they got information out of a record of a woman with breast cancer. There was a follow-up study which showed that she had the genes that would predict a high likelihood of additional breast cancers developing in her and her family.
They wanted to contact this woman, and the IRB said no, not without the permission of the private doctor. The private doctor did not give permission, and wouldn't you know it? Four or five years later she developed another breast cancer.
So this is not the main reason to authorize access, but it's just one of the aspects of unintended adverse consequences of interpreting these regulations.
Thank you for your question. It would not have been better if I had planted it.
CHAIRMAN PELLEGRINO: Dr. George.
PROF. GEORGE: Thank you, Dr. Levine, for your presentation.
I wonder if you could clarify for us... a concept that I thought appeared in one of your slides, and that was the concept, if I have it right, of emerging adolescent autonomy. Have I captured the proper phrasing?
DR. LEVINE: I think you quoted it exactly.
PROF. GEORGE: Now, is that a concept that concerns an individual adolescent's emerging control over decision making, both in terms of ability and in terms of the permission of family and the larger society, or is that a social concept? Is it a concept about the way in which there has been a shift toward permitting adolescents to have more freedom?
And then once that is clarified, do you regard it as a purely descriptive concept or is it a concept for the purposes that you had in mind that has some normative content, that it should guide policy decisions one way or another when it comes to research involving adolescent children?
DR. LEVINE: The answer to your last either/or question is I intend both, but let me go back a little bit. We already have a national policy which recognizes the emerging autonomy of all people under the age of 21. We recognize that there is no autonomy in the infant.
The National Commission's recommendations said that you have to get assent, but in the age range before assent becomes meaningful, children can register what the National Commission called a deliberate objection.
A four year old, for example, not every four year old, but an individual with cognitive development of the typical four year old can't really understand the abstractions that go into giving assent, but they can be told, "We want to draw your blood, and you know, all these other times that we draw your blood you have to do it because it's for your own good, but this one is not for your good. It's for the good of other people."
And a four year old can say, "Well, if I don't have to do it, I'm not going to do it." That is not assent, but that's what the commission called a deliberate objection. It didn't appear in the regulations, but most IRBs respect this.
Now, what we have from the people who study the development, child development, is we've got some milestones. At about the typical cognitive development of a typical age six or seven, we first begin to see the capacity to engage in some of the sort of thinking that goes into giving consent or assent.
The six or seven year old can easily comprehend, "You don't have to do this." They can easily comprehend, "If you do this, it will hurt." But they're not going to be able to respond to other regulatory requirements like, "In case you get injured, there will be no compensation for disability." So six or seven was one milestone.
The next milestone that was offered was about the age of 11, when they could begin to engage in certain sorts of abstract reasoning that would embrace such things as altruism and so on. And by about the age of 14, if my memory serves — I learned this long enough ago that I do remember it — they're able to go through the processes, the components of consent, as well as any adult.
Between 14 and perhaps 18, 21, even though they can go through all of these maneuvers, what they lack is judgment. They don't have a mature judgment. Even though they can go through all of these maneuvers, many 16 to 19 year olds have not yet grasped the idea that they, too, might not be immortal, you know.
So, yes, we already have policy that recognizes all of this stuff, and we spin off concepts like mature minors. There's another category called emancipated minors. I think these sorts of things should be brought up as we're considering what sort of assent we're going to require in any particular protocol.
I think also that there are some protocols where you might say, "We're not going to give you a general rule here. We want you to interview each child to see whether or not this particular child is capable of independent decision making."
And in the case of the ATN, the Adolescent Treatment Network, in certain types of work, particularly the evaluation of FDA regulated test articles, they want to make sure that you've got an individual counselor for each child that gets involved in this, and the main target of that is to help supplement the immature capacity for judgment.
It may or may not interest you to know that when the federal government first issued its proposed regulations for research involving children, it took no position on when children could assent. So it asked the public in the public comment period to help. The choices offered were seven, or 11, or case-by-case determination by the IRB, and case-by-case determination is the way it finally came out.
That may be more than you were asking.
PROF. GEORGE: No, I have more to ask.
DR. LEVINE: Oh, okay.
PROF. GEORGE: Actually it's very helpful. I just want to follow up so that I can learn more about it.
So far as the concept is descriptive and within obvious limits, are the milestones strongly socially conditioned? In other words, if we studied different cultures would we find the milestones at about the same place as at six and 11 and possibly then 18 to 21?
And then secondly, within our own culture or generally if there isn't strong cultural conditioning, are there significant milestone differences between the sexes?
DR. LEVINE: Oh, my. First, let me say you've just taken me beyond the bounds of my competence, but I'm going to answer anyway.
What you'll find is that social perception varies around the world. In my work in developing international documents, I have found that the age of consent, the lowest age of consent I've seen specified in national legislation is 12 and the highest is 21. So to the extent that the regulations or the law of a society or of a country reflects social attitudes, there is quite a bit of variation around the world.
I think also the concept of autonomy doesn't work very well in many parts of the world. So it's really — I mean, I can capture it in one epigrammatic statement. At one of our meetings to develop the CIOMS guidelines, there was a Francophone physician from Central Africa who said, "You know, in your part of the world you've got a saying, 'I think, therefore I am,'" which I thought was really French. He said, "In my part of the world the saying is, 'We are, therefore I am.'"
The idea of self-determination in many parts of the world is considered antisocial behavior. Even in Europe, even in Norway, the expression is: anyone who tries to rise above the crowd, we hammer them down like a nail.
PROF. GEORGE: I'm certainly not surprised that there would be differences in regulation in different cultures, and it seems reasonable to assume those would reflect different perceptions of ability, but I guess what I was really interested in was the deeper question — and you just might not know the answer to it, but if you do I'd be curious to know what the answer is — whether there are actual differences from culture to culture as to the age at which certain capacities truly do manifest themselves.
So, for example, would the capacity for abstract conceptual thinking be about the same everywhere or might it be different, you know, in Tibet than in Norway or in Albania?
DR. LEVINE: I don't know. I can tell you, though, since you asked also about gender differences: based upon my totally vicarious experience of research in child development, contrasting, for example, Kohlberg's account of child development with Gilligan's, whether or not people are capable of abstract reasoning at any particular developmental level depends entirely on how you study them. What do you ask them to do, right?
So somebody who follows the Kohlberg model, as so many people who followed Kohlberg and his predecessors did, would say that females are, in general, a case of arrested development. All right? That's because they hadn't internalized all of these abstract principles.
And if you read the work of his student, Carol Gilligan, you get a very different vision. You know, so with that lurking in the background, I just don't know how to answer your question.
PROF. GEORGE: A final question that's not related.
DR. LEVINE: Okay.
PROF. GEORGE: And perhaps I was misinterpreting a different slide, but was there in one of the slides a category of anonymous surveys or anonymous questionnaires which were —
DR. LEVINE: Yes, that was the ATN's Category 1.
PROF. GEORGE: And these are regarded as having no risk?
DR. LEVINE: No risk.
PROF. GEORGE: And why would that be? It seems to me it wouldn't be — I can imagine plenty that wouldn't have a risk, but I can't imagine saying, you know, in a kind of universal way, that there would necessarily not be risk. Certainly some surveys I wouldn't want my daughter to be filling out.
DR. LEVINE: Let me say that one of the badly abused words in our university environment is "anonymous." Many people think it's anonymous if you don't put the patient's or the subject's name in the same place that you put the data, but when ATN talks about anonymous, they mean really anonymous. There is no way to reestablish the link between any personal identifier and the individual subject.
Now, they are talking about utterly anonymous research like going into the medical record room and taking out demographic or other data with no personal identifiers attached, and you have a data set. Use of that data set would conform to their Category 1. There is no way to link any information to any individual human.
By the same token, if you ring up the clinical laboratory in a hospital and say, "I want all of your leftover specimens of blood," they would send you a lot of blood. They've been asked to do electrolytes and stuff like that on it; you say, "Any that you haven't used, give to me. I'm doing research, but I don't want any identifiers of any sort." Or you might even say, "Give me only the blood from those who had high blood sugars." This is still anonymous.
This is what they're talking about. Now, at the time you're interacting with the subject — I mean, you can't be anonymous when you're actually interviewing somebody. So you might then say that there's some aspect of this interview that could be considered a risk of injury of some sort: social injury, physical, whatever. Then it would come out of Category 1 and move into Category 2.
PROF. GEORGE: I was concerned about matters that would go beyond anonymity. It seems to me reasonable for parents to think there are some questions they don't want their children to be asked about or confronted with, some matters they don't want their children to be thinking about. It could be for moral reasons. It could be for religious reasons. It could be because they think that reflection on the question could lead their children to behaviors that are dangerous for them, and so forth: just the asking of the question, and the reflecting on it.
So it seems to me that the content of the questionnaire itself, no matter how comprehensive the anonymity, could create what could reasonably be interpreted by parents as a risk of harm. Is that not so?
DR. LEVINE: Sure, and it's precisely for that reason that the ATN wants you to have community consultation and community advisory boards, to help you out with that sort of —
PROF. GEORGE: So there would be an evaluation in the ideal circumstance, an evaluation by some community board of the appropriateness even of an anonymous questionnaire —
DR. LEVINE: Yes.
PROF. GEORGE: — because there would be a recognition that even an anonymous questionnaire could create risk.
DR. LEVINE: Exactly.
PROF. GEORGE: Okay.
DR. LEVINE: And you know, the regulations say the community representation in evaluating proposals to do research may be limited to one member of the IRB who has no connection with the institution, apart from membership on the IRB.
But what the ATN and the whole AIDS treatment effort, the AIDS Clinical Trials Group, what many of these programs are doing, even for survey research, is going far beyond that and saying: we don't want just one member, who could be outvoted or outnumbered. We want a whole community advisory group.
PROF. GEORGE: Sure, and that would mean that, for example, the idea of parents' rights when it came to questionnaires for children could be a matter that would have to be deliberated about.
DR. LEVINE: It could be, yes.
CHAIRMAN PELLEGRINO: Dr. Meilaender.
DR. MEILAENDER: Yes, Dr. Levine. Two questions. One I think is just sort of very focused, and the other, much broader.
The narrower one. I think there's a sort of puzzle that folks who aren't regularly involved in this have when they think about this whole set of regulations, and I have to admit it puzzles me sometimes, too. On the one hand, it looks as if you've got a very developed, clear set of regulations that tell you what to do, but then, on the other hand, you get a concept like minor increase over minimal risk, and it's very hard to know what that means actually or why we should share a notion of what it means.
And so I would just welcome anything you had to say about clarifying a concept like that. That's the narrow question.
The larger one is related more to stuff that was in the reading we were given, and specifically to what you've said here, but it's just that general question that, of course, has been around for a long time, which you take up there: whether our primary concern in developing a whole regulatory system should be protection of potentially vulnerable subjects, which protection might, in a sense, deprive them of some benefits of participating in research, or whether our primary concern should be, as it were, making sure that the benefits of participating in the research are available to them.
I took you — I'm not sure whether rightly or not — right near the end of the reading we had, to opt to tilt in the direction of allowing them to profit from the benefits, and even if that's true, one might raise a question about whether that's the right tilt in the case of a group like children, for instance.
And so I would just on that issue, too, appreciate a little more talk from you about what you have in mind.
DR. LEVINE: Two good questions. Let me deal with them in order. What do we mean by a minor increase above minimal risk?
When this criterion was written by the National Commission, it was conceded that no one really knew for sure, but what we thought would happen, and what did happen, is that we have developed sort of an analogue of common law, what I call a common sense of the community.
As one case study after another has been published on how IRBs are reaching these decisions, and as IRB people get together at their meetings, usually the largest ones under the rubric of the annual PRIM&R meeting, Public Responsibility in Medicine and Research, and share their experiences, we're beginning to get a pretty good idea of what most people consider a minor increase over minimal risk.
The term "minor" in this case is what some philosophers would call a dispositional characteristic. A dispositional characteristic is something that suggests a category. In the law we have the reasonable person standard. Some people say, "I want to pin it down. What is a reasonable person?"
Well, you know, in England it's the man on the Clapham omnibus, or something like that. If you pin this down, if you freeze it in one place, it loses the usefulness of a dispositional concept, and you have to invent a new term to cover what you were talking about.
The example that the philosopher who gave us this concept, Ryle, used was "elastic." I say "elastic," and you know what I'm talking about. What if someone comes in and says, "Elastic means that a six-inch rubber band will stretch no less than one inch and no more than two inches"? All right. Now you've got a satisfactory stipulated definition of elastic, but you have to invent a new term to cover the full range of what you really wanted to talk about when you said "elastic."
I put "reasonable," "minor increase," things like that into this dispositional category. I think the way we evaluate it is this: as an IRB is sitting and making a judgment about what is a minor increase, they ought to be thinking about other people on other IRBs. Will they consider our judgment reasonable, within the boundaries of what we ought to be permitting?
And that's the main touchstone they have for evaluating things of this sort. That's your first question. Is that satisfactory, or you want me to do better or do worse?
Now, what about —
DR. MEILAENDER: Maybe you could do better in one way: just examples. There was an example in the reading somewhere that you gave us of bone marrow examination, or something like that, for somebody who had already had it. There was an example that Loretta Kopelman gave years ago in an article about the kids in, I think it was, the human growth hormone study, and the various things that they had to do yearly, I believe, that just to the ordinary lay person might sound like more than a minor increase over minimal risk. Examining my bone marrow, even if I've had to have it done a few times for medical reasons, might sound like it.
How do we know that the case law developing among IRB people reflects what just sitting in probably a less informed way in my living room I might say, "Well, that doesn't sound like a minor increase."
I do think there's still a problem about cases there.
DR. LEVINE: Well, Loretta Kopelman wrote her critique of that study. This was a study that had to do with giving human growth hormone injections to children who were short for their age but had no disease; they did not have human growth hormone deficiency. In order to evaluate this stuff, you had to give half the group the growth hormone and you had to give the other half placebo, and this was administered, I think, by subcutaneous injection. Do you remember?
And blurring the two, the other one was the papilloma virus vaccine that had a very similar argument and structure. One or the other of those protocols required that you give these children, eight, nine year old children, two subcutaneous injections a week for a year and a half.
And Loretta and some others thought it was unduly burdensome to give a child two subcutaneous injections of placebo for a year and a half, and so she was part of a committee that was assembled by the National Institute of Child Health and Human Development, and they reviewed this protocol according to what we call Section — what is it? — 407, maybe, of Subpart D, which provides for a panel that meets when you're proposing to do something that presents greater than a minor increase over minimal risk. It requires a full review under the requirements of the Federal Advisory Committee Act, and so on.
And they conducted this review, and Loretta was outvoted on it. This is going to happen. I was not a member of that group. So I was not privy to all of the arguments that were presented, but I could see that she had a good point to make there.
Let me get to your other question. There is a tension always between protecting prospective research subjects from harm and developing knowledge of products that will be of benefit for these subjects. There is always a tension, and if, in the last part of my paper (which, incidentally, is in Loretta Kopelman's book; I think I called the last few paragraphs the epilogue), I came out suggesting that developing benefits should always triumph over protecting subjects' rights and welfare, then I did a poor job of writing that epilogue.
I think that they should always be in tension. What I have to say, though, is that when protection was the dominant approach to evaluating research involving children, we developed what the National Commission recognized as a class injustice. Just because these children were unable to consent, we protected them. We kept them out of research, and from 1962 until the time the Commission was meeting, only two drugs had been approved that had been evaluated in children to the extent that you could say: here is our advice on dosing; here is whether children are more or less sensitive.
Why do I pick '62? Because that's the date of the Kefauver-Harris amendments to the Food, Drug and Cosmetic Act. Before that time, you did not have to show a drug was effective in order to market it. You just had to show it was safe.
And so this was not a problem, but as soon as you call for efficacy, that means you're calling for clinical trials. That means if you don't involve children in the clinical trials, you can't provide advice in how to use the drug.
Now, this, I think, I agree with the Commission that it was a class injustice. So I think as we review each proposal to do research on children or any other vulnerable population, we have to be aware of the fact that we're balancing protection versus benefit.
I mean, women got an even worse deal out of the '62 amendments. For many, many years nothing was approved because they ruled out — I hate this expression, but it's the one that's used — women of childbearing potential. One woman said: much has been made of children becoming the therapeutic orphans, but just like children, we as women are being, and I quote, "protected to death." I wish I could have thought of that.
CHAIRMAN PELLEGRINO: Dr. Schaub.
DR. SCHAUB: Robby covered the questions I had. I should go on record with that. Robby covered my question, and it was about anonymous surveys.
CHAIRMAN PELLEGRINO: Thank you.
DR. KASS: Well, Bob, two questions. Less about the presentation, but more about your sense of the state of things with the ethics of governing research using children.
If I'm understanding what's in the documents that you presented and the talk you gave here, certain misconceptions of previous formulations have been in the process of being corrected. The principles seem to be well articulated. The problem seems to be a kind of excessive burden on the IRBs, owing not so much to wrong principles or wrong law as to slavishly excessive addressing of questions that one ought to be able to expedite.
(a) Is that — let me do it sort of piecemeal. Is that a correct assessment? I mean is there intellectually as opposed to making the system run more efficiently — are there large intellectual ethical difficulties that you think need further addressing or have the people like yourself who have been working valiantly in this field for now 30 years or more, have you more or less gotten these things well in hand?
I made it difficult for you by giving you some praise in the course of the question. I don't mean you to be embarrassed by this. As a person who has studied this, do you think that we really have the ethics of using children as research subjects more or less in good shape, and that the problems are simply the refinement of the applications in the IRBs?
DR. LEVINE: I don't see us making much progress in developing the underlying ethical arguments or underlying, if you will, theory with regard to involvement of children as research subjects.
I think what I see now is a turn toward excessive bureaucracy, excessive attention to pointless detail, and I'm going to mention why I think this is happening in a minute.
I think that, in ethics generally, the ethics of research involving children was largely formulated in the 1970s, with very little by way of advance since then. It is very much the ethics of the 1970s. Some people sneer and call it principlism, but it's very much in that vein.
But regulations, to serve their purpose, have to be something that can be enforced by bureaucracies, and so you can't get into some of the other types of ethical reasoning that seem to me much more attractive: such things as narrative, such things as what Gilligan would call the care-oriented framework as distinguished from the justice-oriented framework.
I think I would like to see that sort of ethics become the cornerstone of discussing the ethics of pediatric practice or, for that matter, all medical practice.
I wrote a paper somewhere in the 1980s called "The Teaching of Medical Ethics: Contrast Between What We Want and What We Teach." Everyone wants caring physicians, but we've set up all of our teaching of ethics to create great obstacles to the caring physician.
So I want to see pediatrics, I want to see the ethics of medicine, going in a different direction, but I don't see the ethics of research going that way, because in the practice of pediatrics, the practice of medicine, you're mostly talking about a relationship that can evolve and develop over a period of years. For the researcher, it's not so bad to have what Toulmin would call an ethics of strangers for the researcher-subject dyad. They are, after all, strangers.
The researcher defines what he or she wants to do for a certain number of subjects (the sample size, we call it), and people who meet the entry criteria come in. They relate to each other until they have completed their mutual purpose, and then they go their separate ways. An ethics of strangers seems okay.
Not so for the doctor-patient relationship.
DR. KASS: May I continue?
DR. LEVINE: Please. Oh, one thing I didn't do, Leon. You asked me to say how did it get this way. Do you want me to do that? Why are we so bureaucratic now?
DR. KASS: Maybe a couple of words on that would be useful actually.
DR. LEVINE: That's one of my favorite topics. In the 1970s and 1980s — in the '60s even — IRBs were there to look after the rights and welfare of the patients. They were very informal. The IRB at Yale New Haven Medical Center was created in 1961.
I became chairperson in 1969. The first protocol that has a number and a file is 1969. It was very informal in the old days.
During the '60s, '70s, almost all IRB chairs around the country were clinical pharmacologists. In fact, most of them were the folks who trained in the same lab I did, and that's because clinical pharmacologists are generalists. They're not looking at one drug. They're looking at all of these drugs, and development of drugs was seen as a major theme in biomedical research.
And then in the aftermath of the National Commission, more and more of this was taken over by regulatory oversight. Still the system functioned fairly informally with good people trying to do good things until the time that what used to be called OPRR, the Office for Protection from Research Risks, began to get rather heavy handed in making site visits to university research operations and closing them down.
Well, before that time, the people who were running the IRB were people who were devoted to doing this and had a pretty good knowledge of the field, the relevant ethics and regulations and so on, not that everybody on the IRB knew all of the ethics, but every IRB had somebody who did and could explain things to the others.
What happened when OPRR began closing institutions? It got the attention of the provost. Now, until the '90s, people would say to me, "As IRB chair, who do you report to?"
And I would say, "The Dean."
But the way the Dean knew I was doing a good job was that he never heard my name. It was not a formal reporting. I didn't walk in once a week and say, "Here's what we did last week." But maybe once a year I would chat with the Dean and say what we were doing.
Any other time he heard my name in the role of IRB chair, apart from those scheduled chats, it was probably because somebody was irritated about what we were doing.
Now what we have is the provost moves in, and the provost reads the list of observations made by OPRR when they closed Duke University, when they closed Rush Medical Center, when they closed Johns Hopkins twice, and they would read all of these things. In fact, somebody collected all of these observations and counted them up and published them in the Annals of Internal Medicine.
Number one: doesn't do periodic reapprovals at convened meetings. And the provost, not knowing anything about the field, says, "That's important. That's the most common observation they made here."
And so from then came this drive toward meticulous responsiveness to everything OPRR ever wrote in an evaluation letter. All right?
I think also the public is getting concerned the way it did in the early 1970s when we had Tuskegee and Willowbrook, you know, Jewish Chronic Disease Hospital. Anyone in the research ethics field can kind of recite all of the classic cases of abuse.
Now what we're getting is reports by the Office of the Inspector General saying the IRBs are doing a bad job because they're overburdened. We're getting all of these stories about closings of university research operations, and just like Tuskegee, we're getting the Gelsinger case. All right? We're getting the Kennedy Krieger Institute lead exposure study in Baltimore. We're getting the CHEERS study from the Environmental Protection Agency. There's a lot of scary stuff that's being put out there.
But, once again, the public is not getting a full account of things. They are not seeing the denominator. You know, for every case you hear about, there's 10,000 you never heard about.
Enough. I think I've made my point.
So I think the pendulum of public opinion is swinging back toward where it was in the '70s.
DR. KASS: Yeah, I don't want to preempt comments from others, but could I just follow up? Is there in your opinion something that you would urge upon this Council to do in this area?
And part of the answer to that is is there not a group at NIH who is working on development of some further refinements along these lines to deal with the overburdening of the IRBs and things of that sort?
But in general, this morning we were treated to strong recommendations as to what this Council should or should not say in order to solve a major problem. I'm wondering whether you have not so much a strong recommendation for a conclusion, but a strong recommendation for a piece of this subject that you think deserves the attention of this particular body, or is this something that the field itself is going to be able to work out on its own?
DR. LEVINE: Leon, I have not gathered my recommendations into the form of an overarching recommendation of here's what you ought to do, but given the opportunity to say something now what I would do is ask the Council to take steps to reel in this runaway bureaucracy; to have the IRBs stop dissipating all of their energies in doing the pointless things, some of which I specified for you.
Give them the opportunity to spend some time thinking about real ethical issues. We're finding as a result of some of this that people are becoming less and less willing to serve on IRBs.
I quoted Norman Fost earlier. At the same meeting that he presented his informal survey of the results of review of periodic reapprovals, he said that he actually had two members of his IRB stand up and walk out in the middle of an IRB meeting saying, "This is not why we joined the IRB, doing what we're spending all of our time doing. This is not the thing that attracted me to serve on IRBs."
I've made that statement in public before and somebody said, "That's self-serving. You're trying to make life easier for your friends, colleagues."
It's not self-serving. It's not at all self-serving, because the only thing that allows the IRB to do what good it can do is to have dedicated, learned people find it worth their while to get together from time to time and discuss the ethics and, to some extent, regulatory compliance.
I think if you could find a way to restore the status of the IRB and to restore the attractiveness of service on the IRB, that that would be a great contribution. I don't want to seem totally innocent of the realities of everyday life.
Another thing that's making it hard to get people to serve on any committee is the budget crunch. When you tell the members of our clinical departments that you've got a quota, you've got to bring in this amount of money and if you don't, we're going to cut your salary, well, they're not bringing in any money serving on the IRB or the Admissions Committee.
So there are other things going on, but I think it's within your capacity to make a statement about what is within the control of the federal human subjects protection community, as I'll call it.
DR. FOSTER: That's very helpful. Everything you've said has been really, really helpful, and I think Leon was driving at the issue where we had thought (he and I talked a little bit about it this morning) that maybe research in children is not so important a topic for us, because so much has already been written about it. It's hard to know whether there's anything really new to say about it.
But the idea of restoring the importance of the IRB, of giving it time to do what has made research relatively safe, which is in jeopardy because of all of what you describe as the largely mindless activities of the bureaucracy: that might be something we could do something on.
And I'm not myself very enthusiastic about pursuing protection of adults or children any further. I mean, it has just been going on for so long. You know, you've got the Jesse Gelsinger case, mistakes like that. There always are, but the denominator is very large.
So I think the most important thing that you have said is the last thing. I don't know whether Leon or the rest of the people agree with it, but I would think it would be worth doing, and it wouldn't take a whole lot of work. If it came from a group like this, which is not filled with bioethicists, arguing the point that you have made, it might have more impact than if it came from a person like yourself, who has been such a stalwart.
I just want to thank you for bringing that up, whether we do it or not. I think that's a very important point to bring up.
DR. LEVINE: Well, thank you. You know, the Gelsinger case is a very good case to consider for a minute. After this unfortunate young man died, everyone said, "Well, if there had been an adequate IRB review, this would not have happened."
This young man died because there was an investigator who told lies to the IRB and who told lies to the FDA. This guy did a double jump in a Phase 1 gene transfer evaluation even though there were adverse events at the dose level from which he did his double jump.
This gives the whole human subjects protection field a black eye, because the journalists are saying, well, the IRB could have stopped this if it had been adequate.
I can go on and on. May I tell one more story about —
CHAIRMAN PELLEGRINO: Yes.
DR. LEVINE: — a problem? How many of you know about the UCLA schizophrenia placebo study? How many of you have heard of it? Astonishing.
When the litigation was filed (it was a class action suit with two named plaintiffs who were damaged by receiving placebo instead of an antipsychotic medication), this was banner headlines in almost every American paper. This was high-level coverage on "60 Minutes," everywhere.
I've talked about this in front of groups of IRB people. They all have heard of it. Then I say, "How many of you know what the final outcome of this dispute was?" Nobody.
The final outcome: they had asked for $70 million, and UCLA settled with the plaintiffs for $199,000. Why? Because their lawyers advised them that it would cost them $200,000 to win the case. Okay?
All right. We don't need all of this fuss. We'll give you 199 if you go away.
What does this translate to? This translates to not one nickel for the plaintiffs. The lawyers got $199,000 and estimated that their actual out-of-pocket loss was $2 million.
The meaning of this is that these lawyers knew they didn't stand a chance if they argued this in front of a jury. But the point I'm trying to make is that when somebody makes an allegation about our human subjects protection system, you get "60 Minutes," you get headlines, and when the case is resolved, you can't find it even in the classified ads, because it's not even there.
And that's part of our problem.
CHAIRMAN PELLEGRINO: Thank you very much, Bob.
Well, I think we're right on time. So we'll regather now at 3:45.
(Whereupon, the foregoing matter went off the record at 3:30 p.m. and went back on the record at 3:50 p.m.)