FRIDAY, APRIL 21, 2006
Session 5: Children and Clinical Research
Tom L. Beauchamp, Ph.D., Kennedy Institute of Ethics and Department of Philosophy, Georgetown University, Washington, D.C.
DR. PELLEGRINO: Good morning. We've got some members missing but we'll just move ahead. When they hear our dulcet voices, they no doubt will come in, as long as we keep it dulcet. This morning our first speaker is Tom Beauchamp. Tom is a member of the Kennedy Institute of Ethics but more famously the co-author of the most popular book in biomedical ethics in the world, I guess, and of course, beyond that a Hume scholar as well. But I'm not going to go into all his credentials. As you know, that's available and we'd like to keep our introductions short and I know Tom appreciates that as well.
We've asked Tom, who was a member of the Belmont Report Committee, played a prominent role as you all know, to give us the present status of the Belmont Report, what's happened since then, with particular emphasis on children, our concern this morning. Tom, can you take it from there?
DR. BEAUCHAMP: Is that all right? Can you hear in the back, Marti? Good, okay. Many of you know that I have had a long association with Ed and Alfonso, a member of my department for how many years, Alfonso?
DR. GÓMEZ-LOBO: Thirty or so.
DR. BEAUCHAMP: Thirty, right, and also with Rebecca Dresser, my dear friend and co-author and I believe Rebecca will not be here today, as I understand it and I'm very sorry about that. I'd like to tell you that when Ed called me I was really very worried about what time he was going to call this meeting, because Ed just decided we'd find out when we were free during the day and Ed would say, "Well, I'm here at 5:00 o'clock, how would that be?" I was afraid he was going to ask me to be here at 5:00 o'clock this morning. So this is a very decent hour to get started.
I should also say in the spirit of disclosures of the sort we're supposed to make these days, that I have a long, long, long history with Bob Levine and that history is, in some respects so close that we might even sound like we're speaking from one voice. So it's not as though you get very different characters when you have us around at the same time.
All right, so let's get down to business. I'll stay fairly close to my text. I like to do that because otherwise I depart and stray and digress and I don't want to do that. This morning you've given me a time limit and I'll try to stay within it. I trust you being very good about time, you'll nudge me if I'm going on too long. Okay. Ed and Dan have asked me to discuss my quote "Views on the Belmont Report as the ethical basis for the regulatory framework governing clinical research in this country" and then to generalize beyond that circumstance to the quote "regulatory framework governing clinical research ethics" and then to include my own view of quote "the current state of the ethics of research with human subjects", and I understand that that means in terms of the historical continuity with what happened back in the 1970s. So I will try to stick reasonably closely to this assignment.
I will state what I believe really happened when the Belmont Report came into existence, how it captured some reasonably deep and significant themes in research ethics and how some of this might still be of help today. I will then, at the end, as I understand my assignment, offer you some recommendations based on what I have said. Recommendations are, of course, just that. They're recommendations. So let me talk first about the National Commission.
As is reasonably well-known, the National Commission was established in 1974 by the U.S. Congress with the charge to identify the ethical principles, that was in the charge, that should govern the conduct of research involving human subjects and to develop guidelines for the conduct of research. It was hoped that the guidelines would ensure that the basic ethical principles would become embedded in the research oversight system of the United States. Its more than 100 recommendations, the Commission's recommendations for reform, went directly to the desk of the Secretary of the then Department of Health, Education and Welfare and many were codified in federal regulations. You know them all well: 45 CFR 46.
The Belmont Report is not itself written in the style of federal regulations or anything even approximating it and was never so codified. The Belmont Report is a tiny philosophical document for the most part. Carol Levine offers the following sobering but I believe accurate statement of the historical context in which the National Commission deliberated. I quote her, "The Belmont Report reflected the history of the 30 years immediately preceding it. This emphasis is understandable given the signal event in the modern history of clinical research ethics, that is to say Nazi experimentation. American public opinion was shaped by the revelations of unethical experiments such as the Willowbrook Hepatitis B Studies, the Jewish Chronic Disease Hospital Studies and especially the Tuskegee Syphilis Study. Our basic approach to the ethical conduct of research and approval of investigational drugs was born in scandal and reared in protectionism. Perceived as vulnerable, either because of their membership in groups lacking social power or because of personal characteristics suggesting a lack of autonomy, individuals of that sort were the primary focus of the Commission's concern," close quote.
I want to go now — I want to go in a moment directly to the Belmont Report but I know that you are now thinking about children and I want to start with the Commission's Children's Report, still a document, I believe, well worth your reading, and a problem in that report that kept coming back over and over again during the National Commission's deliberations. This problem was never resolved adequately or fully or whatever the term should be, nor is it likely that you will resolve it here during your deliberations. It is a deep problem that will always be around.
But it captures, I think, one of the great questions of research ethics and one with which you must grapple because it's unavoidable. I will not summarize or interpret any document. I want to go to the moral heart of what I understand the debate to have been about and in this way to offer you an understanding of what the Belmont Report is about. I think this problem could easily become the center of debate for you as well. But historical understanding is important in its own right, a view that I have always shared with both Alfonso and Ed. Alfonso, as you may know, is a historian of philosophy and so am I, in our own way. I take historical analysis of that sort with very great seriousness and I think it's important to look at the history of bioethics in much the same way.
So I take fidelity to that history as my first obligation in what I'm about to say. In 1975/'76, the National Commission deliberated on the topic of research involving children and eventually produced a volume on the subject, one written and published well before, before, the Belmont Report. It is easily the most important of the Commission's 17 volumes for an understanding of why Belmont takes the shape that it does. Belmont was forged in the context of discussing research involving children. I know that because I wrote it. I know exactly how the context was forged. Without this background, I think you lose something.
By the way, I wrote almost all the Belmont Paper. Bob Levine wrote the section on boundaries in that paper so that you know how the authorship — because it was revised many, many times by the Commission. You know the process well. For about a year, no exaggeration here, the Commission found its deliberations grinding to a halt over a significant moral problem; how much danger or risk could justifiably be presented to children in so-called at that time non-therapeutic or non-beneficial research? On one side of the debate were the apparent defenders of research. They argued that severely restricting the involvement of children would bring pediatric research to a standstill, with, of course, grave consequences for children who would be sick in the future.
On the other side of the debate were the apparent defenders of research — I'm sorry, children. They argued that openly using children with exposure to significant risk when the intervention was not therapeutic or of direct therapeutic intent, was to use persons without their consent to the ends of science. Children would be, in short, exploited without any benefit flowing to them for their sacrifice. During this debate, most Commissioners found themselves somewhere in the middle of the two sides and reluctant to choose sides. So how to bring a very long body of deliberations to a conclusion?
Ultimately the defenders of research would be, so to speak, victorious although I'll be careful about that word in this debate, but with two dissenting votes among the 11 Commissioners and two dissents filed. The most difficult ethical issues for the Commission arose with respect to research presenting more than minimal risk, where there is no immediate prospect of direct benefit to the individual children involved. Some members of the Commission urged that the limit for such research remain at the level of minimal risk. Others pointed out that such a limit would almost certainly eliminate much research that has genuine scientific significance as well as the promise of substantial long-term benefit to children in general.
Ultimately the Commission decided with two members dissenting, that if three conditions are satisfied, research in this most difficult class of cases could be justified. First, the risk must be only a minor increment beyond minimal. Second, the procedures to be used must be reasonably similar to those with which prospective subjects have already had some experience. Third, the research must be likely to yield generalizable knowledge important for the understanding or amelioration of the specific disorder or condition.
Thus, foreseeable benefit in the future to an identifiable class, class, that was the key notion that was landed on, of children was held to justify a minor increment of risk to research subjects. Dr. Robert Cooke, then Chancellor of the Medical System in the State of Wisconsin, objected vigorously to these conclusions and, as I understood him, he objected to the moral premises underlying them as well. He wrote as follows in his formal dissent. Quote, "In the ethical justification of its recommendation, the Commission can invoke only the principle of utility. This, in itself, does not constitute any breach of ethics but it does indicate the perilous nature of the recommendation and the ethical uncertainty of the Commission." The lawyer Robert Turtle was even tougher on his fellow Commissioners. He says in his dissent that the Commission notes that it was, and I quote, "Impressed by reported examples of diagnostic, therapeutic and preventive measures but," Turtle says, quote, "that rationale provides no basis at all for segregating children into separate classes. The rationale is strictly utilitarian. I do not believe a strictly utilitarian rationale can provide adequate justification for a policy creating a doubly disadvantaged class of children, doubly disadvantaged because they're already sick and you — they're sick and then you add an additional risk on top of their sickness."
I agree with the minority's interpretation: the Commission had reasoned heavily by appeal to considerations of public utility, but this was certainly not the only evaluative commitment of the Commission and it would misunderstand the deliberations and its moral commitments to cast its conclusions in this narrow way. The historical record and transcripts clearly indicate that the National Commission was very concerned throughout its work that it had become too easy in the biomedical world to use utilitarian justifications of research. The Nazi experiments, Tuskegee and the Jewish Chronic Disease Hospital case, all the cases that were very much on the horizon and being examined at the time, had left a legacy of being driven by a very utilitarian view of social beneficence that justified using human subjects on grounds of benefit to the broader public.
That is, a major purpose of the report, of the Commission's Report, was to properly balance the rights and interests of subjects with those of science and society. Considerations of autonomy, justice and risk control were set out to limit utilitarian balancing, explicitly so, that's why these considerations are there, not to promote utilitarian balancing. However, it is doubtful that the question of how best to control utilitarian balancing was ever resolved within the Commission and, in truth, I do not believe that it has ever been resolved by anyone in bioethics. That is the theme, so to speak, of my talk today to the extent that it has a theme that cuts across all of its sections.
Now, to the Belmont Report. When I wrote most of the Belmont Report for the National Commission, what I have just explained to you was the framework of deep issues that was always before me. These concerns had already been laid out by the history of the Commission's deliberations, most clearly, I believe in the children's report. So it was easy for me to grasp the nature and importance of these questions, in other words, people have already done the work for me. I never forgot this context for even a single sentence when I was writing that report.
But before I explain this background further, I should state what the Belmont Report is known for and how to interpret its framework of basic principles. These principles are still today referred to in many contexts as the Belmont Principles and rightly so in my view. The Commission identified three general principles as underlying the conduct of research: respect for persons, beneficence and justice. These principles, though often quoted in bioethics, have often not been well understood because the historical context has been dropped in the invocation of the principles. I will cite a simple example which will be readily understandable to you and it will also, perhaps, motivate you to find the historical context I'm now drawing out a little better.
On page 19, Roman numeral 19, of your volume "Being Human," there is a brief discussion of these principles. And I think you wrote that, Leon, am I right or did you all write it together? I'm not quite sure how that worked.
DR. KASS: The report?
DR. BEAUCHAMP: Yeah, the introduction to the report. It doesn't matter. That was my understanding but it doesn't matter. The principles there are said to be, and I quote, "The major principles of professionalized bioethics." The principles are then explained. They are explicated wholly in terms of the patient, for example, as discussed here, patient autonomy, and also in terms of distributive justice in the health care system as well as in terms of public health. There is no mention of research, not even a hint of it. The oddity of this to me is that these principles were born and bred in a research context. That is where they came from.
The concerns were not about patients, per se, but about subjects. The concerns about distributive justice were entirely about the fairness of the selection of human subjects, not about distributive justice in the healthcare system. In other words, the embeddedness of these principles in the history of research in the 1970s is easy to lose and it has for the most part simply dropped away in the contemporary understanding of what these principles are. And yet, if you're going to understand the Belmont Report, that is the only way, I believe, that it can be understood.
And so if we're to understand the National Commission and what it brought to be codified in federal regulations, as well as the connection of this history to the development of research ethics over the course of the last 30 years, it is truly important to understand not only the principles but what gave birth to them and the original context of their birth. This is what I'm trying to accomplish now. There's a key organizing conception underlying the Commission's presentation of these principles and their use. The simple conception is this: the principle of respect for persons commands, in the research context, obtaining informed consent, be it first party or be it third party. The principle of beneficence commands that a risk/benefit assessment be done and that it not be a narrow one but a broad one.
Justice demands that you pay — the principle of justice demands that you pay very careful attention to the way in which subjects were selected so that that selection process be fair. That's in outline, the essence of what the National Commission was about in constructing these principles as the framework of what it had morally done. In this way each principle makes moral demands in a specific domain of responsibility for research. Here is what is noteworthy. Beneficence in the world of biomedical research is very different than beneficence in the context of patient care where it is confined almost exclusively to the medical welfare of the patient. In the research context, beneficence has to be understood and should be understood in terms of benefit to the public and the social imperative, so to speak, of biomedical research. Quite explicitly the principles of respect for persons and justice in the Belmont Report are not in any respect whatsoever to be allowed to be brought under the context of or to be subordinated to public beneficence. That was the architectonic of the document.
Those two principles, axioms and the framework, so to speak, are erected intentionally at the heart of the National Commission's moral framework to serve as moral constraints on beneficence, to be just as important as social beneficence, what some philosophers like to call the deontological constraints. The whole point of this framework is that neither of these principles is to be compromised or lost under public beneficence, medical research ambitions and the like. The Commission spent a huge amount of time on the place and purpose of informed consent.
In historical context, there wasn't much informed consent at the time or much of an understanding for the notion, for that matter. The Commission insisted that the purpose of consent provisions is not protection from risk. Indeed, that risk/benefit and their balance is wholly the wrong way to understand the place of informed consent.
I've heard it often said in both the United States and Europe that there has been an over-emphasis on autonomy in the frameworks spawned by the National Commission, sometimes expressed in Europe as an American obsession. I think this is false to the historical facts and I think it shows no insight into the history of bioethics. The National Commission was as concerned with personal dignity as it was with autonomy and this is one reason why its first principle is the principle of respect for persons. However, I don't want to carry on about this subject as at this point, it would take us too far astray. Back to the main theme.
We have seen repeatedly throughout the history of research involving human subjects that we cannot avoid coming to some form of delicate balancing of the human values that we attempt to collect and then distill in the form of principles expressed in the biomedical — in the Belmont Report. We then must contextualize, balance and specify their significance when we confront particular issues such as those that confront us with children. The Belmont Report was written in significant part to try to ensure that we appropriately balance appeals to social utility in the justification of research together with respect for persons and considerations of fairness.
If you lose that perspective, in my view, you lose almost everything of real and enduring importance about the history of the National Commission and much of the history of recent research ethics. The whole of the work of the National Commission, all 17 volumes, came to one fulcrum; how to balance and specify the moral considerations that it called, following the mandate of the U.S. Congress, principles, as those principles could be brought to bear in problems arising from research involving vulnerable groups, that is after all what we were up to: embryos, prisoners, the institutionalized mentally infirm, children and so on.
I want to say just one more thing about the Commission's framework, by the way, how am I doing on time, before ending this first part of my assignment? The principle of justice in Belmont requires fairness in the distribution of both the burdens and the benefits of research. The Commission insisted that this principle requires special levels of protection for vulnerable and disadvantaged parties but that attention to questions of distributive justice should always remain at the forefront of research ethics.
This principle demands that researchers first seek out and select persons best prepared to bear the burdens of research, for example, healthy adults, and that they not offer research only to groups who have been repeatedly targeted, for example, mentally retarded children. The Commission notes and laments that historically the burdens of research have been primarily placed and indeed heavily so on the economically disadvantaged, the very sick and the vulnerable, owing to their ready vulnerability and availability. Yet, the advantages of research are calculated to benefit all of us in society. The over-utilization of readily available, often compromised segments of the US population was a matter of the deepest moral concern to the Commission throughout its deliberations. The theme of justice and the proper selection of subjects was Belmont's way of saying that since medical research is a social enterprise for the public good, it must be accomplished in an inclusive and participatory way.
If participation in research is unwelcome and falls on a narrow spectrum of citizens because of their ready availability, then it is surely a questionable enterprise. Sadly, the Commission never got much beyond these themes of justice but what the Commission did is still a good start and I will return in a moment to these questions of justice and explain this a little further. I conclude this little historical discussion as follows. The Belmont Principles are in some respect present in every document the National Commission published. These came to be the backbone of federal law in the area. From this perspective, as Christine Grady has observed in her recent book and I quote her, "Probably the single most influential body in the United States involved with the protection of human research was the National Commission." I believe that she is right. Biased as my judgment may be, I think Christine is right about this. But whatever the influence and enduring legacy of Belmont and the Commission's other reports, it is not clear that scientists who today are involved in research with human subjects are really familiar with the Belmont principles or understand them or for that matter actually use them.
When the National Commission deliberated it seemed to many at the time that the general system in the United States of protecting human subjects was in need of serious repair, that research investigators were not educated about research ethics and that subjects were not adequately protected. To some observers, the system still seems today caught in a notably similar state of disrepair, I mean right now. I am not here today to make any judgments about this claim as I'm really unqualified to do so. It takes a different kind of knowledge. I say only this; it would be a shame if you, a great national deliberative body, could not find the time to figure out where we are today and to do as much as possible to remedy defects in the system.
With this general perspective now out in the open and before us, I will turn to my recommendations to you based on what little I do know of the system today, what might be important to you and the history that I have just sketched. So at the risk of reducing myself to a kibitzer that you, perhaps, think I am anyway, I have several things to recommend and in four areas.
First, the research oversight system; some in bioethics have argued that problems with research ethics are largely a matter of getting just right how to adjust the present oversight system for clinical research in the United States, which supposedly protects subjects against excessive risk and exploitation while demanding scientific rigor and promise. If the system fails, this view goes, then to avoid moral problems, we need to patch up or reform the system. This view puts tremendous weight on the oversight system that we have and it is this that I suggest you be cautious about. There is no evidence that research oversight systems have a particularly good grasp on various moral problems of research ethics. And in any event, I don't think the oversight system is the only way to process or to present these problems.
Moreover, in a globalized world, one needs to think globally irrespective of a particular country's system of oversight. There are many oversight systems. I do not, myself, assume anything about the adequacy of oversight systems in thinking about research ethics and I recommend that you not make any such assumptions either. Frankly, many people now believe that the system of protection we've erected in the United States stinks and is in need of deep reform. I take no position on that issue before you today. If you wish to get into this, I would applaud the effort. At the same time, I do not think that moral problems should be confined to this arena. Reforming the IRB system and the like could wholly consume your time for months and months. Try to think far beyond this to the underlying moral views and how to present them clearly.
Second area, problems of justice which I said earlier I would return to. The National Commission came into existence in the aftermath of public outrage and congressional uncertainty over the Tuskegee Syphilis experiments, the Willowbrook experiments and other apparently questionable uses of human subjects. The socio-economic deprivation of the subjects who had been enrolled in the Tuskegee experiments and involved at Willowbrook made them vulnerable to overt and unjustifiable forms of manipulation at the hands of health professionals.
What we know today about the recruitment and usage of human subjects is less than I would like us to know, at least I believe it to be. What do we know about that system, including how children are used in it? This is an empirical question that I hope you spend the time to investigate. I am not an empirical scientist but I suspect from a survey of the literature that less is known than people often believe. Since I am no expert, I say only this; consider what we know about economically disadvantaged persons in the United States today. Several issues are apparent. Are those persons, when used as subjects, vulnerable to morally objectionable forms of involvement in research? What kinds? Where, where does this occur? If so, what is it that renders them vulnerable? Is it morally permissible to trade off some level of risk in research for some benefit such as money or health care?
Answers to these questions are not readily available and they are difficult. It is also not clear which populations, if any, if any, should be classified as vulnerable. Discussion of issues about the vulnerability of research subjects, so-called vulnerable groups, have historically focused on embryos, fetuses, prisoners, children, psychiatric patients, the developmentally disabled, those with dementia and the like. Until recently, relatively little attention had been paid to populations of persons who possess a capacity to consent but whose consent to participation in research might, nonetheless, be compromised, invalid or unjust. Prisoners have been the most frequently mentioned class of such persons but there could be many other such populations, especially among the economically disadvantaged. It is known that persons of this description are involved in some research in North America though the extent of their use, I believe is not well understood. It is also known that persons of this description are used in other parts of the world, sometimes in so-called developing countries.
The scope of their use there is even less well-understood and reported and I believe that to be the case with children as well. I recommend that you look into such questions with some care but these are not the only issues of justice that you may wish to consider. One of the major problems of research is the potential injustice that occurs when a benefit is generated for the well-off through a contribution to research made largely by disadvantaged members of society including potentially many children.
Research, as currently conducted, places a minority of persons at risk in order that all, sometimes only the well-off, might benefit, but how the burdens of research involvement, if they are burdens, if they are burdens, a much under-discussed problem, are to — how those burdens are to be distributed across society has never been established in any authoritative code, document, policy or moral theory known to me.
Nothing in federal policy, to my knowledge, speaks to it. The notion of equity, commonly invoked in these discussions, is poorly analyzed and even more poorly implemented. As a result, it remains unclear and controversial when a practice becomes a morally objectionable marginal form of coopting the economically disadvantaged for the benefit of the privileged. I strongly recommend then, that you do your very best to understand this situation and provide remedies as it is possible for you to do so. Considerations of justice suggest that society should take a far greater interest in these individuals than it often does, not because they have become or might become research subjects but because of their status as economically disadvantaged and exploitable members of society at many levels. There are here many unresolved questions. For example, there are questions about the disproportionate use of the economically disadvantaged, emphasize disproportionate there.
It may be appropriate to have selection criteria so that the research enterprise does not so frequently use the economically disadvantaged or any vulnerable population for recruitment of subjects and that it set, as a goal, that the percentages of the economically disadvantaged subjects in research be restricted while ensuring that research subjects are drawn from suitably diverse populations. This was a great concern to the National Commission but frankly, we never adequately resolved that problem or, in my view, addressed it. I'll say something more about that in a minute.
This does not imply that it is always unfair to recruit even heavily from an economically disadvantaged population. I suggest only that certain patterns of recruitment and enrollment introduce problems of distributive justice. Diversity in a subject pool might protect against several conditions that might otherwise too easily be tolerated in the research process; for example, that studies would be conducted in dreary, inhospitable and even inhumane environments; that the same subjects would be repeatedly used in such studies; and that a dependency on money would set in so that subjects would become dependent on studies for their livelihood. A related problem is whether it is problematic for individuals to be repeatedly involved in multiple studies, a problem that presumably would be exacerbated as financial inducements are increased to be involved in those studies.
Let me conclude about this part about justice with this confession to you and that's what it is. I find it somewhat painful to admit but it needs to be said. The idea of justice in the Belmont Report and throughout the National Commission's work is deeply under-analyzed. The problems are not well-developed and the answers are thin. This is my own personal failure as well as the failing of the Commission and I take full responsibility for it.
What I am recommending to you is that you take care not to find yourself in the situation I now find myself before you today some years down the road. These are powerfully important issues and they are difficult ones.
The third area I want to say something about is informed consent. Among the great worries about research, at the very heart of what the National Commission did, are worries about inadequate or compromised consent. The typical concerns center on whether the consent is adequately informed or whether subjects have the ability to give an informed consent. I'm not concerned now with the ability to consent. That won't be where my comments go, but rather with the process used in the research context to obtain consent and the ways in which consent can be compromised.
The most frequently mentioned problems of consent in research are concealed data and inadequate discussion with trial participants about risks and inconveniences, a common complaint. Some practices of obtaining consent are clearly shams, where consent is invalid even if a written consent document exists. However, there are many more subtle and difficult problems of informed consent, including ways in which information is presented initially and then throughout a clinical trial. Without appropriate monitoring of consent, even an initially informed and legally valid consent can become uninformed and invalid, at least morally. I don't want to get here into the business of constructing an appropriate framework for obtaining and gaining consent in research, but I do want to appeal to you to keep practices of informed consent at the center of your investigations of research ethics. The research enterprise and many of its parts still must learn that informed consent, properly so-called, is very often not a single event of oral or written consent, but a supervised multi-staged arrangement of disclosure, dialogue, and permission-giving that takes place far beyond the point of adequate oversight of a protocol.
I am not convinced that this model is deeply embedded in the research system in the United States still today, or elsewhere, and I recommend that you look carefully at the problem, including issues of consent, or perhaps assent as we used to say in the National Commission, by children themselves. Research that proceeds without a voluntary and informed consent is simply an inexcusable disregard of the rights of subjects.
A final comment. I want to say something about the political context of contemporary bioethics, since you have to operate in some such context. I have a longstanding disagreement with some of even my best friends in bioethics regarding whether or not National Commissions, Advisory Committees, Presidential Councils and the like have been and/or must be political bodies. Crudely stated, the argument goes roughly as follows.
The political views of Commissions reflect their political appointment sources, which always have some political edge to them, whether it be the President or Congress or whatever. Whenever an issue turns in a political direction, it is reasonably predictable what the outcome will be. There might be something to this view, especially if it can be made subtle so that it is not cast too broadly. Discussion of it needs to be placed in historical context and made specific to particular issues, but this can be done by an historically sensitive scholar. However, I do not, myself, believe that this position is historically accurate or that things need to be this way, as some people have told me they are.
I believe deeply that such a characterization of the appointments to and the deliberations of the National Commission is false to the historical facts. I believe it is historical imagination getting in the way of real historical understanding. But now, pertinent to your context here, it is surely good advice that you protect yourself against the possibility of such an interpretation of your work, most especially in the context in which your charge is the protection of the interests of children and research subjects in the United States.
Few things could be more distasteful, even disgraceful, than a political arena for such discussions. I do not believe that Ed Pellegrino would tolerate it for a minute. Indeed, I do not believe that he would tolerate it in any arena. The problem here, of course, is that there is, I quite agree, a sort of political dimension to the work that you do. You can see that clearly emerging right now, for example, in discussions of the TGN1412 case of the six London hospital patients so tragically affected by an experiment gone wrong. This is real human tragedy, and what happened in London, spanning across four countries, has to be sorted through very carefully.
We do not yet know, I believe, what happened here. It is too early for judgment. But listen now to the cacophonous voices reaching for judgment and speaking to the press with their opinions. Look carefully at what has been said by so-called bioethicists and others about this case. Watch the political jockeying that is in the newspaper every day. It will pay you dividends to watch this and not to make similar mistakes.
What I am recommending to you is that you take every precaution not to find yourself in a comparable circumstance. I think that is not as easy as it sounds, although I'm personally pretty confident here about Ed Pellegrino's leadership skills. So what I've been trying to tell you is how it was at the National Commission and that's the way I deeply believe that it was. This was not a political body ever. It was an anguishing grinding struggle over the importance of scientific research and the imperative to protect people from undue risk, but the National Commission left many stones unturned.
There is a great deal for you to look into and much hangs on it. Do it well, indeed, do it excellently. I'm sorry, I couldn't avoid that last line, given so many Aristotle scholars around the table. Thank you, Ed.
DR. PELLEGRINO: Thank you very much, Tom. Dr. Beauchamp's talk is up to discussion and I have Dr. Gomez-Lobo, Alfonso.
DR. GÓMEZ-LOBO: Thank you very much, Tom. That was a very clear and affirmative exposition. Let me start with an information question. And it has to do with the following: I was wondering if we're so far away now from Tuskegee and Willowbrook, et cetera, that some of the historical assumptions may have changed. What I have in mind is this: yesterday, we heard basically from Dr. John Lantos the following, and I'm sorry I didn't have a chance to have him give us a broader view of this, but his claim was that nowadays participation in a research project may not only not be as risky as it was but may even be beneficial, and his claim was that in pediatric research, subjects, that seems to be the proper word, would fare better or did, in fact — do, in fact, fare better than people who might have qualified for the research and didn't. So there is this presumption that because of the care, because of the concentration of efforts, probably as a result of the Belmont Principles, the scene may have changed, and I would like to hear more about that as a point of information about what's going on in research.
DR. BEAUCHAMP: I'm sure things are better and I would like to think they're better because of documents like the Belmont Report. I would enter a discussion of this with some skepticism about the point of view being quite as sanguine as you just laid it out as being John's view.
I think actually we know very little about this. How much does anyone know about what actually goes on out there? We tend to pay attention to things like hospitals, academic research contexts, clinics and universities. One of the things that drops out of sight, and would be very easy to drop out of sight for you, for example, is corporate America. There is stunningly little discussion of what goes on in corporate America, but there may be — even in the case of a single CRO, there may be at any one time as many as 100,000 subjects arranged for by the — at least these are the figures that have been given to me.
There's a huge number of human subjects involved in the research enterprise in this country. How much do you actually know about that? I think the answer is relatively little. I surely hope that things have been made much better in the last 30 years, but this is an empirical question. I can't really answer it, because, I mean, I'm not an empirical scientist. I haven't empirically studied this, and that's the kind of question as I see it. I think, though, when you see a case like the TGN1412 study that I just mentioned, you see what can erupt at any time in the research context. Things can go very badly wrong. And you may even not be sure why they went so wrong in that context. So though I'm sure there are many, many studies where people are benefited by being in the studies and, of course, you can certainly imagine this to be true not only in the case of the United States but in many other countries as well, I'm sure there are such circumstances. I would ask — I mean, I would be very reluctant to frame it in that way.
I would ask, yeah, but what about the other circumstances?
DR. PELLEGRINO: Peter, Dr. Lawler.
DR. LAWLER: So I was — to follow up on what Alfonso said, I was thinking about the happy conclusion that Dr. Lantos gave us yesterday that if you're in a heavily regulated environment you're safer. So I will be safer later on this afternoon when I'm flying back to Atlanta than I am sitting here right now, because then I will be in a more heavily regulated environment. I mean, there's some truth to this. The studies do show, so to speak, that the safest place you can be in the world is on a plane flying, okay. But this seemed — I agree with your skepticism concerning this, finally — that this doesn't simply resolve the situation so that we can now put "the burden" of being a research subject in quotes, because it's not really a burden, because we should all want to be research subjects. It couldn't be this simple.
So if I understood the beginning of your presentation correctly, there is an irreducible conflict between what's best for the individual, what's best for the person, and what's best for scientific research. There is no way of coherently resolving this on the level of ethics so all you can do is balance it in such a way as to minimize the problem.
DR. BEAUCHAMP: Be constantly on guard for it, yes, that's right.
DR. LAWLER: Right, so in other words, there's no coherent ethical resolution to this problem given that both of these interests in isolation are valid, the interests of the individual person and the interest of science to progress which everyone benefits from.
DR. BEAUCHAMP: Yeah, if I can comment on that critically, I'm often astonished at the literature of bioethics and the attempt to construct what we sometimes call hierarchically structured principles or something else. I'm just astonished at the idea — this is not a knock at you — that beneficence should be above everything else, or that justice should be above everything else, or autonomy above everything else. That's not the world and never will be. I'm sorry.
DR. LAWLER: Right, so that seems very reasonable. So your concern about justice confuses me a little because —
DR. BEAUCHAMP: Confuses you?
DR. LAWLER: Confuses me. I think lurking in your presentation is this premise, that citizens have a duty to advance scientific research. That premise must be there because you were against giving incentives to people to participate in scientific research, because that then would have too high a proportion of the disadvantaged —
DR. BEAUCHAMP: I'm sorry, I didn't say either of those things actually, but go ahead.
DR. LAWLER: No, no, the — well, I think you did implicitly because diversity is a goal in terms of subjects. How exactly would you achieve this given that if you offer incentives, those who are disadvantaged would be more likely to accept the incentives?
DR. BEAUCHAMP: Well, I have a 30-page unpublished paper on the subject that I will send to you if you would like to see it. I think it is an extremely difficult problem. It is way more — there's vast literature on undue inducements and undue influence and that kind of stuff in bioethics, most of which I find so confusing or incomplete that I really don't know quite what to do with it. And I don't think it's something that we can really get into.
I'd be happy to e-mail with you and, you know, talk about it at some length. It really is a subtle and difficult problem, way more nuanced, I think, than most people have appreciated, but no, I don't hold either of the views that you're attributing to me, that you thought you saw in my talk, sorry.
DR. LAWLER: Well, then I'm relieved, thank you.
PROF. GEORGE: Dr. Beauchamp, I want to take you back to those comments that you made at the end on the — well, really I think you were warning against the vulgar politicization of advisory councils and ethics councils and so forth, and I thought your distinction between that and the sense of working in a political environment, which is inevitable and not of itself something that needs to be rejected in any way, was sound and subtle.
A council like ours, or any of these councils, our predecessors, other councils, does work in a larger environment where there are particular elements of the society who are paying particular attention. I think in the case of a council like this and our predecessors, one is particularly aware of the academic world, and particularly the bioethics academics who are interested in the work of the Council, commenting on the work of the Council and so forth, and also the media. I mean, we'll open the newspaper on a given day after a meeting of the Council and, you know, there might be a report, something in the Washington Post perhaps, something in the New York Times and so forth.
Now, plainly you're right, I don't think anybody would disagree that a council like ours really needs to be careful to avoid politicizing itself or being politicized in that vulgar and bad sense. My question is not about that. It's about the steps that I think you were implicitly urging us to take to avoid not only that vulgar politicization, but the appearance of vulgar politicization. Now that, too, sounds right to me, since the effective work of the Council depends to some extent not only on doing our job well, but being perceived by opinion shaping elements of the society as doing the job well.
But I would just ask for your reaction to the following thought; it's not as if there are neutral observers standing apart from a political environment and free of political convictions and concerns of their own who are then assessing the question whether the Council is politicized in the vulgar sense or is doing its job well or is doing its job less well than it should because of vulgar politicization. Rather there are political realities that are describable about the people who are paying attention and writing and seeking to shape opinion.
And my own experience with this Council is that in the end there's very little that you can do that will enable you to avoid an allegation of vulgar politicization where at least as in this case, you're talking about a council whose members are appointed by a President who holds very publicly and defends certain views which are just out of step with major constituencies who are paying attention to the Council and commenting on it in the public media, whether people who are themselves professional pundits and commentators or people who are in the academic world and particularly with the world of bioethics.
Now, I guess I would not suggest that that doesn't mean that the effort should be — that doesn't mean that the effort should not be made, and by the effort I don't mean simply the effort to avoid vulgar politicization, I think there's no doubt that I mean, just as a matter of morality and virtue, that should be avoided. I'm also saying, I guess, I don't think it means that the effort to avoid the appearance of vulgar politicization shouldn't be made, but I simply have my doubts about how much can be done to avoid that.
Now, I could go through and cite examples and give you chapter and verse and so forth, but I think you know what I'm talking about and I'm wondering if you just have any reaction to that thought and any concrete advice about whether there are — whether I'm wrong and perhaps there are some steps maybe things that we haven't thought about that could be taken on the appearance question. Have I made myself clear?
DR. BEAUCHAMP: Yeah, I guess I'd like a more richly drawn description of — it's almost as if you're saying at one point in your comments that you think it's difficult to transcend the political context, and I'm wondering, I guess, what the underlying interpretation is there and what you think about that. Do you think that the context is so political, and you have to have arrangements with such and such, the media, with let's say the White House or whatever, such that you're unable to transcend that political context — to use your language of neutral observers, was it — to become a kind of neutral observer of your own situation and/or the political context? Am I following you right? Is your view that strong?
PROF. GEORGE: I don't — perhaps I haven't made myself clear. Whatever we do is going to be observed and commented on by people who are interested in it.
DR. BEAUCHAMP: That's a given.
PROF. GEORGE: Okay, that's going to be a given. That is, itself, part of a larger political world, the elements of that commenting society.
DR. BEAUCHAMP: That's a given, too, but not a world that you have to, so to speak, conform to. You don't have to conform to its expectations and —
PROF. GEORGE: It's certainly true and we haven't. I think it's fair to say we haven't conformed to anybody's expectations. The Council has been, you know, quite publicly very divided on many, many issues. We have managed to achieve sufficient consensus to get out some reports that are valuable but in very many cases, what the reports are doing is not defending a single view, but putting forward the best arguments that we think are available for competing points of view and the reason we're doing that is that those points of view are represented on the Council.
But even those efforts seem not to be able to persuade significant commentators on the work of the Council that the Council is avoiding the kind of vulgar politicization that you're, you know, rightly warning any council against. And I think that's because people have political views and people have political agendas beyond the — beyond the Council and there's really very little we can do about that.
DR. BEAUCHAMP: There's nothing you can do about that. You're stuck with that problem. I mean, you just have to remain true to what you know are the right values, the right kind of information that you should assemble, be sure you have proper discourse and something that I personally think is very important is not to be in any way intimidated by or unduly influenced by any kind of political constituency and after that, what more could anybody really expect of you?
If people then draw the conclusions that they wish to draw because they have an agenda of some sort or an ideology or whatever, there's nothing you can do about it. They're free to say whatever they want to say.
PROF. GEORGE: Yeah, I think that's a good answer and I think it's in a way a very important answer as well, because it strikes me that there is a danger, your last comment really highlights it, that a concern to avoid the appearance of politicization in a certain sort of environment, might very well lead you to take steps that are out of line with your substantive mission to give the best possible advice on bioethical questions that we're facing. And so, while the concern for the appearance of doing a good job is important, you can't let that deflect you from actually doing a good job.
DR. BEAUCHAMP: That's right.
DR. PELLEGRINO: Dr. Eberstadt and then Dr. Meilaender.
DR. EBERSTADT: Professor Beauchamp, I realize that statistics and quantification may not be your metier but it's —
DR. BEAUCHAMP: That's for sure.
DR. EBERSTADT: — but it's kind of my thing so let me torture you just a little bit or avoid torture as the case may be. When we talk about minor increments beyond minimal in terms of acceptable risks for subjects, patients, is there, in practice any sort of literature that would indicate what is taken as acceptable and what is taken as not acceptable risks in practice? A subsidiary question there. I thought we had a very illuminating and valuable discussion yesterday about the differences between harms and wrongs.
I can see how harm is amenable to quantification. I'm not sure that wrong is amenable to quantification in the same sort of way. Any thoughts about that? And finally, just as a general observation, the mortality levels for children in most developed societies are vastly lower than mortality levels for adults. Does that suggest to people who are looking at the question of minimal acceptable risk for children that the standard for children in absolute terms should be a much higher threshold than for adults?
DR. BEAUCHAMP: A three-part question, I guess. On minor increase beyond minimal, I would be embarrassed to try to answer your question when one of the world's leading authorities is sitting in the front row back there. He knows more about this than anybody that I know. But I will tell you this: it was the view of the National Commission that it was not our job to try to fill in exactly, with great precision, what that means. That was the job of the IRB. That was the whole point of coming up with the conception in the first place, which is to say, if you want to put it, you know, rather negatively, nobody really knows what that means. It has to be decided at the IRB level in the research context.
On harms and wrongs, it's always been a very important distinction to me. It's a subtle distinction. I think it's beautifully analyzed in the work of Joel Feinberg, if you want to know sort of where I am on that issue and I guess I would agree with what you said about that.
And the other was about threshold of —
DR. EBERSTADT: Minor increment beyond minimal, acceptable, minor increment beyond — should there be a stricter standard for children?
DR. BEAUCHAMP: The way the National Commission went at that was to say we ought to use healthy adult populations first. Now, is that the right view? I don't know. I don't know that that is always the right view. So to harden that principle may be the wrong way to go. Still, you've got to see children in most contexts as more vulnerable than a healthy adult capable of consent, knowledgeable about the situation, and so I don't really know what much more to say about it than that.
DR. PELLEGRINO: Gil is next.
DR. LAWLER: Although his question was literally whether, because children are objectively less vulnerable, the standard should be higher. They're more likely to die just hanging around —
DR. BEAUCHAMP: That doesn't mean they're less vulnerable, yeah.
DR. PELLEGRINO: Gil, Dr. Meilaender.
PROF. MEILAENDER: I won't ask anything statistical, so —
DR. BEAUCHAMP: I wouldn't expect that of you.
PROF. MEILAENDER: — so we're okay on that.
DR. BEAUCHAMP: Maybe something Lutheran?
PROF. MEILAENDER: Not for today, I think. We'll see what your answer is, you know, and where it takes us.
Two questions. The first is just a kind of — it may just be a personal curiosity for me but when you were talking about the importance of seeing the Belmont Report embedded historically in a context where the issue was research and so forth, I mean, I don't dispute that but I just have the feeling that there was a sub-text that I was missing in a way. I mean, after all, there is that well-known book "Principles of Biomedical Ethics" that has some kind of historical relation to the Belmont Report, which —
DR. BEAUCHAMP: But nothing like what people usually think.
PROF. MEILAENDER: Well, but which isn't confined to research ethics, right, and a somewhat similar structure morphs out in a way to deal more broadly. So I mean, if I was missing something, if there was something really important about locating the Belmont Report in that historical context, I'd just appreciate it if you could say a bit more about why you emphasized that so much. That's the first thing.
The other is sort of less just a kind of personal inability to get something: the notion of this sort of unresolvable and irreducible tension between respect for the individual and concern for the well-being of society, and the just sort of inability to get over that. There would be sort of two things I'd say about that. One, when you were talking about it in your talk, you used at one point, I wrote it down, in terms of these several different principles and some constraining others, the phrase "balance and specify the moral considerations." I would have thought that balancing and specifying are not the same thing in the way one goes about sorting out conflicting principles. I never — I never know what balancing means, actually. I have some sort of idea what, you know, Henry Richardson means by specification and so forth.
So if you could illuminate the balance metaphor a little bit, I'd like that. And the other thing is, I don't know why we should necessarily think this tension is irreducible and unresolvable. It depends a little bit on what you think a person is and what you think a society is and what the relation between the two might be. And on some views of the relation between the person and society, there wouldn't be — the tension needn't be left to stand. Certain views might clearly, you know, give primacy to the claims of the individual and the well-being of the society would just have to suffer a bit.
Other views would think of the person as more a part of a whole, whose good had to be taken up in that. So the tension is only unresolvable, I think, if one's just kind of willing not to push farther on some, what I admit are, sort of quasi-metaphysical kinds of questions, but if we push on those questions, we might well get some kind of answer to it, or so it seems to me.
And I just wonder if you think I'm mistaken about that or if I'm right but you'd just prefer not to push on those questions or sort of how you come at that.
DR. BEAUCHAMP: Well, that's a heck of a set of questions. It would take a lifetime to get to all of those adequately, but let me say something about it. The main point that I wish to make about the three principles is that in order to understand those principles and what their meaning is, they have to be understood in their historical context. They cannot be understood in any other way, I believe, even though they have been presented as a framework of principles. Understanding that framework of principles requires a historical understanding. Of course, you could take respect for persons and do what a lot of people do, which is to give it, say, a Kantian spin or a Kantian interpretation. This is most certainly what the National Commission did not do. In fact, I tried that with the National Commission. I tried to explicate Kant and build it into the document. They said, "We don't want that. We want plain talk so that the entire American public understands it." So really the larger point that I was making is that that framework is a historical framework.
If you don't understand it in its origins, you just abstract it out and you say, "Well, here's what respect for persons means," and then you go and you read Ed Pellegrino and you say, "Well, this is what beneficence means," and then you go and read Jack Rawls or something like that about "This is what justice means." You just won't get it. Right, you don't understand what's going on. Moreover, there's been a tremendous amount of confusion of the very sort that you allude to, which is between the principles of biomedical ethics and the principles that are present in the Belmont Report. People are so confused that what they've done is to interlock the frameworks. Of course, they don't interlock; they're incoherent, actually. So those are the kinds of concerns that were in the back of my mind. I've taken a long time to sort of unravel that.
On balancing and specifying, it's a hard question. I've struggled with this. I've argued with Henry Richardson about it. I have argued with David DeGrazia about it. I have argued with Jim Childress about it. It is a difficult problem. I agree with you. I can tell you Jim Childress agrees strongly with what you were saying, but it's not easy to get at, and the reason it's not easy to get at is because you get more and more into the theory of what happens with specification and it looks more and more like you're balancing. That's the logic of the problem.
In simple terms, I think of it in this way: I think balancing occurs in judgments that are made in specific contexts, and it is something that we have to do when we don't have time to specify, when we don't have time to think about the consequences and the implications of the kind of thing that goes on in specification. We have to make judgments and we do the best that we can to balance the different considerations that are present in that context.
Specifying, I think of largely in a policy context. We are able to sit back and think and figure out well, what if this and what if that and what if the other thing and try to articulate what the policy would be under those different contingencies. But it's — I think it's a deep problem and I most certainly don't think that I've resolved all the questions.
Now, metaphysical questions; gee, I'm really reluctant to say anything with Alfonso in the same room with me. It might come as a surprise to you to know that most of my own teaching career has dealt with metaphysical questions, these among them, but many other metaphysical questions as well, of the sort that, say, Alfonso and I would deeply share. So I have the deepest kind of respect for pushing in the direction of metaphysical questions. At the same time, the most influential philosopher on me, and I don't know about Alfonso, but the most influential philosopher on me is David Hume. And Hume asks you to approach metaphysical questions with a certain — a certain carefully edged skepticism, and I do. I am very skeptical about going directly to the concept that you mentioned, which is the concept of a person. I'm very skeptical that that concept can be explicated, theorized about, and presented in a way that resolves, I think, exactly the kind of questions that you're talking about.
Now, there may be other ways to go at those questions, but I am very skeptical that a metaphysics of persons will do it. That's another long paper as to, you know, what the skepticism is and how it would be manifest, but that's the long and the short, I think, of the answer. I don't discourage anybody ever from engaging in metaphysical thought. Just understand that it may have limits. It may have deep and sharp limits and not be able to get you, especially in the area of ethics, to the conclusions that you want to reach.
Is that okay, Alfonso, or am I making a fool of myself?
DR. GÓMEZ-LOBO: I think the key metaphysical issue is the issue of community, or rather a common good, rather than persons, in Gil's question, if I understood it correctly, right?
PROF. MEILAENDER: I would have said the relation between the human being and the community. I mean, on the type of person language here, actually I should prefer human being language, but one might find arguments suggesting — actually, we could find some in Luther, by the way, but I won't trot them out right now — suggesting that there's a sense in which the human being transcends the community and cannot be understood, to the whole extent of his or her being, to be a part of that community. That kind of approach might begin to suggest a certain way to resolve the tension and not leave it unresolvable, for instance. I mean, it's just an example, so I would have said that what I had in mind was, yes, it is community, but it is the relation between the human being and the community and how to understand that, but it does seem to me that different ways of understanding that might lead to different resolutions of this tension rather than just leaving it to stand alone. That doesn't mean it will persuade everyone, but I never worried about that.
DR. PELLEGRINO: Dr. Hurlbut and then Dr. Kass.
DR. HURLBUT: Why don't you go first?
DR. KASS: Just a couple of comments and then a question, just sort of try again on something that was in Gil's — part of Gil's question. I appreciate the historical context of Belmont Principles. That's a very valuable addition to our consideration. I also share — sympathize with your frustration about how a careful piece of work has been misunderstood, confused by others and abused. It's an occupational hazard.
DR. BEAUCHAMP: We all have our sensitivities.
DR. KASS: And you can fill in the blanks as to why I and members of this Council might have some sympathy for that concern. We are in part responsible for the way in which we are misunderstood and in the field of bioethics generally the Belmont Principles, not by us primarily but by many people, have been taken, expanded and used beyond the research context and there are also people who try to push almost all kinds of ethical questions into the research analogy. So — but I think — I mean, on balance, I think this is a very valuable revisiting and clarification of what those principles are about.
Second, I appreciate that the principles are intended not as a licensing of a simple utilitarian approach to research with human subjects but in fact, to introduce non-utilitarian considerations. I appreciate that but I think part of the reason why the charge that this is somehow in the service of utilitarian, of a utilitarian consideration in the end, comes through the balancing — through the metaphor of the balance. The metaphor of the balance presupposes commensurables, presupposes some kind of measure, presupposes an adjudication amongst competing goods, but the — you don't balance the right against the good on the same scale if there really is such a thing as the right.
And there are people who think that there are certain kinds of inviolable — that there are certain kinds of conduct that simply should not be done and cannot be balanced, this would be an argument, cannot be balanced by certain considerations of social good or what have you. So I guess I wonder — I wonder really about the sort of underpinning of this notion of the need to balance and whether that doesn't finally contribute to a view which will eventually lean harder and harder against those kinds of restraints in the name of the good and in which you don't sort of say no matter how much good comes from this, this is — you're somehow traducing something deeply important about human beings.
I didn't put that very well, Tom, but I think you get the gist of what I'm talking about.
DR. BEAUCHAMP: Leon, you always put things well. I think that the essence of that last part which might be the most important part is about what I've referenced in the language of the paper as deontological constraints.
DR. HURLBUT: Right.
DR. BEAUCHAMP: This is maybe, maybe the most important discussion and philosophy of the 20th and 21st Century as ethical theory unfolded. And I would be very, very reluctant to try to say anything about how you resolve it. I do not think it is a simple question and no one has ever convinced me that there are clear deontological constraints that one must follow wherever they take you irrespective of the consequences and that's where the balancing will come back in. As the consequences are upped and frankly, as what might be at stake under the rubric of these deontological constraints are reduced, there may come a point where you say, "Well, even though ordinarily it would be very wrong to do this, in this case I think it is the right thing to do."
Now, I know these are very abstract notions and philosophers divide heavily over this. I do think the notion of a deontological constraint is a terribly important one and you could probably tell that that's the way I'm representing how I understood the National Commission as putting forward views about justice and respect for persons, as precisely as deontological constraints. I don't know how to push harder on that without getting deeply into Thomas Nagel's theory of this and that sort of thing, which I don't think would be the right way for us to go.
DR. KASS: Could I just add a small thought? Let's assume that there are constraints which are not always but for the most part, and that you might imagine yourself saying that, you know, the rule against adultery is more or less absolute but if it's necessary to provide an heir to Queen Elizabeth I to avoid civil war, we make an exception, et cetera.
But would you then be thinking of what you're doing under the metaphor of balancing? I mean, are you sort of creating a — or is the very notion of balance a kind of homogenization of the various considerations such that you can get them on —
DR. BEAUCHAMP: Yes, I would think what you're doing is balancing. The problem there is you have to articulate some kind of an account of what it is to balance and what it is that are the balancing considerations in these cases. Yes, I think that's in part what you're talking about exceptive case kind of circumstances or counter-examples or something like that and playing that game, which I personally think is a very important game or strategy or whatever in moral theory, I believe it inherently involves you in balancing considerations all along the way. You can't ultimately escape them.
PROF. GEORGE: Could I follow up very briefly on this, just to get clear on the distinction between specification and balancing? I can understand, we could argue it every step of the way but I can make sense of the idea that it's specification at work when we move, for example, from the principle of respect for persons to the norm against direct killing of the innocent. It also strikes me as pretty clear that what we're doing is specifying when we move from the norm against direct killing of the innocent to the principle of noncombatant immunity even in justified wars.
Then if we get into a debate about exceptions to the principle of noncombatant immunity, I can understand how that would be a debate between people who say, "Well, you can't balance a principle like noncombatant immunity off against other considerations," you know, even prevailing against a wicked enemy and so forth and so on, and how other people would argue, no, no, there you do have to balance. I mean, noncombatant immunity is for the most part and usually but there comes a point at which.
But I thought that at one point you were saying that the idea of specification itself becomes problematic, so that in moving from say — now these are my examples, not yours obviously, from respect for persons to no direct killing of the innocent, or from no direct killing of the innocent to noncombatant immunity, what looks like specification does, itself, even there involve balancing.
DR. BEAUCHAMP: That's right. It may be very hard for you to articulate — I mean, specification is progressive specification in the way in which I believe you are suggesting. So as you progressively specify, render the principle or the norm more and more specific, it may be impossible to do that without bringing balancing considerations into the focus of what you're doing, at the very base of what you're doing. That's what I was suggesting.
PROF. GEORGE: Okay, I mean, I understand the position. I would want to argue against it. It would be an interesting argument there.
DR. BEAUCHAMP: Yeah, these are hardly settled matters.
DR. PELLEGRINO: Dr. Hurlbut?
DR. HURLBUT: Several times you mentioned the questions and problems associated with international considerations and I just wanted to generalize this discussion a little bit to a few practical and theoretical problems just to get your insight from your experience, a reflection on these kinds of problems. The Belmont Report, by its own admission and by your writings, provides general guiding ideals and sort of a general analytic framework for reflection, and it seems to me that at best, that will work well within a context where there's a certain cultural center of gravity to draw your thinking down to consensus.
But now we're facing the strange problem of what you might call the outsourcing of ethics, a troubling dilemma of not just US scientists being lured overseas to do work that is proscribed in the United States, but also the hints of corporate commercial interest in outsourcing clinical trials and so forth. And I think there's a really big problem brewing here and the problem is exacerbated by our own inability to find a tangible kind of structure or skeleton to our own ethical thinking.
I know it's a vague question but I'd just like your reflections on this. It seems to me the three major zones where we are going to have to face up to some difficult dilemmas and the first one is the one that's the least troubling and the most troubling to me, it's the question of when you have a possibility of doing a clinical study in a population that is so wracked by disease and so under-served that you may be bringing the only medical care they have to the community, the only supervision. You know what I'm talking about.
DR. BEAUCHAMP: Yes.
DR. HURLBUT: So that's one thing. And especially with regard to children, you can imagine that you could do both a lot of good and a lot of harm in that context but then there are the practical problems associated with competitive commercial advantages, but overarching all of this is the more metaphysical concern with the question of pluralism and I just would like your — I know that's another very big set of questions but can you give us some reflections on those concerns?
DR. BEAUCHAMP: Tell me what worries you the most about them. Is it competitive commercial concerns or what worries you?
DR. HURLBUT: Well, I'm not accusing anybody of anything in saying this —
DR. BEAUCHAMP: No, I understand that.
DR. HURLBUT: — but I'm worried that if one country has different standards for what's acceptable practice and research ethics, then they can test drugs more quickly and more efficiently on poor human subjects, unhappy situations. There are rumors of such things and I'm not saying they're true, but if it's true that drugs are being tested on prisoners or in orphanages, for example, that would provide a very troubling foundation. It would be a very efficient foundation for later testing in above-board transparent ways and it might increase the efficiency of your success in filtering drugs, but it wouldn't be good for human dignity and it wouldn't be good for the individuals who are subjected to it.
And likewise, I can imagine situations where scientists go places where they're allowed to do things and, I mean, we're now arguing in this country over the stem cell issue and the 14-day limit on the use of embryos, but there might very well, I'm not saying any of our scientists want to do this, I'm not making any accusations, but there might very well be some people somewhere who want to use embryos later than 14 days if it could be done, and it probably can be done technically some time. So, I mean, from a purely scientific standpoint, we would both want to know the science of embryogenesis after 14 days, plus it might turn out to be an efficient way to get cells, more advanced cells, tissues and organs for therapeutic use.
So I just — I know this is a very vague question, but it worries me a great deal that we're now becoming a global civilization. We're gravitating toward a globally standard material culture, but we don't seem to be able to find a moral culture of similar — of a similar uniformity.
DR. BEAUCHAMP: I guess I think what you are doing is rightly placing on your own shoulders a certain responsibility for creating such a moral environment because major public policy groups or bodies such as your own really do have that responsibility. I think it's built into the fabric of what you do. You started, I think, with the international domain. Let me first say that Bob Levine is one of the most distinguished figures in the world in international research ethics and again, I'm sort of reluctant to move into territory where he knows so much more than I do and he's sitting in the front row here.
But I've watched some of these things and been to some of these places and I've come increasingly to the view that there are a lot of American biases that — American biases, that's not — some are American biases and some are biases that come from institutions and so on that we should be careful about.
For example, a lot of people seem to reflect in conversations I've had with them the view that, well, of course in the United States we're very careful about human subjects and we have such a great health care system that will back up anything that goes wrong, and so on and so on. And in these other countries, well, of course, they don't — I'm not so convinced about that, that that's just even factually the case, that the healthcare systems in many, many, many countries in which the research is done are quite as bad, and in fact, they might be even better in some cases than the opportunities that we can provide in this country.
So when you go to disadvantaged populations, there are disadvantaged populations in virtually almost every country of the world, I wouldn't leave the United States out. I wouldn't be so worried about say a population in China or India or something like that thinking that somehow inherently they're more disadvantaged than populations here. I worry about them in all of these places and what happens.
Another bias that has disturbed me over the years and might deserve a little bit of reflection on your part does come through your language of competitive commercial, I believe was the wording that you used. My own view, for what it's worth, I just give it to you for what it's worth, is that people who come from universities have a deep bias in thinking that, of course, their work is completely free of bias and so on when they do research and so on. And then there are these commercial interests where things get out of hand. Is this sort of, should I say hubris, Leon, would that be the right word to use in this context? It's certainly a kind of presumption here that I've personally not found to be borne out.
I have to tell you, I was not raised in an academic home or environment. I was the son of a CEO of a healthcare corporation, a non-profit one, but that was the way in which I was raised and I came to deeply respect that culture. Now, things do get out of control, but also, I think, in our academic work and the research that we do, things can spiral out of control and we can misuse populations and so on. So I would be careful in understanding the context, careful in what your expectations are for certain groups and the like, but I don't have any cosmic solutions to these problems. They are not atypical, just a lot more complicated, I think, than people usually think they are.
I certainly do think that your reservations are well-taken. I guess I would just try to broaden them, contextualize them a little bit.
DR. HURLBUT: I certainly think that the issue of commercial competition has its parallel at the level of individual ambition, to which academics as well as other human beings are vulnerable.
DR. BEAUCHAMP: Exactly.
DR. HURLBUT: But still, and I wasn't even accusing commercial interests from our country to export their problems or outsource their ethics, I was just saying that regardless of who the originating source is for the decisions to do research of a certain type on — say on children, you can see how unless you — when you dispense with ethical concerns, you can do — you probably could do science and biomedical science particularly more quickly and efficiently but that's — the whole point of ethics is to put into the equation the more fundamental concern that to my mind has primacy actually, the protection of —
DR. BEAUCHAMP: I couldn't agree with you more. Sometimes that gets a little out of control. We think ethics always has primacy over everything else. I'm not so convinced that's true, but, yes, there is kind of a basic primacy to —
DR. HURLBUT: Well, if you see ethics in a broader context rather than just the frosting on the cake, but the very substance of the cake itself, the thing that knits the pieces together, the coherence of life, then it's — then it does trump everything else because without it, you have nothing.
DR. PELLEGRINO: Dr. Foster?
DR. FOSTER: I'll be very brief, I know we're at break time. I'm concerned about two things, Tom, about the practical things that you've talked about. It certainly is clear. I've read the Belmont Report and all of those things and I get different insights from what you've said and the problems; you've hinted that there's sort of a loss of understanding about many of these problems of human research.
And I should say, I've never done human research. I mean, the most sophisticated animal I ever worked with was you know, a mouse or a rat, you know, or something, so I don't do this kind of research. But there is a huge increase in — with the NIH emphasis on translational research. You're going to see more and more human research coming along which means more and more problems we have to say. And we don't — in the first place about the risk; in 2004 there were 550,000 papers published in the 4,000 journals at the National Library of Medicine archives. Okay, that's one paper a minute, okay, just to keep up with risk.
I saw something this week that in all my life I have never seen and there are three cases in the literature of something that's such a common disease that every physician knows about it and it comes up, I mean, you can't predict always what risks are going to happen even with the Internet. So people who work on these things have a huge burden to try to comprehend — let's say only one out of 1,000 of those papers are important, that's still 500 major papers you have to learn about and so forth.
So there's not much time to spend on learning more about deep ethics and so forth. I mean, I don't understand a lot of the dialogue that goes between — you know, about specification and all these things, you know, and I don't think the thousands of people who are actually doing the research and the IRBs on which they serve have the slightest inclination to study in deep detail the sort of things that would be very important to you and to other ethicists as a discipline which should be but in terms of just running a clinical trial, I mean, it's not possible to do the sort of informed consent that you have described that's ongoing, reaching in great detail. Many times people can't understand the simplest thing.
If you've ever tried to talk to a very sophisticated professor of medicine and get them to take a drug that they need desperately, like a statin or something like that, and they've read the PDR and they've read every possible negative thing that can happen and you can't get them to, you know, give consent to do that. How am I going to do that to somebody who's got a grade school education? You know, I mean, so there's the problem in — there's a problem in carrying out this research is that there's not an inclination or the time for either all the knowledge — I mean, you have to try to do it. I mean, one of the things I know a lot about is diabetes.
I can't keep up with just the papers on diabetes mellitus, you know, that go on. So the question I want to ask you is, isn't it sort of unreasonable to say, as we've heard in some of these things, that we need to deepen, broaden the time and intellectual expense in trying to do things more perfectly and more in accord with what the wonderful Belmont Report meant? I mean, is that really possible? I can tell you very frankly as a chairman of medicine who deals with people who do all these things all the time, I think that's absolutely impossible and unrealistic, and one of the thrusts we heard yesterday, we need to try to diminish the safeguards that are coming up with the IRBs in order to get a better assessment of what the true problems are. That's — and I don't want you to answer much about that except — because the time is up, but that's what really worries me. It's in seeing people who are actually doing all these things, and on occasion you get heinous mistakes and some of them are deliberate, you know, that people have done wrong things, but I don't know how — I don't think we're going to have a discussion in IRBs like we've had here this morning. I just don't think there's a chance in the world that that will happen.
DR. BEAUCHAMP: Dan, I don't have much to say about that. I agree with you wholly and completely. You are a — you're still a Donald Seldin Professor of Medicine; is that right?
DR. FOSTER: Sixteen years and I've passed that on.
DR. BEAUCHAMP: You've passed that on, okay. Well, let me say there are only a few Donald Seldins. Donald Seldin, for those of you who don't know him, is one of the great Professors of Medicine of our time, Dan's predecessor, and he has the capacity to consume all of this literature and process it, but he is just about the only person I have ever known in my life who can do so. I agree with you wholly and completely, and therefore, the system has to find some accommodations and it has to be streamlined in certain ways.
One of the things that we looked at when I was on the Institute of Medicine Committee looking at responsible research was how do you take what is clearly a broken-backed system, the IRB system, and try to reduce the workload, to straighten it out so it can do its work properly. That in itself is just a huge, huge chore, and so what you've got here is you've very nicely laid out the tip of an iceberg that we have to deal with.
DR. PELLEGRINO: Dr. George.
DR. GÓMEZ-LOBO: Can I make a — sorry, thank you. Just a quick remark, but isn't that precisely the reason to have something like distribution of labor?
DR. BEAUCHAMP: Absolutely, absolutely.
DR. GÓMEZ-LOBO: Because of course, the clinical physician, of course, can't do all of this but there has to be someone thinking through these problems, it seems to me.
DR. BEAUCHAMP: No, no, let me give you an example. One of the things that we talk about a lot at the Institute of Medicine was the problem of conflict of interest by contrast to the work of IRBs. Quite a few institutions throw several kinds of conflict of interest onto the IRB. The IRB should not deal with conflict of interest. That should all be handled, done and over with before they even begin to process things. In other words, you should have a separate conflict of interest committee.
Now, I personally think that conflict of interest is a deeply under-analyzed notion, not well attended to, particularly non-potential conflicts of interest. That, then, becomes itself — can become itself a major preoccupation. Most certainly IRBs in my view should be shielded from it. So yeah, of course, I'm going to agree with you.
DR. PELLEGRINO: Dr. George?
PROF. GEORGE: Yes, I've been —
DR. PELLEGRINO: Last comment.
PROF. GEORGE: Yeah, and thank you for the third opportunity. Sorry to take us beyond the limit but just I'll try to do this quickly because Dr. Beauchamp's comments as I've been reflecting on them brought a question to my mind about how we think about the common good and how we think about what you've been referring to as deontological constraints.
In professional bioethics, I think a lot of people conceive of the common good or social benefit in utilitarian terms. So they think that there's a common good and they've got a conception of the common good and that conception is a utilitarian conception. Now, for bioethicists for whom that is just part of a larger or more comprehensive utilitarian approach to ethics and to life, that's the end of the story. But there are other people in professional bioethics who aren't satisfied with the utilitarian story as a comprehensive approach to ethics and, therefore, they see a need for constraints on thinking of things in utilitarian terms and ordinarily those constraints are matters of protecting individuals who could be sacrificed in a way that would be unethical for the sake of the common good where the common good is conceived in utilitarian terms and where they're prepared to conceive the common good in utilitarian terms.
And then you have a debate between utilitarians and non-utilitarians. But of course, there are alternatives to thinking of the common good in utilitarian terms. I mean, there could be, for example, a purely deontological theory. I think it would be problematic. There's no doubt about that, it would be problematic but there could be such a way to try to solve the problem that you've identified as the problem that nobody has solved and probably nobody will solve, thinking about individual interests and the common good.
But in addition to that option, there is the kind of conception of the common good, I would say a non-utilitarian conception of the common good, that's part of a longer tradition, you know, going well back before the great founders of utilitarianism, and which still has very articulate exponents today, I mean, some in this room like Gil and Leon and Alfonso. I mean, the way they think, and I, myself, try to think, about the common good is to begin with a non-utilitarian conception of it and on such a conception, rightly or wrongly, there may be all sorts of problems with this, but on that sort of conception, what you are talking about and what a lot of professional bioethicists talk about in terms of deontological constraints on the pursuit of the common good are really conceived as aspects of the common good. So there's not a contrasting of individual interests or rights with the common good, so we would never think with Ronald Dworkin in terms of rights being trumps on the pursuit of the common good conceived in utilitarian terms.
But rather we would see those individual interests, there's... one might even say individual rights, as aspects of the common good itself. So that there's an attempt, at least at seeing individual interests and the social good in more of a harmony and when we would restrain ourselves from certain courses of action that might otherwise be regarded as ways of advancing the common good, we wouldn't be, even on our own understanding, sacrificing the common good for the sake of deontological constraints or the individual or individual rights. We would be pursuing the common good, because the common good itself included a concern for the dignity and worth and rights and welfare of the individual.
DR. BEAUCHAMP: I think I agree with you completely. Let me only add that sometimes you find your situation is one of strange bedfellows. And it should come as no surprise here, I believe, that that's so. My own greatest teacher in philosophy, including in matters of this sort, was David Hume, and Hume's fundamental commitment was to the common good. But the moral theory doesn't really turn on the common good. The moral theory, unlike what many people have said about him, turns actually on the theory of the virtues. And so what's buttressing and supporting the over-arching view that ethics originates only in the bosom of a conception of the common good as fostered in a society is the importance of virtue.
Now, the funny thing there is so many people have interpreted Hume as a utilitarian. I personally think that's a difficult interpretation to get out of him, but what you're saying could easily be explicated along the lines of Hume's view. What you're saying could easily also be explicated along the lines of John Stuart Mill's view in many ways. One possible way to understand Mill is his doing exactly what you just laid out. Now what a lot of people want to say, "Oh, yes, but he contradicts himself along the way. He screws up."
Well, maybe he did. Maybe he did, but I think he has in his philosophy in mind exactly the objectives that you have in mind and by the way, sometimes when I hear Edmund talk about beneficence and the common good, it sounds an awful lot like Mill to me, talk about strange bedfellows. But that's at any rate — I mean, we need to frame these things in terms of what people really held and what these frameworks are in a broad way committed to.
PROF. GEORGE: Yeah, I take your point actually about Mill. I remember reading years ago when I was doing my own doctoral work, a very interesting book by John Gray, one of John Gray's early works trying to harmonize the Mill of "On Liberty" with the Mill of "Utilitarianism."
DR. BEAUCHAMP: Yeah, that's exactly what you have to do, yes.
PROF. GEORGE: Yeah, and I thought it was actually a persuasive case.
DR. PELLEGRINO: Thank you very much, Tom, and the commentators. We're a little bit overtime so that we'll have our break and return at 10:30.