Transcript, Meeting Thirteen, Session Two


April 30, 2013


Washington, D.C.


Ruth Schwartz Cowan, M.A., Ph.D.              
Janice and Julian Bers Professor Emerita
History and Sociology of Science
University of Pennsylvania

Robert C. Green, M.D., M.P.H.                  
Associate Director for Research
Partners HealthCare Center for
Personalized Genetic Medicine
Associate Professor of Medicine
Division of Genetics, Brigham and
Women's Hospital and Harvard Medical School

Mildred Cho, Ph.D.                              
Associate Director and Professor of Pediatrics
Stanford University Center for Biomedical Ethics



DR. GUTMANN:  So our first panel this morning, we're going to hear about the ethical challenges of emerging technologies quite generally and how these challenges can inform our understanding and approach to the topic of incidental findings.  We'll hear from each speaker for ten minutes and then we're going to open the session for questions and discussion.  A quick reminder: when you speak, make sure you've turned your mike on and the red light will come on.  And we will begin with our first speaker, Dr. Ruth Schwartz Cowan, who is the Janice and Julian Bers Professor Emerita at the University of Pennsylvania.  Special welcome, Ruth.

Prior to that, Dr. Cowan was the director of women's studies and the chair of the Honors College at the State University of New York at Stony Brook, where she is also professor emerita of history.  Dr. Cowan is the author of six books and numerous articles.  She has also been a Fulbright scholar, a Guggenheim fellow, a Phi Beta Kappa lecturer, and Sherman Fairchild distinguished scholar at Caltech.

She is active in and a former president of the Society for the History of Technology and serves on the editorial boards of Social Studies of Science and Science as Culture.  And Dr. Cowan is currently working on a history of American women engineers, a study of risk communication with regard to prenatal diagnosis for Down syndrome, a revision of her textbook on the social history of American technology, and a history of the National Academy of Sciences.  And in her spare time, she has agreed to speak with us.  Welcome, Dr. Cowan.

DR. COWAN:  It's really a pleasure to be retired.  This is what Penn faculty do when they're retired.  More than 30 years ago, I started a research project on the history of amniocentesis.  And this project arose from my own experience of amniocentesis, 33 years ago almost to the day.  And it emerged also from my ethical concerns about what I had just been through.  And in the course of doing this research, I came to the conclusion that the tools that historians of technology have developed over the years should be, can be, and have been applied to the study of bioethical issues.  It's not just the tools of philosophy and it's not just the tools of law, but also the tools of history.

And ‑‑ or the findings of investigations in history.  The insights of historians of technology are really important to understanding the bioethical quandaries that arise when new technologies emerge.  And so that's what I'm going to focus on this morning.  I'm going to make 5 points very quickly, but I'll try to illustrate them, also quickly, with salient examples that come out of my research on the history of prenatal diagnosis.

So salient point number one.  Salient insight.  All technological changes and all scientific discoveries that lead to technological changes have unexpected outcomes.  They have expected outcomes.  That's why they get to be technologies.  But they also have unexpected outcomes.  So here's an example.

In the late 1950s, a French physician, Jérôme Lejeune, used a new technology, which we have come to call karyotyping, to examine the chromosomes of his patients who had Down syndrome.  He was hoping for a particular outcome.  He was hoping to find a therapeutic intervention or a cure for Down syndrome.

A decade later, however, he was appalled to discover that there was an unexpected outcome of his research.  He noticed that his discovery was being used not to treat or to cure, but to prevent the birth of infants with Down syndrome.  And he acted on his appalling discovery.  That is, he became one of the principal scientific objectors to abortion for fetal indications.

So that's point number one.  All technologies that we've studied have had unexpected outcomes.  All right.  Point number two.  A technology needs to be part not just of a technological system, but of a sociotechnical system in order to have any outcome at all.  A simple way of understanding this is to say that a nail needs a hammer and the hammer needs a carpenter in order for there to be some outcome of that technological system.  Sociotechnical system.  Because there's a carpenter in it.

Karyotyping is a system just like all the others; it needs osmotic solutions, a microscope, and skilled technicians in order to have an outcome.  So we can say that the problem Lejeune discovered ‑‑ that his work on Trisomy‑21, his discovery of Trisomy‑21, was being used to prevent the birth of patients like the ones he was caring for ‑‑ was really an unexpected outcome to him because his discovery had become embedded in a sociotechnical system of prevention, not a sociotechnical system of diagnosis and treatment, which is what he had expected.

Third insight.  Inventors and discoverers are really not able to predict unexpected outcomes.  They are not experts on what they didn't expect to happen.  Because they are, and must be, focused on the outcome they are hoping for, they don't pay much attention ‑‑ and may not even be able to pay attention ‑‑ to the outcomes that are unexpected.  Another way of putting it: they cannot envision any other sociotechnical system in which their discoveries might become embedded.

Here's another example.  In the 1950s, a Scottish obstetrician‑gynecologist by the name of Ian Donald spent almost a decade fiddling with ‑‑ the jargon word for fiddling is developing ‑‑ an ultrasound device.  He was fiddling with ultrasound devices that were being used in Scottish shipyards to detect fractures in steel plates not visible to the human eye.  He was developing this technology so that it could visualize uterine cysts and tumors in his patients who were so obese that X‑rays could not penetrate the dense adipose tissue in their abdomens.

It took him almost a decade to get what we now call gynecological ultrasound perfected.  He never once during that decade suspected that a decade later his invention of gynecological ultrasound would become embedded in the sociotechnical system of obstetric ultrasound, and that pregnant women would soon start asking for copies of the ultrasound images of their fetuses to show to their families and friends.

That's an example of what happens when a particular technology becomes embedded in a technological system for which it was not intended.

Fourth insight.  Every participant in a sociotechnical system has ‑‑ and I emphasize these modifiers ‑‑ multiple and legitimate interests in that system.  And not all of these interests are financial, which means that not all of them can be captured in conflict of interest statements.  A researcher, for example, may have a financial interest in the research that she's doing, but she may also have an altruistic interest in the research.  That is, she's doing the research in order to provide better health care for people.

She also may have professional interests in the research.  That is, this research will enhance her reputation.  It will enable her to get more grants and keep her laboratory going.  It will defeat somebody else's hypothesis ‑‑ we do that sometimes.  So there are multiple interests that a researcher has in the research that she is doing and not all of them are financial.

A clinician may have an interest in ‑‑ and undoubtedly does have an interest in reducing the suffering of her patients ‑‑ of his patients.  I switched gender in my notes.  A clinician may have an interest in reducing the suffering of his patients at the same time that he has an interest in improving the bottom line of his practice.  And both of these are legitimate interests, and they might on occasion conflict with each other.

A manufacturer of diagnostic devices may have an interest in increasing the profits derived from those devices, partly to please the shareholders in her corporation, but partly to fund development of better diagnostic devices, which is what, in fact, manufacturers do sometimes or often with their profits.

The patient may have an interest in getting better healthcare.  Undoubtedly does have an interest in getting better healthcare at the same time as he may want to pay less for that care.  So there are interests that are financial and there are interests that are not financial, and many of them ‑‑ probably most of them are, from the perspective of the participant in this system, legitimate.

This point leads me to my concluding relevant insight that I have derived, and I hope you will agree is worth deriving from our studies in the history of technology.  There are no disinterested, truly objective parties to bioethical disputes that arise when new biomedical technologies emerge.  Everyone has an interest of some sort in the development of those technologies and the use of those technologies in a system.

And that means, I believe, that sound bioethical policy requires first, careful delineation of those interests.  That's what historians can do.  And second, thoughtful balancing of them.  Which is what you get to do.

DR. GUTMANN:  Thank you very much.  Our next speaker ‑‑ we're going to hear from all the speakers and then open it up for questions to anybody.  Our next speaker is Dr. Robert Green, who is the Associate Director for Research at the Partners HealthCare Center for Personalized Genetic Medicine and an associate professor of medicine in the Division of Genetics at Brigham and Women's Hospital and Harvard Medical School.

He is a medical geneticist and physician scientist who directs the Genomes to People research program in translational genomics and health outcomes, and he co‑directs the NIH‑funded PGen study, the first prospective study of direct-to-consumer genetic testing services.  He is a principal investigator of the MedSeq project, the first NIH‑funded research study to explore the use of whole genome sequencing in the clinical practice of medicine.  Dr. Green also co‑chaired the incidental findings working group that recently published recommendations for the management of incidental findings in clinical exome and genome sequencing.  Welcome, Dr. Green.

DR. GREEN:  Thank you very much, Madame Chairman and commissioners.  Ladies and gentlemen, it's a true pleasure to be here and speaking to you.  I'm going to take you on a quick overview of some of our research over the past 12 years that pertains to these issues and slow down a little bit at the end and tell you a little bit about our ACMG recommendations that were just released.

My disclosures are shown there.  I do work with some of the direct-to-consumer companies, as you'll hear, in a collaborative arrangement, but do not accept any funds from them.  My work is primarily supported by NIH research, for which I am deeply grateful.

Work along the path to genomic medicine is currently predominantly about understanding the biology of genomes and the biology of disease, but the leading edge is already in advancing that science into medicine and improving the effectiveness of health care.  My program ‑‑ which we call Genomes to People ‑‑ does clinical research: data‑driven studies in genetic epidemiology, randomized clinical trials on the impact of genetic information, and comparative effectiveness research.  The very first question we got involved in almost a dozen years ago was, is genetic information dangerous?  Which is one of the contentions at the heart of the incidental finding controversy.

And just to review for those of you who are not aware, we used as a model ApoE genotyping, which does indicate an increased risk of Alzheimer's disease, a very scary disease and a currently untreatable and unpreventable one.  When we started this in the year 2000, this was a very frightening proposition.  We were lambasted on all sides for actually disclosing this kind of information to individuals, but we approached it through a randomized clinical trial that ended up demonstrating that when you compared people who had ApoE disclosed, even E‑4, to people who had not had it disclosed, there were no group differences in anxiety, depression, or distress on validated measures.

We looked at this in many different ways, but it turned out that people really wanted this information, and those who sought it out would do it again.  That they attached a great deal of value to it, both financial and other types of value to it.  That they ‑‑ in contrast to some prevailing narratives about genetic risk information, they actually did something with the information.  Even ‑‑ they tried to do exercise, vitamins, medications to reduce their risk.  Sometimes they did things we might not necessarily agree with.

People who learned they were at greater genetic risk actually went on the internet and tried to purchase unregulated supplements more commonly than people who learned they were E‑4 negative.  And as you might imagine, people who learned they were E‑4 positive, at increased risk for Alzheimer's disease, were 5 times more likely to purchase long‑term care insurance.  A very rational response, but one that scares the pants off of the insurance industry.  And ‑‑

DR. GUTMANN:  There are no disinterested parties.

DR. GREEN:  Exactly.  Remind me to tell you about presenting this at an insurance company outing.  So now I'm going to show you some newer data that hasn't actually been fully published yet.  One of these studies we called dosing the disclosure, which asks what happens when you imagine genetic information presented in the context of an ordinary medical encounter.  We actually did a randomized trial where we condensed the entire protocol to something you might do in a doctor's office: a brochure that informs you, a blood draw, and a question and answer session.

And I'll show you one slide that suggests that the condensed protocol was not more distressing than the extended protocol, and that there was very little difference when the condensed protocol was applied by genetic counselors versus medical doctors.

We also did an entirely separate randomized trial we called diluting the disclosure, which pertains to incidental findings.  And in this situation, we got all these folks who said, yes, I want to learn my risk of Alzheimer's disease.  We randomized them, and then in one arm of the study, we surprised one group with a risk of heart disease.  As it turns out, ApoE is also a genetic risk factor for heart disease.

Now, this was fascinating, because we took these individuals, randomized them in this way, and as you might imagine, when people found out they were at both increased risk for Alzheimer's disease and increased risk for heart disease, they were more likely to do something about it.  In this case, to say they were going to exercise more.  And you can see that on the right side of the slide.  The E‑4 positives were much more likely to say they were going to exercise if they found they were at risk for both Alzheimer's and heart disease.

But this next slide was truly surprising.  On the left side, you see all the people who are E‑4 negative, and if you're E‑4 negative, you're about the same, regardless of which arm you were randomized to.  If you find out you're E‑4 positive, you have a bit more distress, obviously, finding out that you're at increased risk for Alzheimer's disease, although this is all below clinical cutoffs.

That distress is substantively mitigated by finding out that you're also at risk for heart disease.  Now, on the face of it, that makes no sense.  You just found out you're at risk for two diseases rather than one.  But there's something about finding out either two pieces of information or finding out one piece of information you can't do much about plus another piece of information you can do something about that seems to mitigate the overall distress.

 And perhaps this is a clue to why there hasn't been more distress when people receive these large panels of information.  So we're exploring these data and we think it's quite fascinating.

Now, we got funded a couple years ago to examine direct-to-consumer experience, which, after all, is a kind of naturalistic experiment in incidental findings.  You may have something you're particularly interested in when you order direct-to-consumer testing, but you're really not thinking, probably, about all 70 of the things you might get back on the health panel.

And for this we worked with two companies, 23andMe and Pathway, and we gave customers a survey before they got their results, a survey after they got their results, and a survey at six months after they got their results.  And this is the first study where the customers who were participating also agreed to give us all their genetic information.  So we were able to see in this study what they thought they were getting, what they actually got, and what they thought they got.

And we're just getting these data in now.  I can only give you a slide or two as a teaser, but I think it's fascinating.  Here, among the first thousand people who have gotten to six months, about 27 percent reported that they've already discussed their results with their doctor, versus only 3 percent ‑‑ sorry, geneticists ‑‑ who have discussed them with an actual genetic specialist.  As with other surveys, about 10 percent said these results prompted them to make an appointment with a medical professional, and a further 10 percent actually had some sort of test, exam, or procedure as a result of the information they learned from their direct-to-consumer genetic testing.

Now, what were the predictors of having these tests and exams?  Well, obviously there's sort of a tautology here.  Talking to your doctor about this was a strong predictor of whether you were going to get some sort of test or exam.  But look at some of the others.  People who perceived that they got above average risks were 9 times more likely to get tests, exams, or procedures.  People who already thought they were in good health were more likely to do this.  People who had higher anxiety were more likely to do this, and people who had a pre‑test intention to sort of integrate this with their care were more likely to do this.

This is important because one of the key questions was does any kind of incidental finding in any of these contexts lead to increased downstream costs in medical utilization?

We've been one of the leaders in the CSER research group, where we have the MedSeq project, which is a randomized clinical trial in patients with disease and in healthy patients, looking in each group at what happens when you try to use whole genome sequencing in the practice of medicine.

I'm going to skip ahead because we've created a one‑page results report.  If you can imagine all this complexity: with Heidi Rehm of the LMM at our place, we have developed a one‑page report that we think, with a little bit of training, even a primary care doc can utilize.  Here is the top half of that page.  You can see there's a section on monogenic disease risk.  Here's the bottom half of that page.  Carrier risks are identified as well as pharmacogenomic associations.

The question of incidental findings has been plaguing us and is the subject of today's Commission.  We actually took that on with a specific survey of a selected group of leaders in genomics, and we got some very interesting results.  We were able to get concordance for a small set, but look at the difference between these three specialists.  The specialist on the left thinks it's a pretty good idea to use ‑‑ to report known pathogenic mutations, truncating mutations, but thinks it's a terrible idea to report back missense mutations.

The specialist in the middle thinks it's a really good idea to report back known pathogenic mutations and sort of an okay idea to report back truncating and missense.  The specialist on the right clearly studied at the school of George Church.  Believes it's a great idea to report back known truncating and missense mutations.  So what, ultimately, is the right analogy for this?  Is the radiology analogy right?

I'll just wind up with these two slides on the ACMG recommendations which just came out.  Along with my co‑chair, Les Biesecker, we developed a specific list of incidental findings ‑‑ conditions and genes ‑‑ which we recommend that any laboratory doing whole exome or whole genome sequencing report back, not to the patient but to the ordering clinician, who can put that into context.

These recommendations diverge from current genetics practice in two important ways.  We suggest systematically reporting these positive findings without trying to seek the patient's preference about these very low‑probability results just as we would in the rest of medicine, and we suggest reporting the same list, regardless of the age of the patient.  And it's beyond the scope of this talk, which I need to stop, but we believe this converges these recommendations with the rest of medicine.

So I'll stop with this slide in honor of Dr. Wagner's interest in restoring antique cars up in the upper left.  And it's the "caldies fappie faldier a va piel" (ph.), the first handmade car, which then gave way to where we are now in genome sequencing, the early manufacturing.  And as we look to the future, where an entire car may be able to be grown in a genetically engineered petri dish.  Thank you.

DR. GUTMANN:  Thank you, Dr. Green.  Both Dr. Cowan and Dr. Green have given us a lot to think about, and our last speaker on this panel is Dr. Mildred Cho, who is the associate director and professor of pediatrics at the Center for Biomedical Ethics at Stanford University.  Post‑PhD, Dr. Cho was awarded a Pew fellowship in Health Policy at UCSF.  Dr. Cho was also a fellow at the Center for Healthcare Evaluation at the Palo Alto Veterans' Affairs Medical Center.

Prior to a position at Stanford, Dr. Cho was assistant professor of bioethics in the Center for Bioethics and the Department of Molecular and Cellular Engineering at the University of Pennsylvania School of Medicine.  Her research interests include the ethical and social issues of genetic research, stem cell research, genetic testing, and relationships between academia and industry in biomedical research.  Welcome, Dr. Cho.  It's wonderful to have you here.

DR. CHO:  Thank you.  Thank you for inviting me.  So I'm going to talk a little bit about some of the ethical frameworks and practical challenges of thinking about incidental findings.  I'm really going to only talk about this in the context of genomic research, but I think you already have a very good idea of how these may or may not apply to the other contexts.

So I think some of the ethical frameworks that have been put forward for looking at incidental findings really follow the frameworks for looking at findings in general.  What do you do with diagnostic results?  What do you do with medical information?  And a lot of these focus, I think appropriately, on sort of judging and weighing the risks and benefits of giving the information, of not giving the information, and what they may or may not do to an individual's autonomy.

In the context of incidental findings, those principles have to be weighed against sort of the different responsibilities of the individuals involved in the different contexts, which you already have outlined.  Research, clinical, direct-to-consumer testing, also public health settings.  So there are many different contexts.  And I will note that there are discussions going on now in different states.  I know in California they're starting to talk about using whole genome sequencing in a public health screening context.  So this is a very timely discussion.

I think Dr. Cowan already very nicely set up this idea that there's no one who doesn't have an interest.  I think the policy frameworks that you use to look at an issue, and what drives the issues, are all based on different values.  And so I think it's useful to look underneath at what values are driving these different policy frameworks and the decisions that are made on the basis of them.

We have established diagnostic technology assessment frameworks ‑‑ evidence‑based ways of looking at a diagnostic technology.  But we are also facing sociotechnical networks and the social pressures that make us susceptible to technological imperatives.  And we always have to ask ourselves the question: just because we have a technology, does that mean we should use it?  And in what situations?

We also have, I think, a changing background of how data are used.  To the extent that many of the technologies you're looking at generate data, and what you're really looking at is how data are managed and owned, and to what extent they should be kept private, we now have different models of how data should or should not be managed.  And there's sometimes a clash between an open‑source model of data and a more protective model of data, as well as commercial interests.

So I think if you look back on ethical frameworks that have already been developed in genetic testing, there's been a pretty strong consensus on several items.  First, that genetic testing should serve the best interest of the patient, that reporting information for which there is evidence of benefit should guide which information gets reported.  That the benefits should outweigh the risks.  That one should respect an individual's right to know and also the right not to know.

So I think there's been over the years pretty strong consensus about this.  And I think there's ‑‑ just on the slide here, I have a number of different reports over the last almost 50 years from various professional societies, task forces, and other groups that all have sort of reiterated these same principles.  However, in recent times, in the last decade or so, we've begun to see an ethical shift away from that consensus.

So I think it's important to think about incidental findings in the context of the rumblings of an ethical shift that happened in the mid‑2000s, where benefit was now seen in a broader context, not just as benefit to an individual but as benefit to a family.

That's a completely different framework.  And so I think this shift was made in the context not of whole genome sequencing, but of other technologies such as tandem mass spectrometry, which has been used in newborn screening and has now resulted in an increase in the number of tests that are mandated in statewide newborn screening programs, from an average of about 5 per state to an average of about 50.

So now here we are in the era of whole genome sequencing.  And this ethical shift depended on "developing new technology that contained costs by screening for virtually all target conditions with one test system," and that is a quote from an article that was in the previous slide.

You can see this describes whole genome sequencing pretty well.

There was this ethical shift that emphasized the best interest of the family as opposed to the individual ‑‑ and in the case of something like newborn screening, the individual means a particular child ‑‑ and de‑emphasized autonomy.

I think some of this ethical shift is also reflected in the recent recommendations on incidental findings that Dr. Green is the lead author of and has already mentioned.

I don't want to go over the recommendations again, but I think it's worth looking at some of the ethical arguments that underlie them ‑‑ to actively search, to not offer patients preferences about receiving results, and to make no distinctions by age in reporting, for example, adult late‑onset conditions associated with particular genes.

First, there was an argument ‑‑ an articulation, I think ‑‑ of the fiduciary duty that clinicians and laboratory personnel have to prevent harm, but also then a recognition that masking parts of the analysis of a whole genome ‑‑ because you are not going to give somebody the data for their entire genome ‑‑ would be unwieldy and would perhaps place unrealistic burdens on laboratories, and that the amount of genetic counseling needed to discuss unrelated conditions might be overwhelming.

The report did acknowledge, though, that patients have the right to decline clinical sequencing.

As well, there was an argument about respecting parental rights regarding children's health, but you could also argue that not offering people a choice about receiving results is itself a challenge to autonomy.

In addition, there was an argument that findings should be reported in children because their parents do not have ready access to sequencing at this time, though that might change as costs drop.

You can see in this the argument about benefitting a larger group of people beyond the individual whose clinical case is in question.

I'm not really going to go over the analogies that were in the report, but I do think it's well worth the Commission's time to look very seriously at the ways in which different technologies are analogous or not analogous in this context.

I think the most important thing is that the recommendations acknowledge that there is insufficient evidence about benefits, risks, and costs of disclosing incidental findings to make evidence based recommendations at this point.

I think one of the major issues that is salient in whole genome sequencing is the need to look at the risks and benefits given the state of the technology at this moment.

I would argue that the potential harms may come not from somebody getting information that is correct, but from that information actually being incorrect ‑‑ and there is a not insignificant probability that whole genome sequencing information is incorrect at this moment.

The FDA and NIST are working together to try to develop minimal standards for sequencing quality, but studies show that different sequencing platforms produce different results.

When you send the same sample to two different labs, the concordance of single nucleotide variants, for example, may be 88 percent, and for other kinds of variants, like insertions and deletions, it may be as low as 26 percent.  That means there are hundreds of thousands of variants per genome that are not in concordance when a sample is sent to two different laboratories using different platforms, or even to two different laboratories using the same platform.

Analytic validity is not assured with this technology at this point.

There is also inter‑lab variability of the humans who are interpreting the information, and I think it is very important to understand that interpreting the information is very dependent on the data that we have already from other genotype/phenotype correlations.

You can't interpret a new genome unless you know what happened with all the other genomes before it, and a lot of that information is actually proprietary and not accessible to most labs.

I think that is a very important policy point.  Independent of patents on genes and questions of ownership, people who are collecting genotype/phenotype information are able to keep that information proprietary, which makes it very difficult for clinicians to interpret new genetic information.

Also, if genomics are used to test what are essentially low risk or asymptomatic populations for very rare conditions, the predictive value of those tests will go way down to the extent that the false positives may actually exceed the true positives.
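The predictive-value arithmetic behind this point follows from Bayes' theorem.  A minimal sketch, assuming an illustrative prevalence, sensitivity, and specificity (none of these numbers come from the discussion), shows how false positives can exceed true positives when screening a low-risk population for a rare condition:

```python
# Illustrative positive predictive value (PPV) calculation.  The prevalence,
# sensitivity, and specificity values are assumptions for the sketch only.
prevalence = 1 / 10_000    # rare condition in an asymptomatic population
sensitivity = 0.99         # P(test positive | affected)
specificity = 0.999        # P(test negative | unaffected)

n = 1_000_000  # people screened

true_pos = n * prevalence * sensitivity               # affected, flagged
false_pos = n * (1 - prevalence) * (1 - specificity)  # unaffected, flagged

ppv = true_pos / (true_pos + false_pos)
print(f"True positives:  {true_pos:.0f}")   # ~99
print(f"False positives: {false_pos:.0f}")  # ~1,000
print(f"PPV: {ppv:.1%}")
```

Even with 99.9 percent specificity in this sketch, roughly ten of every eleven positive calls are false positives, which is the sense in which predictive value "goes way down" in low-risk populations.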

I think it's important to consider how interpretation of genomic information specifically is very resource-intensive.  I think that challenges the idea of what an incidental finding actually means in genomics.

The data do not present themselves sort of in black and white.  They have to be extensively analyzed, and require broad expertise.

There are also opportunity costs.  Data that have recently come out suggest that follow-up costs may be in the hundreds to thousands of dollars per reported variant.

I think there will be a lot more evidence of that and information about that in the coming months that is published.

I will just conclude by saying that decisions about limits on or obligations to report incidental findings should be based on established ethical principles and evidence; the technology has changed, but the ethical principles, per se, may not have.

If there is a reason to change those ethical principles, that should be very carefully considered, but evidence about harms and benefits is lacking.

DR. GUTMANN:  Thank you very much, Dr. Cho.  Terrific trio to begin with.  We are open for questions and comments by Commission members.  Nelson, why don't you begin?

DR. MICHAEL:  I was very intrigued by this discussion of lists that you raised, Dr. Green, and Dr. Cho's comments probably solidified my thought process on this.

I haven't read the guidelines, and I apologize.  I will be doing that later on today.

Are you saying there is going to be some sort of panel that would look yearly at those polymorphisms that would be recommended to be passed along to clinicians?

If that's the case, who sits on that panel?  The selection of those genes, just in terms of how many polymorphisms you are going to provide to clinicians, what type of polymorphisms, what disciplines would be included on that panel?  Obviously, not just medical geneticists.

There are a series of questions.  One possibility, and you raised this yourself in your own presentation, is if you know that you have a polymorphism that makes you potentially more susceptible to disease that currently we can't do much about, at least not medically, although one could do lifestyle preparations, but there might be things you could do for cardiovascular risk, is that something you would be more interested in?

This is a really nuanced set of conditions.  I'm thinking about why would a gene like a CCR5 polymorphism that would triage your risk for HIV acquisition ‑‑ that might be something that you want to obtain in Irish that live in Chelsea and Revere, in your city, but probably would be of little value in Chinese that live in Palo Alto, because the polymorphism frequency is so low.

I think it is an interesting idea, but how would you distill it to practice?

DR. GREEN:  That's a great question and I think an astute observation about what would happen if you had access to the entire genome, and you had to try to bring to bear some sort of judgment on this extraordinary wealth of information, pharmacogenomic common complex variants.

We didn't do that.  In our ACMG recommendations, we started by asking, at this moment in time, what is the absolute minimum number of conditions and variants that we believe you wouldn't want to miss.

Really, this list is only made up of what we believe are fairly penetrant cancer predisposition syndromes, sudden cardiac death syndromes, and very few others.

It is 24 conditions and 57 genes.  There is only one pharmacogenomic condition on there, which is malignant hyperthermia with anesthesia.  There are no common complex variants on there, such as those from the GWAS studies.

They are rare.  We are not sure how frequently they would be found, but we imagine that between 1 in 100 and 1 in 200 people would ever flag positive on one of these, which is why it is a short list and a low probability.

The question about false positives is a reasonable question, but I'll stop there.

DR. GUTMANN:  Just to make your answer more complete, either you or Dr. Cho could answer this, the absolute minimum, on what criteria or criterion do you base that absolute minimum?  It sounds like a really great absolute ‑‑ this is the absolute minimum, but what's the foundation on which you determine or what are the criteria you determine for what the absolute minimum is?

DR. GREEN:  Professional organizations have used their collective clinical expertise to design recommendations for practice.  That is an established practice.

They are not all thoroughly evidence-based, and this one we admitted up front was not fully evidence-based.  But the criteria we used were on the rare side of things: the more common of the rare variants; conditions that had relatively high penetrance, as best we could tell from the evidence and our clinical experience; conditions in which there was a period of time when you could make an effective intervention and a clinical difference; and conditions that the group, the additional reviewers we collected, and the Board of the ACMG agreed should not be missed.

There is a continuum starting from the most obvious thing that no one would ever want to miss to things that everybody agrees are completely ridiculous, and there is no break point in that continuum.


DR. WAGNER:  I'd actually like to follow that with a question to Dr. Cho.  We have sort of just offhandedly said that incidental findings are those findings that come about, things we weren't really looking for.

Now we have someone proposing a list of things that one ought to be looking for.  My question is what does it mean not to be looking for something?  You are trying to hang some parameters on that.  Your presentation seemed to be at a broader level than just genomics.  That is why I thought I might pick on you first, but of course, all the panelists.

What does it mean not to be looking for something?  Is that really the right definition we should be using?  In fact, Lisa's presentation suggested that an alternative definition should be not what the test was ordered for as opposed to ‑‑

What is an "incidental finding?"  I guess that's what I'm asking.

DR. CHO:  I'm actually one of those people who doesn't believe the concept of incidental findings is very helpful in the context of genomics, partly because this idea of data sort of popping out at you, being unexpected, doesn't really reflect the way genomic data have to be analyzed.  The data have to be interpreted, and you have to decide what things you are going to look for, especially when you have a massive amount of information.  You have to filter that out.

DR. WAGNER:  We can't do a parallel thing in imaging?

DR. CHO:  Well, I think some radiologists would argue that some of the imaging technologies are similar, in that it's a bunch of digital stuff, and it doesn't just sort of pop out on the screen, it has to be determined how the data go from the digital form to an image that can be read, and then it's further interpreted by a human.

I think all of this stuff is very interpreted through a filter that is pre‑developed, so that you can bias the results in ways that you determine before you even look at the data.

I think some of the work that has been done recently at Dr. Atkinson's institution in Kansas City illustrates sort of one approach that they used for whole genome sequencing in newborns that had very severe life threatening conditions.

One of the things they did was to establish a way of sorting through the data and only selecting particular things that were likely to be associated with the symptoms that were being expressed in a patient at that time, and not looking at anything else.

I think that sort of represents one way of approaching the problem that doesn't necessarily lead you to look at a bunch of other stuff that isn't relevant.

That does sort of fit your definition or the definition before about sort of what are you ordering a test for, why are you doing it.


DR. FARAHANY:  I think it's a fascinating idea that there could be a minimum list.  I imagine it would be incredibly challenging with lobbying groups, et cetera, trying to actually get theirs on there.

I want to go in a slightly different direction.  One thing that you noted was this isn't about asking patient preferences, and Mildred, you put up there the idea of right not to know concept, that maybe we're moving away from.

I'd like to understand that.  Is there such a thing as a right not to know, and if there is such a thing as a right not to know, does that include a right of others not to know or just my own right to remain in the dark?

How is that taken into account in the kind of model that was proposed here about not taking into account patient preferences?  Because I gather if you are giving the information to the physician or the person who's ordered the test, then patient preferences are taken into account, which would imply that there's no right of others not to know, just your own potential preference.

I was hoping each of you could speak to that concept, if it's relevant.

DR. GREEN:  Sure.  I think the model that we really looked to was the way that medicine is practiced in other domains.  If you're seeing a dermatologist for a rash and the dermatologist sees a mole beside that rash that looks like a melanoma, that dermatologist has a fiduciary duty to see that.

If you have a chest x‑ray for your heart and there's a lesion in the lung field, that radiologist, systematically with their eyes, not with the computer, but with their eyes, systematically has to examine that x‑ray.

There is really no debate that they would not report back that finding if it was suspicious to the ordering clinician.

I think the point of view we came from with a lot of thought, over 14 months, and a lot of diversity on this ACMG working group and then with further thought and conversation with the ACMG Board, was how do we de‑exceptionalize whole genome and whole exome sequencing in the future practice of medicine.

And in that regard, we thought we had to start somewhere.  That's the analogy.  It's not really about preference in a very low probability event.  You can't imagine that dermatologist saying, before I examine your rash, let me tell you that I could find cancer, I could find a boil, I could find a pimple; it doesn't really have good meaning.  Same thing with the x‑ray.

DR. GUTMANN:  Our job is to also figure out why.  Isn't that, to put it simply, because we agree, or is it the case to the extent that we agree, that that physician has a fiduciary duty to do that?

DR. GREEN:  I believe so.

DR. GUTMANN:  In the presence of a stipulated fiduciary duty, which goes unsaid because it is the background condition we generally agree on, that follows.

Then the question is how extensive is this fiduciary duty?  How far does the analogy extend?  Do direct-to-consumer agents have the same fiduciary duty?

I'm asking you that question, do they?


DR. FARAHANY:  Let me just follow up on that.  If the dermatologist is looking at my skin and notices that I have freckles, which may eventually predispose me toward developing a melanoma if I spend a lot of time in the sun, but I don't have a melanoma next to my rash, you wouldn't think every single thing that I might be predisposed to, given the pigmentation of my skin or anything else like that, would be included within the fiduciary duty of the physician to report back to me.

DR. GREEN:  I agree.

DR. GUTMANN:  That's two different things.  One, do you have the fiduciary duty.  Two, what does it extend to?  With general practitioners, the history ‑‑ when there were general practitioners, the instruments were less discerning, but the idea of the fiduciary duty was extensive.

It may be over time with direct-to-consumer, our sense of the fiduciary duty has narrowed while diagnostic techniques have broadened.  We are agents.  We have, you have, some influence, if not power over this.

What we are asking you is what do you think the extent ‑‑ who has the fiduciary duty and what is the extent of it.

DR. GREEN:  The panelists think from an ethical perspective.  The ACMG working group and the ACMG Board did not address direct-to-consumer.  My personal opinion is direct-to-consumer companies do not have the same fiduciary contract with their customers as a physician does with their patients.  That's my response to that.

DR. GUTMANN:  Dr. Cho?  Dr. Cowan?

DR. CHO:  I agree with that, about the fiduciary duty.  I think Henry Richardson in his writings talks about different models, entrustment models, as a contrast to a contractual model, which I think is more in line with the DTC kind of model.

The fiduciary is very different than the contractual.  Researchers may occupy a different space, too, which may be more in line with an entrustment type of model.

I do think the analogy is not quite exact.  It doesn't fit perfectly for genomics, you know, with the mole on the face.  I think the mole is there.  You see it.  Presumably, the dermatologist has the expertise to recognize that it's maybe a melanoma as opposed to just a dot.

Genomics is not the same in that it would be more like having to have someone take off all their clothes as opposed to just noticing there is something on somebody's face that you can see when they walk in.

It's sort of recommending that findings be searched for actively.  That's different.  I think that goes beyond the traditional fiduciary duties, to do things that you wouldn't ordinarily do in the course of a clinical encounter.

I think once the finding is there, it may fall within the fiduciary duty to report that, but I also think that part of the melanoma example that may not fit with the genomics is this issue of the analytic validity.

With the melanoma example, you have a sense of what the prognosis is going to be.  You have a sense of what could be done to prevent it.  You have the sense of the accuracy of your own diagnosis.  I think none of that really holds with whole genome sequencing.


DR. COWAN:  Thank you for asking me to respond to this.  I have two observations.  One is not as a historian but as a patient.  It seems to me it is now standard practice for a dermatologist to ask you to take all your clothes off.


DR. COWAN:  In some sense, the fiduciary duty to observe has been enlarged and possibly it's been enlarged because there are new technologies of observation, or at least a new system of observation for dermatology.

The other thing I observed, speaking now as a historian: in both Dr. Cho's and Dr. Green's presentations, there are shifts going on in fundamental bioethical principles, even as you, Dr. Cho, so wanted to assure us all that the fundamental principles remain the same.  But they don't.

The fundamental principles, for example, about whether or not somebody has a right to know or a right not to know, seem to be shifting.

DR. GREEN:  May I respond just very quickly to one thing Dr. Cho said?

DR. GUTMANN:  Sure; absolutely.

DR. GREEN:  It is tricky with these analogies, they are like dueling analogies, but if I can just return to the melanoma analogy, it isn't typically that a doctor would say this is or is not a melanoma.  They would see a lesion, and a probability would form in their mind as to whether this is a cancerous lesion, a pre‑cancerous lesion, high probability, low probability.

Really, in all of clinical medicine, we're dealing with probabilities, just as we are with genetic variants.

DR. GUTMANN:  Good point.  Anita?

DR. ALLEN:  My question/comment really follows up on Nita's question and on what Dr. Cowan just said about the right to know and shifting bioethical standards.

My question was inspired by your presentation, Dr. Green.  You were emphasizing that this report suggests that doctors should receive the information and then clearly they would transmit it appropriately to their patient.  But that makes me wonder hard about this idea of the right to know and about the shifting standards, which today, I think, suggest that patients really should be more in the driver's seat than that somewhat old-fashioned notion you just described suggests.

In particular, under our current health privacy laws, all patients have a right to access their medical data, and they have a right not only to the narrative their doctor gives them, they have a right to see the MRI, the lab report about their melanoma lesion, they have the right to see the x‑rays, the brain scans.

So how can we in this context, which is kind of a legal obligation, to give it all up if the patient asks for it, how can we deal with the problem that I think motivated your group to think the data ought to go in the first instance to the doctor?

The patient is going to get it anyway, right, if they were at all curious.  What do you get by first transmitting data to doctors, and how do you reconcile that with this idea of a patient's right to know?

DR. GREEN:  One word:  contextualization.  The most fundamental thing you get when it goes first to the doctor is the doctor's ability to soften, to warn, to reassure, to put it in the context of family history, to seek phenotypes that they might not have otherwise have thought of, to get additional studies or not, to follow a patient.

All of that is critically important to the interpretation of a genome, not simply handing a hard drive or even a report to somebody.

I do think people will ask for the hard drive.  I think there will be enthusiasts who do that, self-selected information seekers.  That is a lot of what our research into genetic disclosure has taught us: there are information seekers who will go get this.

I predict that for the vast majority of human patients who end up getting sequenced in the next ten years, they will turn to who they have always turned to, their clinician, and say doc, help me understand what this means.


DR. ARRAS:  A couple of issues, I just want to continue gnawing on a couple of bones that have been thrown out there.

One is in your opinion, Dr. Green, how many garden variety primary care providers are capable of providing that sort of contextualization right now?

DR. GREEN:  I will be able to give you some answers, empirical answers on that, when we complete our MedSeq experiment, in which we are giving primary care doctors six hours of orientation to genomics, and then we are letting them loose with whole genome sequencing and actual patients.

DR. ARRAS:  It's reassuring to be told an answer will be forthcoming.


 DR. ARRAS:  That's more than most philosophers will ever give you.


DR. GUTMANN:  Let me ask ‑‑

DR. ARRAS:  I have a second part.

DR. GUTMANN:  Go ahead.

DR. ARRAS:  Dr. Cho, on this issue, this trend that you're describing, of an increase in respect for the welfare of the family, and on the other side, a decline perhaps in the salience of autonomy, did you mean that to hold only in the realm of say pediatric genomics or would that be a statement that you would make across the board?

I'm wondering like how much of a concern for family well being might be also described as a concern for family autonomy?  The parents deciding what kind of family they want.

With regard to the decline in the concern for autonomy, it does seem to me that all this emphasis on market driven genomics indicates a resurgence, if anything, of autonomy.

These are my genes.  I want to know about them and I have a right to know about them.

DR. CHO:  I think this broadening of the concept of family autonomy is something that I think has been argued not just in genomics.  It's been something that's been coming up in pediatrics generally, and there has always been an undercurrent of that in genetics because of the involvement of whole families and implications of the results.

I don't think that's entirely new but it is new that it sort of is coming up in the policy frameworks.

I think there is a sort of empowerment argument that's made for direct-to-consumer genetics and genomics that is sort of along the lines of autonomy, but I also think getting back to Nita's question about the right not to know, I think there is also an issue about not reporting things, for example, to children for late onset conditions.

There has been a pretty strong consensus that is something that should not be done because of the autonomy issues there.

DR. GUTMANN:  We are running over.  I want to give Dr. Cowan a chance to respond to this.  Steve Hauser will ask a question.  I am going to take one question here, actually an important observation from a member of the audience, and then Dan and Christine will have first dibs at the next questioning.  Otherwise we will start running woefully behind.

Dr. Cowan?

DR. COWAN:  I have a question, not a comment.  It's a question for Dr. Cho.  Your presentation focused on the benefit to families, but your slide read "Benefit to families and society."

I'm wondering if you would comment on whether it is even possible to understand what the benefits to society are, let alone put them into practice.

DR. CHO:  Right.  That was a quote from an article where this broadening of benefit was presented ‑‑ broadening of newborn screening was presented as a benefit to society.

DR. GUTMANN:  Dr. Hauser?

DR. HAUSER:  Thank you.  Dr. Green, I am going to try to extend John's question into right to know or right to explore.  Your data is in fact very tantalizing when you describe the impact of revealing the personal genomic information and the resultant ten percent frequency of additional tests or procedures performed.  Obviously, that's well beyond the minimal list of disclosed conditions that are in your recommendations.

We know that clinicians often act through art as well as evidence.

I just might ask if you would comment on this, but also if you are ready ‑‑ I understand this is a work in progress ‑‑ to tell us about the types of procedures, give us some idea of the scope, iatrogenic complications, et cetera.

DR. GREEN:  Thanks, Steve.  I don't have the data on what procedures they did yet for our direct-to-consumer.  That data was just analyzed yesterday.

I think the minimum idea here is important.  It's great to be in this position because we have gotten really criticized from both sides.  People have said ‑‑

DR. GUTMANN:  It means you must be doing something right.

DR. GREEN:  People have said how dare you want to look for this minimum list and promote it.  A whole other group of people are saying how dare you leave out all the other stuff in genomics that could be of tremendous value to people.

The thinking really is a minimum list.  What is it that you just wouldn't want to miss?  We actually said this in our deliberations: you wouldn't be able to sleep at night.

Dr. Biesecker, who is here in the audience, actually has data through his ClinSeq project of finding some of these cancer predisposition variants in people who never suspected they had them, and what sort of impact that had on their health care, and how that played out.

It's a work in progress.  We specifically recommended two things that haven't gotten quite as much play.  One is that these be amended at least annually as the science progresses, and two, as was done when the bone marrow work came out, that there be some sort of national registry that tracks basic outcomes in people who receive incidental findings, so we can follow them and see if in fact we are doing good or harm.

DR. GUTMANN:  Let me read a comment from a member of our audience, Alyssa Forrest.  Alyssa where are you?


DR. GUTMANN:  You correct me if I misread your handwriting.  Alyssa is a visiting scholar at the Feinstein Institute and a doctoral student at Molloy College.

It is my opinion that under ‑‑ this is a question for anybody who wants to comment ‑‑ it's my opinion that under Richardson and Belsky's partial entrustment model, the fact that a research subject entrusts a dimension, not the whole, of his or her health to the investigator when he or she agrees to participate in a study is sufficient to require the investigator to report incidental findings to the subject, even if the findings do not reveal potentially serious health risks.

I take it that it is a question in the form of a comment.  Do you agree that if a research subject entrusts a dimension and agrees to participate, that is a sufficient reason or responsibility to report back the finding?

DR. CHO:  Henry told me he wasn't able to be here today, so I don't want to put words in his mouth.

My interpretation of his entrustment model is that would depend on other factors, that there are other remediating factors that come from the nature of the relationship and also the nature of the vulnerability, for example.

Christine, maybe you can comment on that.

DR. GRADY:  There are multiple dimensions that he builds into the model, the scope of the relationship, the strength of the relationship, the vulnerability of the person, that modify the extent to which you owe, whatever you owe.

DR. GUTMANN:  We owe a lot of thanks to Drs. Green, Cowan, and Cho.  We will hear back from you at a panel later.  For now, thank you so very, very much.

This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.