Transcript, Meeting 17, Session 7

Date

June 10, 2014

Location

Atlanta, GA

Presenters

Herbert S. Lin, Ph.D.
Chief Scientist, Computer Science and Telecommunications Board
National Research Council of the National Academies
 
Robert McGinn, Ph.D.
Professor of Management Science and Engineering
Professor of Science, Technology, and Society
Stanford University

Transcript

DR. WAGNER:  And I think we move on without a break at this point.  So let me ask Dr. Lin and Dr. McGinn to come forward.  They are going to help advise us and help us discuss frameworks for evaluating ethical and societal issues raised by emerging technologies.  So we are still talking about ELSI, I guess, in a sense.

Let me introduce first Dr. Herbert S. Lin.  He is the Chief Scientist at the Computer Science and Telecommunications Board of the National Research Council of the National Academies where he has been Study Director of major projects on public policy and information technology.  Dr. Lin has published on issues in cognitive science, science education, biophysics, and arms control and defense policy.  Previously Dr. Lin was a professional staff member and staff scientist of the House Armed Services Committee where his portfolio included defense policy and arms control issues.  Welcome.  Eager to hear from you. 

DR. LIN:  Thanks.  I'm very appreciative of being here.  The reason that I'm here is that I was the Study Director of this study, which was a study commissioned by the Defense Advanced Research Projects Agency (DARPA) to look at a framework for addressing ELSI issues in the context of emerging and readily available military and other national security related technologies.  We had the usual illustrious committee of people responsible for this project.  There's a variety of technologists there, there are philosophers there, neuroscientists, there are a variety of bioethicists and so on. 

The short version of the report is this:  Military technologies pose ELSI challenges that are significantly different from those that arise in a civilian context.  And of course the study of ELSI in the context of civilian science is a lot more advanced than it is in the military context.  We concluded that to address ELSI issues we had to do two things.  The first was, up front, to make the best systematic effort we could at anticipating ELSI issues, recognizing that whatever we did was going to be incomplete.  The second was to plan to reassess as the work progresses and to adapt and modify if necessary.  That's really important, because a lot of times you proceed on the assumption that nothing will go wrong, and that's almost always not true.  And then we found it was possible to adopt lightweight processes for agencies and researchers that are interested in addressing ELSI issues; that is, the methods that we have for addressing ELSI issues did not have to be overly burdensome and onerous. 

We had a number of findings, some of which are relevant I think to the Commission's work.  We found that it was hard to anticipate the ELSI issues that would come up at the start of any development.  None of these are particularly earth shattering, but I think they framed what the committee was doing.  Sustainable policy required attention to ELSI issues over the long run.  Sometimes technologies get shut down because people don't pay attention to the ELSI issues, and you lose the potential benefits of those technologies. 

Public reaction is obviously an important part of this in terms of the support that a given project receives. 

And this last one I think was really important, that any approach that we took to promoting consideration of ELSI issues had to address not just sort of the top-level policy pronouncements, but how the program, how these policies were implemented at the ground level.  That was really, really an important framing point. 

Recommendations.  We had five recommendations.  First, senior leadership of funding agencies had to explicitly and openly make ELSI a priority.  We talked about five processes that would help agencies keep an eye on ELSI issues: initial screening of proposals, review of proposals, monitoring of projects for emerging issues, engaging with the public as needed, and periodic review to make sure that we hadn't gone off track.  We talked about the importance of sensitizing program managers, building external expertise, and providing assistance to researchers, who often don't know what to do, to actually engage with this stuff.  "Consider ELSI issues" -- what does that mean? 

We thought that it was possible to address ELSI concerns in a systematic way while imposing minimal burden on agencies and researchers, and I think that was a key point here. 

In general, addressing ELSI issues has two important components.  One is to identify the important ELSI issues and what the competing perspectives are; the other is to weigh those competing perspectives in determining a course of action.  We only focused on the first.  The second is basically out of scope for us. 

Technologies of concern.  We focused on emerging and readily available technologies.  Emerging means unfamiliar and new, with a rapid rate of change or progress.  Readily available means everybody can get at it in some way; it's no longer limited to major nation states.  You have to consider these technologies because there is an uncertain evolutionary pathway.  Because everybody can get at them, you don't know what's going to come out.  Furthermore, many of the regimes for control are based on states and on state control, and to the extent that individuals and non-state groups can get their hands on these technologies, that complicates your efforts.  In this context we have to worry about high connectivity, rapid diffusion of knowledge, globalization, and the absence of government monopolies. 

We talked about two domains for consideration.  One was what we called enabling technologies.  These are technologies that facilitate and drive and are essential to other technologies in multiple application areas.  And then we talked about application areas, which might be defined by a common problem that many applications might help to solve and draw on many different technologies. 

We basically talked about how you anticipate the ELSI issues up front.  Use as many sources of insight as possible, and be as systematic as possible using a structuring framework.  We described such a framework, which is in the report.  Engage as many people as possible.  But then this business about planning to reassess and adapt and modify as necessary is really a key point in all of this. 

To emphasize, the initial analysis is going to be incomplete, because you don't know how the technology is going to evolve, and people use technologies in all kinds of interesting ways.  Anticipating unanticipated ELSI issues -- that's an oxymoron, but it's still a useful thing to do, because if you think about the issues in advance, it gives you building blocks that you can redeploy.  There's a saying that no military plan survives first contact with the enemy, but nobody would suggest that it's a stupid idea to plan for a battle, because planning brings together a lot of information that you can redeploy and reshuffle quickly.  So it's about planning to adjust in midcourse. 

Some additional considerations may be helpful to the Commission.  We struggled with the question of whether we should take a top-down or a bottom-up approach.  A top-down approach starts with philosophical ethics and applies it to the different technologies and so on.  The advantage is that you get systematic coverage and a consistent template for analysis, but the result is an artificial consistency that is not really in accord with how people make decisions.  So we decided that a purely top-down method wasn't going to help. 

You could go bottom-up: start with the different technologies and consider ELSI issues in specific contexts rather than in the abstract.  You get fine-grained analysis, but it's sort of an ad hoc process.  So in the end we did both.  We start with the different technologies, consider the ELSI issues, and look to philosophical ethics for additional insight. 

Some of the challenges that we faced -- I'm sure that you have faced some of them too.  It's a very complicated topic.  You want to do everything.  Every question that you ask, every answer that you develop, gives you another question.  You want to boil the ocean, and you have to resist that.  It's important to have real scientists involved in this to ground the debate, and real-world considerations are really important.  It's really important to note that policy makers want intermediate options other than stop and go.  They are very unhappy if those are the only two choices, so the answers that you get are often "yes, but," "no, unless," and so on.  And the question of minimizing bureaucracy was a really, really big deal for us. 

In the end we decided that our recommendations would be of the persuasive kind rather than the mandatory kind.  We'd say to agencies, if you want to do this, here are things that you can do, rather than saying agencies must do this.  The question about high-level intentions versus how things appear in the field -- we talked about that.  What really struck us was that it was really important to just keep on asking the questions even if you couldn't come up with answers.  There's a saying that "It's better to know some of the questions than all of the answers." 

And one thing that we found in this was really important, I think.  We felt that human judgment in programmatic decisions was really important.  Good judgment was really essential; you could have all the processes in the world, but if you didn't have good human judgment, you were going to fail.  That's more important than any specific oversight mechanism. 

And I beat it by 15 seconds. 

DR. WAGNER:  Indeed, that is well done.  Thank you very much.  We look forward to conversation with you in a moment, but first we want to hear from Dr. Robert McGinn.  He is Professor of Management Science and Engineering and Professor of Science, Technology, and Society at Stanford.  Dr. McGinn is a member of Stanford's Science, Technology, and Society Executive Board and is coordinator of the School of Engineering's Technology in Society Requirement.  Dr. McGinn is an ethics investigator for the National Nanotechnology Infrastructure Network and is also Social and Ethical Issues Coordinator for the Stanford Nanofabrication Facility.  Thanks for being here. 

DR. McGINN:  So I wanted to start off just by making you aware that I'm going to use a few abbreviations for the sake of fitting in my comments.  ESI sometimes for ethical and social issues, ET for emerging technology, NT for neurotechnology, and neurotech-enabled brain research.  So I may use those expressions in what follows.

So a framework for exploring ethical and social issues related to emerging technologies can't be elaborated in ten minutes, obviously.  The comments that follow concern emerging technology-related ethical and social issues and the specific case of neurotech-enabled brain research.  I think that any such framework should incorporate a life cycle approach in order to help identify ethical and social issues in all ET developmental stages, including funding, R&D, testing, the decision to release to market, diffusion, and use.  In such a framework several ethics-related considerations merit attention. 

First, opportunity costs.  Politicians love to fund technologies with PR potential, and funding applicants sometimes resort to hype.  Funding decisions skewed by such factors may have ethics-related opportunity costs.  Worthier proposals may go unfunded, existing harms persist, and potential benefits remain unrealized.

Second, design.  The design of a biomedical ET may affect its safety or durability as well as the ease with which unauthorized folks can access information that it holds or transmits.  As with orphan drugs, whether orphan devices should be designed to obtain brain information only from small groups of patients, for example, those with a rare brain disease, can be an ethical issue.  High-tech biomedical ET design choices can engender ethical issues, just as did the dreaded low-tech short-handled hoe that caused many California farm workers debilitating back injuries.  And I'll call your attention to the description underneath this picture.  With a short-handled hoe workers could see the weeds and crops better.  It's a positive spin put on this.  Unbelievable. 

Third, consent.  Experimental biomedical emerging technologies must undergo testing, raising the issue of whether test subject consent is voluntary.  The critical question is whether a conflict of interest is created by leaving approval of the testing protocol and administration of risk disclosure and consent scripts to parties standing to benefit from expediting testing or from testing many subjects. 

Fourth, evidence.  Regulatory decisions to release an ET to market can raise ethical issues.  The key is whether an agency generates its own evidence of release readiness or relies upon evidence from a firm eager to secure approval.  The latter scenario invites evidentiary misrepresentation, especially if the firm is economically distressed.  Regulatory reliance only on firm-provided evidence may heighten the risk of harm to eventual users. 

Fifth, risk evaluation.  Another ethical issue pertinent to the premarket release stage involves divergent evaluations of risks by participants in regulatory hearings.  Suzanna Hornig distinguishes rationalist and subjectivist approaches to risk evaluation.  In the former, typically favored by technical experts, risk evaluation hinges on the probability of harm from the emerging technology, something held to depend solely on its design features.  Those using this approach often make the idealizing assumption that the ET will work in practice just as it is designed. 

Subjectivist, so called -- I think it's an unfortunate term, quite frankly -- approaches, which I prefer to call contingency approaches, focus on social contingencies such as the track records of the firms building, operating, or maintaining the emerging technology, how well trained their workers are, how rigorous the regulation of ET use is expected to be, and how comprehensive risk communication processes are likely to be.  Unfortunately, regulatory bodies and technical experts that adopt a rationalist approach tend to view subjectivist risk evaluations as technically uninformed and worthy of dismissal.  The ethics-related upshot can be a premature release to market based on a partial, overly sanguine evaluation. 

Sixth, access.  Most emerging technologies undergo diffusion into society by giving access only to those able to pay the going market price.  Rarely, an exception is made because of a societal belief that access should not hinge on that particular ability when dealing with matters of life or death.  In California, the belief that access to 911 emergency services should not be limited to those able to afford regular monthly telephone service is reflected in a universal service tax that we all pay each month to subsidize dial tone access for the poor.  An ethics-related issue for future biomedical emerging technologies is whether any qualify as matters of life or death or whether the criterion should be expanded. 

Seventh, harm and trumping factors.  In assessing the effects of an emerging technology's use, a robust notion of harm should be employed, not one limited to financial, economic, and physical damage.  When the Skolt Lapps of Finland adopted the snowmobile in the 1960s, they focused, understandably, on the time it saved them in transport compared to the reindeer sled; however, they overlooked the fact that, unlike the reindeer sled, the snowmobile could not be safely and effectively operated by the elderly.  The younger snowmobile drivers gained status, but the elderly, long the transmitters of vital reindeer sled knowledge to the next generation, quickly lost that role, became functionless, and absorbed serious blows to their psychic wellbeing.  When doing an emerging technology cost-benefit analysis, it's vital not to accept uncritically the widespread belief that if projected benefits exceed costs, then it's ipso facto ethically permissible or obligatory to proceed.  In some cases the nature, magnitude, or distribution of costs might make it prudent to decline the ET's greater benefits in order not to incur its lesser costs.  I call this the thanks-but-no-thanks position on a positive cost-benefit analysis. 

Certain factors arguably deserve to trump a standard positive cost-benefit analysis and to justify declining to implement the technology.  One candidate trumping factor is violation of the Rawlsian difference principle.  China's Three Gorges Dam Project illustrates what can happen when that factor is disregarded and conventional cost-benefit analysis prevails.  I'm of course referring to the million-plus people who were forcibly relocated for the sake of the dam. 

Turning now, just very briefly in the two or three minutes that remain, to neurotechnology-enabled brain research:  some neurotechnologies, I think, will likely enable research that yields brain knowledge with real therapeutic value; however, others could be developed that enable research yielding knowledge of people's brains that allows them to be graded and ranked -- I want to repeat that -- graded and ranked based on their physical properties.  This, if it happens, would invite invidious interbrain comparisons.  Absent policy to prevent improper access to and divulgence and use of such information and comparisons, such research could indirectly cause harm by undermining self-worth or by facilitating brain-based discrimination by employers, educational institutions, or insurers.  Would neurotech developers be ethically responsible for such outcomes? 

To see, let's first ask whether scientists are ethically responsible for harmful outcomes of downstream technological applications of their research.  Some of you may know of Leon Lederman, the distinguished Nobel physics laureate, and I love his famous quote, which is:  "We scientists give you a powerful engine.  You, society, steer the ship."  That is a view I think many scientists have internalized.  I strongly disagree that researchers are never responsible for such outcomes.  While firms and government institutions may be primarily responsible for the negative results of some applications of neutral research, researchers can also be responsible.  They can't always plausibly plead ignorance of the risks posed by making their powerful engines available to society for use as society chooses.  A researcher's failure to consider that the ships her engines are used to power will be piloted by dominant social groups with problematic track records is sometimes negligent. 

With neurotech-enabled brain research the situation is reversed.  In this case, instead of scientific research giving rise to technical applications, it's a matter of neurotechnologies being developed that enable certain brain research.  A neurotechnology may enable diverse brain research inquiries, some proving benign, others yielding information about brains that is potentially harmful to some of their owners.  In assessing the acceptability of developing neurotechnologies for brain research, it's imperative to take into account the features of the sociocultural context in which the research results will be used.  Is the prevailing matrix of political, economic, and cultural forces cause for concern regarding what might be done with the results of certain projects?  After World War II, Topf & Söhne engineers were rightly held ethically accountable for designing, building, and installing 66 neutral incineration ovens for use at extermination camps operated by those who contracted for the ovens. 

My final thought, then, is this:  There is a key ethics and public policy issue looming.  Will, as I suspect, the development of neurotech-enabled brain research be hastened by vigorous across-the-board public support, under the assumption that proper fixes can be found downstream if any unexpected problems ensue?  Or, as I hope, will public support for neurotech-enabled brain research be judicious and contingent upon enactment of policy aimed at deterring harm-risking brain information disclosure and use by doctors and by social institutions?  Thank you. 

DR. WAGNER:  Dr. McGinn, thank you, and Dr. Lin as well.  I don't know if someone wants to lead on this or -- well, let me ask.  Dr. McGinn, I think your reminder is well taken that when we talk about technology in neuroscience, it is not the case that we imagine technology to be the result of neuroscience research but rather the reverse; we expect technology to open insights into our understanding through neuroscience. 

In our prior work we talked about the importance of integrating ethics at the research level.  It seems to me that's what you were touching on at the very end.  Two questions:  What is it you would have our ethicists who are embedded and integrated in future neuroscience research, assuming that can be adopted as a norm, what would you have them -- what questions would you have them ask researchers? 

And then a second question is you seem to be very concerned about the harms that can be done.  Shouldn't we be similarly concerned about and motivated by the opportunities? 

DR. McGINN:  Absolutely. 

DR. WAGNER:  In fact, is it not also an ethical responsibility, an ESI issue to aspire to something? 

DR. McGINN:  What was the end of that question? 

DR. WAGNER:  In addition to having ethics input help insure against potential risks, isn't there also an ESI responsibility to pursue opportunity?

DR. McGINN:  Absolutely.  Thank you for those two questions.  I'll just address the second one first.  It is definitely true that my comments tended to sort of privilege or prioritize attention to harm, and the reason I do that is because I think we're probably not at much risk of lacking attention being drawn to opportunities.  I think there's a need for balance here, calling attention to the mix of results that could come from these given innovations.  And this is particularly so in the contemporary society in which you people are being asked to make recommendations, i.e., one in which there's big money at stake, in which there are many gains to be had by people who secure those funds -- promotions, bigger laboratories, et cetera.

By the way, on this point I'll just insert a brief comment.  In the studies that I've done of ethical issues related to work in nanotechnology, I made a distinction between three levels at which such issues could exist.  There is the micro level, that is to say, in the laboratory, where we worry about falsification, fabrication, and plagiarism -- the standard research misconduct; and then of course the macro level, which we're also familiar with, in society at large.  But it turns out that the domain that over and over again has proven to be the one researchers are least attentive to is what I call the meso domain, the domain of relationships between the technical practitioners on the one hand and representatives of society at large on the other.  So it could be between researchers and funding agencies, members of the media, people in legal arenas, and so on. 

A relatively small number of researchers recognize that there are ethical responsibilities that are incumbent upon them in that particular domain.  So that's one thing I would stress to them. Sorry.  That's a brief response to one of your two questions. 

DR. WAGNER:  Let me go to Dan.  By the way, I think you would be gratified to know that among our recommendations is that PIs themselves might be challenged to state what some of the ethical questions are related to their research. 

DR. McGINN:  Can I have a brief comment on that?  I have found over the last basically nine years I have been designing, giving, and analyzing surveys to nanotechnology researchers on the frontier, apart from the substantive findings, one of the things that I have found is that even confronting -- I mean, it turns out that at Stanford, for example, it is now an ironclad requirement that anybody who wants to go and work in a nanotechnology laboratory must complete a very extensive questionnaire devoted to asking them for their views about ethical issues related to nanotechnology.  Even the very encounter of exposure to that is I think edifying and useful.  It opens up, it creates a kind of psychological or intellectual space in which they -- and my finding is actually rather encouraging in many respects.  Given the opportunity to be exposed to these questions, many people are very receptive.  But if they are locked into a unidimensional space and told these things are outside their job pay grade, et cetera, then they won't do it.  But given the space, they are often very receptive and forthcoming about it. 

DR. SULMASY:  Perhaps related to that -- and I think it was very important that you mentioned the responsibility to think about the downstream uses of technology.  I loved, for instance, the phrase "invidious interbrain comparisons."  It's a nice quote.  But there are also the sorts of ways in which the technology might be otherwise misused.  I'm sort of reminded of the old song, you know, "Once the rockets are up, who cares where they come down?  That's not my department," says Wernher von Braun.  So there's a sense in which many scientists do think it's about the dispassionate, disinterested pursuit of knowledge.  And so I was wondering whether you have any concrete suggestions, besides giving surveys, perhaps, which it is encouraging to know helps a little bit, for having the scientists themselves think about the downstream consequences.  Do you have concrete suggestions, or does Dr. Lin have concrete suggestions, for how to sort of include that in the thinking of the scientists themselves? 

DR. LIN:  Our report suggested that funding agencies could just require people to comment on what they thought, the researchers, project proposers, on the ELSI issues.  This was regarded as a minimally burdensome thing to do.  You wouldn't have to do anything with it.  We're not saying funding would be contingent on the content in some way necessarily, but it could be.  The point is that getting more eyes on the problem about identifying ELSI issues could help sensitize decision makers more about what the downstream implications were.  So I think that we are in accord with the way you were describing engaging researchers in a minimally invasive way. 

DR. McGINN:  Engaging researchers in a minimally invasive way, as opposed to the patients. 

In answer to your question -- well, first of all I'm thinking back to 2003, when NSF announced its nanotechnology initiative.  The reason why I got involved in this is very simple: part of the RFP said that any person, any group of universities applying for money from NSF for nanotechnology lab research must have a substantial component dealing with social and ethical issues in their proposal.  But for that, with all candor, and I'm going to be really candid here, I doubt much attention would have been paid.  So it does help when decision makers and people with the purse strings actually make it clear. 

Let me go a step further, and this is going to be pretty, I don't know, politically incorrect.  You probably know that in 2008 we passed the America COMPETES Act, which says that, starting in 2011, for all recipients of NSF funds, faculty and/or graduate students, there must be opportunities available for ethics training.  Well, part of me wishes that I were in the position of having to do follow-up on how that requirement is being met, because what I'm finding is unfortunate.  For example, there's a California law that requires that every faculty member and manager go in every two years for affirmative action training.  Fine, and I do that every other year.  You have a choice: either you do an online tutorial in an hour and you check the boxes until you finish it -- how much do you learn there? -- or, as an alternative, you go in for a three- or four-hour presentation by a theater group staging the same scenarios, and then you think about them and there is discussion, et cetera.  The way in which this America COMPETES Act requirement is being satisfied is by these one- or two-hour online tutorials.  I don't have empirical evidence, but I have serious doubts as to whether it's making a real, significant dent.  If the requirement were that the recipients of these funds must in fact take a serious course in which they are exposed to a range of ethical considerations and concerns that might face them as researchers -- I teach such courses -- that, I believe, could have a more seminal effect on them and make a difference. 

DR. SULMASY:  Could I just follow up for a second?  I appreciate these sort of general suggestions about sensitizing people to ethical responsibilities, but I'm asking particularly about this sense of what the responsibility of the scientist is -- whether it's disinterested, dispassionate pursuit of the truth no matter what the consequences, versus sort of saying no, these downstream things are important too.  Are there particular suggestions about that knotty issue, which I think is a real one even for the people who say, "Okay, I'll pay attention to the issues.  I'm not going to harm anybody who is in my study," but who aren't really thinking about the downstream consequences? 

DR. McGINN:  Part of the reason for being of the enterprise that we call ethics is to get people to take into consideration the interests of those who fall outside of the local little group that they are most like, gender-wise, nationality-wise, et cetera.  People tend to give more attention to those of their own gender, race, neighborhood, or country, and to pay attention to the effects that manifest themselves in the short term. 

To answer your question directly, I think one of the ways one can expand the domain of parties considered is by case studies.  I require my students to work in pairs to generate their own original case studies and make sure they do pay attention.  They are held accountable, frankly.  If they only pay attention to really narrow subjects and leave out of account the interests of groups that are significantly affected -- even though those effects may be (a) intangible, (b) indirect, and (c) downstream -- they are held accountable for that.  Case studies are really a powerful thing. 

DR. WAGNER:  In the interest of time to get these final two questions in, I'm going to move us along.  Nelson and then Christine.

DR. MICHAEL:  Before I ask my question I just want to say that after 26 years of wearing this uniform, we recently have switched sexual harassment training in the Army from didactic kinds of sessions and online training to much more interactive, hours-long small groups, and I can't sit here and show you a PowerPoint slide on what an impact that's made, but for me personally it's made a huge impact. 

So the question I was going to ask you is in my own field of HIV/AIDS we have adopted increasingly over time a model where advocacy groups from those individuals most likely to be affected by the science sit not only on advisory boards for the funders but actually sit on advisory boards for those groups of us that execute the clinical trials.  Now, that's already on top of the communities that are involved in ethical review committees, community advisory boards, et cetera.  But it goes back to I think the presentations that both of you gave. 

I think that Dr. Lin more explicitly stated where you have a constant sensing function of your greater community during the entire life cycle of the research and you are constantly asking yourselves more and more questions so that decision makers and the project managers themselves don't end up getting too far downstream before you realize that you are going to encounter an ELSI no-go kind of scenario. 

So, in your view, what about that kind of model that we use in my own field -- having, in a very loose sense, the community of individuals that might be affected, the public, if you will, more formally engaged in decision making about where the dollars go and, once the dollars have gone, about how they are executed? 

DR. LIN:  In the context of the study for which I was responsible, dealing with military technologies, the thought did come up: do we include Al-Qaeda on a review board?  The answer was no.  I mean, military technologies are different.  Some of them are explicitly designed to harm people and destroy things.  That's not the case in any other field.  It's one of the key differences.  Nonetheless, one of the things that we were concerned about, one of the items that we think people ought to think about, is how an adversary will perceive what we're doing.  And some thought to that might help -- this is a good example of wanting options between stop and go.  It may be go, but you want to consider the impact it might have on adversary perceptions and so on about your politics.  For example, deployment of a certain technology, even though militarily effective, might have severe political consequences, and so it might be something you don't want to do. 

The point is you have to have a process whereby those issues and trade-offs can be surfaced, and I think that's where we made the point of sort of accounting for how the adversary is thinking is important. 

DR. McGINN:  I do believe it's a good idea to include those groups, and I think that idea is generalizable.  One of the reasons I think it's a good idea is that technical practitioners are often subject, as I was saying before in using Hornig's distinction between rationalist and subjectivist approaches, to very narrow, truncated notions of risk evaluation.  There are many cases I could cite to you, involving things like where an incinerator is going to be located in Australia, or, in the United States, the would-be location of the Ward Valley low-level radioactive waste disposal site in California.  There it was a battle between those who wanted to look only at the design of the facility they wished to get approval to locate in a certain place, claiming that because it was the newest technology any of the earlier problems were just not relevant, and other people who wanted to call attention to what I call the social contingencies, having to do with things like the track records of the companies involved, the training of the workers, the degree of rigor of the regulators, and the openness and comprehensiveness of the risk communication.  So the presence of such groups on these upstream panels, or whatever bodies are considering the propriety of going forward, I think can only be a positive, because it would help combat the potential for parochial, narrow decision making, whether in environmental impact statements or risk assessments or things like that. 

DR. WAGNER:  Christine, our final question for this session. 

DR. GRADY:  I don't even know if I need to ask it.  It follows a little bit on Dan's question.  I was struggling with the description of the processes that you think are important to include in the ELSI framework -- the description of them as lightweight.  I know that you qualified that by saying they can't be too burdensome or bureaucratic, and I think that's correct.  But I think there's this interesting problem, and that is that there's this tendency to think of talking about or learning about the ethical and legal and social issues as something extra, and therefore it has to be sort of low-burden, lightweight, don't get in the way.  Even if you require it, you have to be careful about what's required, or you get what people have told us in the past is the mind-numbingness of the online training courses, if people even pay attention to them. 

So I guess I just wanted to say out loud that I think lightweight -- I understand why you say it, and this is no criticism of you, but we all have to be careful to be clear that these are important things.  Low burden is right, but the importance has to be clear. 

DR. McGINN:  Is there time for a 15-second comment in response to that? 

DR. WAGNER:  Sure.

DR. McGINN:  I think something extra versus being integral to the process is one issue.  Since we are about to end, I just wanted to add one little thing.  Since our previous panel made reference to the well-known distinction between therapy and enhancement pertaining to brain science and such interventions, I just wanted to underscore again that I think there is another sort of category of ethical issues besides the standard laboratory ones of consent and such, and that is what I call diagnosis and the potential for grading and ranking.  I think that's been understudied so far.  That goes back to my remarks: when I started pondering what I wanted to say, given that I'm not knowledgeable about brain science, I concluded that in the kind of society in which we live I would want to keep a very careful eye on whether we were developing a potential for brain science inquiries that gave us the ability to make rankings of brains and then make social policies on the basis of those rankings. 

DR. WAGNER:  We may have an opportunity to bring that up in our next session because we will invite our other presenters to join you two.  Very brief break.  Don't go far because we are going to reconvene sharply at 11:00.

This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.