Transcript, Meeting 14 Session 6

Date

August 20, 2013

Location

Philadelphia, PA

Presenters

David Chalmers, Ph.D. 
 
Professor of Philosophy and Co-director of the Center for Mind, Brain, and Consciousness
New York University
Distinguished Professor of Philosophy and Director of the Centre for Consciousness
Australian National University
 
Walter J. Koroshetz, M.D.
 
Deputy Director
National Institute of Neurological Disorders and Stroke
National Institutes of Health
 
John C. Wingfield, Ph.D. 
 
Assistant Director for the Directorate for Biological Sciences
National Science Foundation
 
William D. Casebeer, Ph.D. (U.S.A.F., Retired)
 
Program Manager
Defense Advanced Research Projects Agency

Transcript

DR. GUTMANN:  Good morning, everybody.  If I could ask everyone to take a seat, I'd really appreciate that, so we could get started.  It's wonderful to see everyone here this morning.  I am Amy Gutmann.  I'm President of the University of Pennsylvania and Chair of the Presidential Commission for the Study of Bioethical Issues.  And I'm really delighted, on behalf of myself and our Commission Vice Chair, James Wagner, who is President of Emory University, to welcome you back here to the Smilow Center for the second day of our Fourteenth Meeting of our Commission.

Before we continue to make this meeting official, I would like to note the presence of our Designated Federal Official, Bioethics Commission Executive Director Lisa M. Lee.

Lisa, would you stand so everyone can identify you?

Thank you.

And also for those of you who were not here yesterday, a reminder that one of our members, Doctor John Arras, is joining us by phone. And we're very pleased to have him because it was impossible for him to get here physically, but he is here in every other way.  So he will be participating by phone.

Before we move beyond our deliberations on incidental findings, I want to add a potential finding and a potential overarching recommendation to our deliberations.  It arose out of yesterday's deliberations, which provoked and suggested it in my mind.  Anita Allen began this line of discussion, audience members chimed in, and there was follow-up afterwards.  I want to put it out there as a matter of the public record and get any quick reactions from Commission members, because I think this will be a potential finding and recommendation in our report.

And I'll read what I jotted down this morning because our time is limited, but I do want to spend some time on it.

Where we left off yesterday, there was concern, which the Commission has consistently shared based on our principle of justice and fairness, about the inequities in access to counseling and testing that may or may not yield incidental, secondary or general findings relevant to one's health.

So this is our potential finding and recommendation.  It would be hard to exaggerate the importance of equitable access to counseling -- Doctor Hauser made this point -- both before and after testing that may or may not yield incidental, secondary or general findings relevant to your health.

So consider the most well-known recent testing that yielded lifesaving results: Angelina Jolie's BRCA1 and BRCA2 testing for a rare form of breast cancer that has a high probability, if not prophylactically treated, of being fatal.  The test is costly.  The greatest inequalities, however, are not connected to the testing per se, but to quality counseling as to whether and which test is worth obtaining, which woman is likely to be at high enough risk to make it worthwhile being tested, and then what to do with the information gathered.  Information is not knowledge.

That is yet another reason why I am recommending, and would like a brief discussion here at the Commission of, another overarching recommendation that says there is no real substitute, in micro-regulations, for access to high-quality, cost-effective healthcare -- in particular, access to counseling and quality follow-up care.

And I open it up to any members of the Commission for follow-up comments and then I assure our panelists we will move on.  But I think it is important to put this on the public record.

Dan?

DR. SULMASY:  Yeah, I'd just say it's really almost a corollary: if the information is going to be actionable, that presumes that people will be able to act upon it, and so I think it's very reasonable.

DR. GUTMANN:  Anita?

DR. ALLEN:  We don't know, of course, what will be the best path for people of different economic classes and races and situations to get the testing and healthcare they need.  And one of the reasons I raised this point about equity is that we are hearing that some people may be turning to these alternative direct-to-consumer modalities, and we must therefore be especially concerned about the quality of those modalities.  Because if they're not good, then the people who turn to them will be harmed.

DR. GUTMANN:  Right, right.

Barbara?

DR. ATKINSON:  I just wanted to say that I agree, and I think the counseling is important.  But the post-treatment is extremely important too, and that's been a real problem -- getting breast cancer treated in somebody who doesn't have insurance is practically impossible.

DR. GUTMANN:  Well, that's why I think it's really important to include both before and after.

Good, okay.

We will move on and before we do I'd like to just go around and ask each member of the Commission to introduce herself or himself and we'll begin.  We might as well begin with Anita since you just spoke.

DR. ALLEN:  Thank you, good morning. I'm Anita Allen.  Vice Provost for Faculty here at the University of Pennsylvania and also professor of law and philosophy.

DR. HAUSER:  Steve Hauser, Chair of Neurology at UC San Francisco.

DR. FARAHANY:  Nita Farahany.  I'm a professor of law and philosophy and professor of genome sciences and policy at Duke University.

DR. GRADY:  Christine Grady, Chief of the Department of Bioethics at the National Institutes of Health Clinical Center.

DR. ATKINSON:  Barbara Atkinson, Emeritus Executive Vice Chancellor and previous Dean at the University of Kansas Medical Center.

DR. SULMASY:  Dan Sulmasy, the MacLean Center for Clinical Medical Ethics, the Department of Medicine and Divinity School at the University of Chicago.

DR. KUCHERLAPATI:  I'm Raju Kucherlapati, Professor of Genetics and Medicine at Harvard Medical School.

DR. GUTMANN:  Thank you, and thanks to all the Commission members.

Today, we begin work on a charge that I received in a letter from President Obama last month as part of the new Brain Research through Advancing Innovative Neurotechnologies initiative -- the full title that few people will remember, because we call it the BRAIN initiative for short.

President Obama asked the Bioethics Commission to play a critical role in ensuring that neuro-scientific investigational methods, technologies and protocols are consistent with sound ethical principles and practices.

Specifically, the President asked us to identify proactively a set of core ethical standards both to guide neuroscience research and to address some of the ethical dilemmas that may be raised by the application of neuroscience research findings.  We are well-positioned to address the implications that will arise as we as a Nation embrace advances in neuroscience.

We, as a Commission, will engage with the scientific community and many other stakeholders to bring a variety of perspectives to bear on this emerging field, and this engagement begins this morning.

We have a great group here today to present to us.  For those of you in the audience who were not here yesterday, I'd like to take a moment simply to explain how we will take public comments.  We got terrific ones yesterday, raising the standard for today, which I'm sure you will meet.  At the registration table out front there are comment cards.  All of our staff members also have comment cards.  We ask that you write down any comments you have on one of these cards and hand it to any Commission staff member, and they will bring them up to Jim or me, depending on who is moderating the session.

Would all of our staff members please stand up so you can be identified.  There you go.  Thank you very much.

Jim, do you want to say a few words?

DR. WAGNER:  A very few, as a matter of fact.  This is exciting.  You know, neuroscience and brain studies, in particular, bring us closer to this intersection of who we are and how we function.  I want to make sure we understand the responsibility here.  I know we all do, but I think it bears saying out loud that we should applaud the fact that, at the foundation of a new initiative like this, ethics is understood and expected to play a role in guiding the research.  You know, after all, this Commission has sat and looked at historical studies and wondered, what in the world were they thinking.

And we have a chance, with what we do today, to help ensure that years from now no one will look back and say, what in the world were we thinking.

So I just want to remind us of the context of this.  I'm excited and looking forward to this new activity, particularly with people like you.

DR. GUTMANN:  Well said, thank you, Jim.

Our first session will provide very important background and context for our work in response to the President's charge.  We will hear from four speakers about ethical issues associated with the BRAIN initiative and ongoing work in neuroscience.  And here to start us off is Doctor David Chalmers.

Doctor Chalmers is a Professor of Philosophy and Co-director of the Center for Mind, Brain and Consciousness at NYU.  He is also Distinguished Professor of Philosophy and Director of the Centre for Consciousness at the Australian National University, ANU.

Doctor Chalmers is known for his work on consciousness, particularly, for his formulation of the hard problem of consciousness and his arguments against materialism.

He co-founded the Association for the Scientific Study of Consciousness and is author of a wonderfully readable book, I must say, The Conscious Mind: In Search of a Fundamental Theory.

Welcome, Doctor Chalmers.

DR. CHALMERS:  Thanks, Doctor Gutmann.  And thanks to the whole Commission.  It's really a privilege to be here.

So I am a philosopher interested in the connection between the brain and the mind.  I think all of us are interested in the brain, primarily, because it's the physical organ of the mind.  That's to say it's the physical locus of mental functions like seeing, hearing, feeling, thinking, deciding, learning, remembering and consciousness, the subjective experience of all of that from the first person's point of view.

And the point of the BRAIN initiative, I take it, isn’t primarily to study the brain in its own right fascinating as that may be. It's to ultimately understand the mind.  To use the brain to understand the mind both so we can treat mental disorders and so we can come to better understand who we are.

So I guess one big question is: is the BRAIN initiative going to explain the mind?  Wonderful question for a philosopher.  I suppose the short answer, from my point of view, is: well, no and yes.  No, it's not going to explain everything about the mind.  But yes, it's going to tell us a whole lot, and enough to make a big difference for practical purposes, enough to raise really important ethical issues.

So I thought I might just step back for a moment and look at the current state of neuroscience and what it tells us about the mind from a philosopher's point of view, and then look at how the BRAIN initiative might affect that and the ethical issues that it raises.

So there's been huge progress in the last couple of decades in cognitive neuroscience, the field that studies the brain basis of mental functions.  There have been a number of drivers of that, but one of the biggest has been the development of brain imaging techniques such as fMRI, functional magnetic resonance imaging, which has enabled us to non-invasively get a measure of which areas of the brain are active at a given time and which are associated with different mental functions.  So there's been a lot of progress.  But there have also been some serious limitations.  I'll mention three.

One limitation is just the spatial resolution of current imaging methods.  fMRI measures blood flow rather than directly measuring neural activity, and it doesn't have the resolution to measure anything like the activity of a single neuron.

In fact, it turns out that to measure single neurons you have to do something like put electrodes inside the skull.  Huge progress has been made that way -- there are some really interesting studies -- but there are big limitations.  To do that we're limited to nonhuman animals and the occasional surgical patient with, you know, a cooperative surgeon.  So that's one serious limitation.

Another limitation in current neuroscience is the absence of a unifying theory of brain and mind.  Here, I think, there's a real contrast with something like the Genome Project, where we came in with both a solid theory of the molecular basis in molecular biology and a solid theory of the connection between molecular biology and genetics.

In neuroscience, now, we have nothing really analogous to that.  You know, we don't have a well-established unifying theory of how the brain works and we certainly don't have a well-established unifying theory of the connection between states of the brain and states of the mind.

A third limitation, connected to that, is the philosophical mind/brain problem.  It's an ancient problem in philosophy, the mind/body problem.  What's the connection between the body and the mind?  These days it's basically become localized to the mind/brain problem.  How is it that this organ inside our head, this three pounds of matter, somehow gives rise to states of seeing, feeling, hearing, thinking, and consciousness from the first-person point of view?

That's a philosophical problem which is becoming a scientific problem.  But I think it's fair to say that there are pretty deep philosophical puzzles at the core that are so far unsolved.  And although it's an area that generates a lot of controversy, I think the consensus in the field right now is that we're not even close to having a solution to that problem, either from the philosophy or from the science.  And that poses limitations, too.

So the bottom line right now in current neuroscience is that we have a developing science of correlations between states of the brain and states of the mind, fairly coarse-grained correlations for now.  We don't really yet have a science of explanation, fully explaining mental states in terms of states of the brain.

Okay, so what is the BRAIN initiative going to change here?  Well, the BRAIN initiative, as I understand it, the point is to provide a set of tools for dynamically monitoring neuron by neuron activity in the brain starting with relatively simple organisms and eventually in primates and in humans.

Suppose this succeeds.  I don't know what the time frame is for this.  I think it's going to be something on the order of decades in humans.  But suppose it succeeds and we have the ability to monitor the neuron by neuron state of the brain at the second by second dynamic level.

Well, I think then we can expect much more -- of course, the spatial resolution will be much better.  So we'll get much more complex brain states measured, allowing us to get much more detailed correlations between states of the brain and states of the mind.

It may help with a theory.  I mean, merely mapping the brain isn't going to give us a theory of the connection between the brain and the mind.  That requires a whole lot of extra work.  The hope is, I take it, that the BRAIN initiative will provide the tools to do that, but I think there's room for caution.  After all, we have mapped the nervous systems of certain very simple organisms -- and the researchers who have done that will tell you that even after doing that we still don't really understand how the brains of those organisms work.  So there is some room for caution there.

As for the third limitation, the philosophical mind/brain problem.  Well, there's a lot to say about that.  But again, it's not obvious that simply mapping the brain is going to tell us how it is that the brain supports seeing, feeling, thinking and consciousness.  There are going to remain philosophical puzzles.  One of these you can pose by thinking about -- say you give a blind person a complete map of the brain in its neuron-by-neuron state, the brain of somebody seeing color.  Will this tell them what it's like to see red?  You can argue no.  So there's some kind of explanatory gap between the state of the brain and the conscious experience.  Maybe mapping the brain is going to provide us with the new insights it takes to solve this philosophical problem, but it's not obvious.  If you ask me, my money is that even after mapping the brain, some elements of that philosophical mind/brain problem are still going to be with us.  So that's the bad news.

The good news is, I think, that the BRAIN initiative still has the potential to give us a really good science of correlations between the brain and the mind.  Studying brain states in the kind of detail that the BRAIN initiative promises, in the context of behavior and mental function, opens up the possibility of correlating really complex and specific states of the brain with complex and specific states of the mind, thinking and feeling and so on.  Not just correlates of seeing and thinking and feeling in general, but of seeing the Eiffel Tower and thinking of your mother and so on.  That won't solve all the mysteries of the mind, but it will be enough for many practical purposes and enough to pose some of the most serious ethical challenges.
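To make the idea of correlating measured brain states with specific mental states a bit more concrete, here is a minimal, purely illustrative Python sketch.  The "brain states" are simulated vectors, the two mental-state labels are invented, and the nearest-centroid decoder stands in for whatever analysis a real study would use; none of this reflects the speakers' work or any actual data.

```python
# Illustrative sketch only: simulated "brain states" and a toy decoder.
# Labels, dimensions, and noise levels are all made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_features = 50          # pretend voxels / recorded units
labels = ["seeing the Eiffel Tower", "thinking of your mother"]

# Each (hypothetical) mental state gets its own mean activity pattern.
prototypes = {lab: rng.normal(size=n_features) for lab in labels}

def simulate_trial(label, noise=1.0):
    """One noisy 'brain state' measurement for a given mental state."""
    return prototypes[label] + rng.normal(scale=noise, size=n_features)

# Build a training set of labeled trials.
train = [(lab, simulate_trial(lab)) for lab in labels for _ in range(40)]

# Nearest-centroid "decoder": correlate a new pattern with each class mean.
centroids = {lab: np.mean([x for l, x in train if l == lab], axis=0)
             for lab in labels}

def decode(pattern):
    scores = {lab: np.corrcoef(pattern, c)[0, 1] for lab, c in centroids.items()}
    return max(scores, key=scores.get)

# Check decoding accuracy on fresh simulated trials.
test = [(lab, simulate_trial(lab)) for lab in labels for _ in range(25)]
accuracy = np.mean([decode(x) == lab for lab, x in test])
print(f"toy decoding accuracy: {accuracy:.2f}")
```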

So to close, maybe I'll just mention one of the relevant ethical challenges here, which is the challenge of privacy.  Mental states are traditionally private, known most directly to their subjects and only communicated when subjects choose to, through behavior.  Brain imaging is gradually changing that.  We can now look at brains directly.

There's a famous recent study where a subject diagnosed as in a vegetative state was put in a brain scanner and people were able to make inferences about certain states of consciousness in a person who was previously thought to be incapable of such states.

So the BRAIN initiative can really change this.  Imagine we have the capacity to monitor really detailed, complex and specific brain states and we have a background account of mind/brain correlations; that will give us the ability to monitor detailed mental states.  And now the ethical issues, I take it, are obvious.  You've been talking about incidental findings here.  Well, here's an incidental finding: you've got your research subject in a scanner, you read off their brain state.  It turns out they killed someone, or they have a memory of killing someone, or it turns out they're planning to kill someone.  Okay, well, what do you do with that?  Maybe that's science fiction.

           Similar issues are raised by getting at the neural correlates of anger or depression or sexual attraction.  Do there need to be limitations on the use of brain imaging methods in light of these ethical issues of application? Well, that's one for you. So I’ll turn it over to you now.

DR. GUTMANN:  Thank you.  Well, I will pass right now because time is limited among other reasons, right.  Thank you very much.  It was very lucid and a great way to begin.

Now, I'd like to turn to Doctor Walter Koroshetz who is Deputy Director of the National Institute of Neurological Disorders and Stroke at the NIH.

Doctor Koroshetz has served as co-director of the joint Uniformed Services University-NIH Intramural Research Center for Traumatic Brain Injury and is acting director of a new NIH Office of Emergency Care Research.

Doctor Koroshetz also served as a Professor of Neurology at Harvard Medical School, Vice Chair of Neurology and Director of Stroke and Neurointensive Care at the Massachusetts General Hospital, and was a member of the Huntington's Disease Unit.

In the latter position, he pioneered pre-symptomatic testing for persons at risk for Huntington's Disease, addressing the ethical questions that the new genetic technology posed.

Welcome, Doctor Koroshetz.

DR. KOROSHETZ:  Thank you very much.  Thanks to the Committee.  It's a pleasure to be here.  I'm going to talk very quickly and try to home in on what I think the new technologies are.  And so I think if you forget about BRAIN, what it means, just think of the N, neuro-technology.  I think that's the focus that I'm going to particularly talk about.

So I'm going to pose the ethical issues as not new, but basically revisions of what you've seen in the past.  So we're going to talk about interrogating the nervous system and then modifying the nervous system.  Interrogating the nervous system -- we've been doing that for many years.

As an example, from the '30s, the first lie detector test.  This is the lie detector of the modern age: functional magnetic resonance imaging, which Doctor Chalmers mentioned, where in this experiment they find with high accuracy that they can detect, from areas of the brain that light up, when someone is lying.  So that's just an example of interrogating the nervous system for a discrete purpose.

The second one is changing the nervous system to change behavior.  Now, I thought I'd talk about stimulation, but in fact, we change the nervous system all the time through a number of different ways.  Experience being the major one and drugs being another one.  Stimulation is directly affecting neural circuits usually through some electrical intervention.  So 1938, electroconvulsive therapy was first used in a human patient in Italy for depression.  I don't know if anyone has seen the effects, but electroconvulsive therapy can just be amazing in terms of returning someone from really a fatal depressive condition to a functional human being within a day.  It's really quite amazing.

But in the current time frame, I'll show you what's happening now, and that is interrogating the nervous system -- this is Helen Mayberg's work -- to find an area of the brain that's overactive, based on blood-flow PET scanning, in depressed people; finding that in those people that area remains hyperactive despite antidepressants; but then putting in an electrode and turning on electric current in that area, which you see here as SCC25, this subcallosal area.  Turning on electricity is supposedly brain stimulation, but it's not stimulating the brain.  It's just throwing current into the brain; whatever happens, happens.  What happens is that the activity actually goes down and the patient recovers.  So brain stimulation is, in fact, electrical current, but as far as we can tell it's not actually stimulating the brain, it's actually turning off the brain.

A semantic issue that we'll get into but the point here is that the current way in which the brain is stimulated even in the finest tuned situation is still quite crude.  You’re just throwing electrons in and seeing what happens.

So the BRAIN initiative is really based on developing new technologies as far as we understand it.  NIH is a granting agency so what actually happens depends on a bunch of processes the end of which is usually peer review determining what is the most meritorious science to go after in the space.  And there is a committee that's now working to try and develop what the space should be.

I think the key thing, which my colleagues from the National Science Foundation and DARPA can speak to better, is that the move we need to make is not purely based on biology; it's based on taking the best science from materials science, engineering, mathematics and chemistry and applying it to understand how the brain is functioning -- developing new tools for understanding how the brain is functioning.

There is now a planning committee.  They are holding four public meetings.  Interestingly -- and I think this gives you an idea of where they're going -- the first meeting was on molecular approaches.  So how do you get genes into the brain that will turn on certain proteins that are then reporters of brain activity; that is one.

The second one is large-scale recording technologies.  How do you record from large areas of brain?  People won Nobel Prizes for understanding how the visual system works by putting electrodes in and recording one neuron at a time in particular areas.  But to understand how the brain works -- the brain is totally interconnected, and that spatial confinement of the electrode really only gets you to see the needle and you miss the entire forest.

The third one is big data and computational methods.  Meaning that, as we go into this, you'll see we're going to be obtaining data from a large number of neurons with the large-scale recording technologies that are coming, and the question is: what does it all mean, what are the correlations, what are the causal features that define the circuits that we're interested in?
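As a purely illustrative sketch of the kind of computation being described -- finding correlations among large numbers of simultaneously recorded neurons -- the following Python snippet simulates spike counts with a hidden shared drive and then looks for the correlated "circuit."  All numbers, and the data itself, are invented for illustration and do not come from the BRAIN initiative planning process.

```python
# Illustrative sketch only: correlating simulated large-scale recordings.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_timebins = 200, 5000

# Simulate spike counts with a hidden shared signal driving a subset of cells,
# standing in for a functional "circuit" embedded in background activity.
shared_drive = rng.poisson(2.0, size=n_timebins)
counts = rng.poisson(1.0, size=(n_neurons, n_timebins)).astype(float)
circuit = np.arange(40)                      # first 40 cells share the drive
counts[circuit] += shared_drive

# Pairwise correlation matrix across all recorded neurons.
corr = np.corrcoef(counts)

# Cells in the hidden circuit should be more correlated with each other
# than with the rest of the population.
within = corr[np.ix_(circuit, circuit)][np.triu_indices(len(circuit), k=1)].mean()
outside = corr[np.ix_(circuit, np.arange(40, n_neurons))].mean()
print(f"mean correlation within circuit: {within:.2f}, outside: {outside:.2f}")
```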

And the fourth meeting, which is yet to come, is looking at human measurement and analysis.  So a lot of this stuff is going to be pre-human, at least, I think, for the first couple of decades.  But I think we can say that patients with diseases will push scientists to try to bring these technologies into the human condition.  Particularly diseases like Parkinson's, depression, some of the things we use DBS for, tremor.  I'll show you a couple of those.

So what's breaking now in terms of technologies?  Well, I'm going to break it into three areas.  One is direct recording of neural circuit activity.  So previously this was done by electrode, going into one cell, recording how the cell is firing.  Now, we have dyes that can be put into cells, either through viral vectors or through transgenic approaches, and these dyes will then light up when the cells become active.

These are oftentimes calcium-sensitive dyes, which give the best signal.  Voltage-sensitive dyes, which would track each cell firing, would be the best, but their signal to noise is not yet as good as the calcium dyes', so a lot of the work now is with calcium dyes.  But there is this potential that you'll have a voltage-activated dye that you can then see emit light when a cell fires an action potential.  And action potentials are the currency of the nervous system: how cells fire and then move information from one cell to another.

The way in which people are working now to record from large numbers of neurons is through nets of electrodes.  So electrodes detect electrical activity in the brain.  They're used commonly in patients for epilepsy surgery staging, to identify where seizures are occurring.  But they can also detect normal activity in these patients.  People are undergoing certain cognitive tests and you can see the neural circuit activity related to a certain response.  So these are useful therapeutically, but in the context of a therapeutic use one can now also interrogate the nervous system for other reasons.

There are now nanoelectrodes being developed that are really, really tiny, that go into the brain without causing any kind of disruption of the nervous system and can collect activity from many, many neurons, single neurons at a time.

There are some futuristic ideas where you could potentially use DNA as a bar-code recorder for electrical activity -- if there's a calcium signal, it induces a change in the DNA.  You can actually think of the DNA being used like the old paper chart recorder for electrical activity as it goes by.  So people are working on these types of nanodevices.

In terms of direct stimulation of neurons and circuits, the big new technology is optogenetics.  Optogenetics comes from the introduction of genes that code for light-activated channels or receptors.  So recall that in the nervous system the activity of neurons is caused by the opening of channels that allow ions to flow.  It turns out that basic science in single-cell organisms found light-activated channels; the genes were identified, and you can now put them into neurons, and then if you turn the light on you will activate those neurons in which you have these channels.  And this can be done with exquisite precision.  You can actually attach these to promoters that are specific for certain neuronal groups.  So you can specifically turn on dopamine neurons, specifically turn on GABA neurons.  This gives a type of stimulation that is so much more refined than deep brain stimulation, where you turn on a current and a million things can happen and you don't know what they are.  Here you have the neuron under your control.  It's a very interesting new technology, just a couple of years old.  And with two-photon optogenetics you can actually go down and activate one neuron at a time.  So there is really an exquisite ability to activate neurons with this new genetic technology.
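A toy illustration of the contrast being drawn here, between cell-type-specific optogenetic-style activation and broad electrical current, might look like the following Python sketch.  The cell types, expression fractions and response probabilities are all invented; it is only meant to show why restricting the light-gated channel to one cell type gives such selective control.

```python
# Illustrative sketch only: cell-type-specific (optogenetic-style) activation
# versus broad electrical stimulation, using a made-up population.
import numpy as np

rng = np.random.default_rng(2)
n_cells = 1000
cell_type = rng.choice(["dopamine", "GABA", "other"], size=n_cells,
                       p=[0.1, 0.2, 0.7])

# Hypothetical promoter restricts the light-gated channel to one cell type.
expresses_opsin = (cell_type == "dopamine")

def stimulate(mode):
    """Return a boolean array of which cells fire under each intervention."""
    if mode == "light":                      # only opsin-expressing cells respond
        return expresses_opsin & (rng.random(n_cells) < 0.95)
    if mode == "current":                    # electrode drives everything nearby
        return rng.random(n_cells) < 0.6
    raise ValueError(mode)

for mode in ("light", "current"):
    fired = stimulate(mode)
    frac_targeted = np.mean(cell_type[fired] == "dopamine") if fired.any() else 0.0
    print(f"{mode:7s}: {fired.sum():4d} cells fired, "
          f"{100 * frac_targeted:.0f}% of them dopamine neurons")
```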

The last thing that's happening is that our understanding of the anatomy of the human brain and animal brains is coming into a new zone.  So this is one example of a new technology, just out in the past year.  It's called CLARITY.  Here's a mouse brain.  As you can see, you cannot see through the mouse brain.  So if you want to image, you know, the cells and the pathways, you cannot do that in the mouse brain.  But there's a technology now to make it transparent, and this is just an example of what that looks like.  Just bear with me a second.

You see this is a 3D image of this translucent mouse brain that's been stained with a particular antibody stain.  And you can go right into the brain and see all its 3D magnificence down to single cell level.  So a technology that is really quite amazing right now.

In the human -- I'm just going to show one more thing.  The technology that's breaking is the ability to look at the white matter connections between brain regions.  Previously, we have not really been able to say or tell which fibers are going where.  Most of the human brain is white matter, not gray matter.  Those are the wires that connect the different parts of the brain.  Now, with this new diffusion technology -- diffusion tensor imaging -- it's becoming feasible.

So I'm going to end there.  I think the take home point is new technologies, that's what the BRAIN initiative is going to be about.  It's going to allow interrogation of the nervous system and manipulation of the nervous system primarily in animals, but at some point people are really going to try to make that leap into the human as we have done in the past.

Thanks very much.

DR. GUTMANN:  Thank you very much.  We will hear next from Doctor John Wingfield, who is Assistant Director for Biological Sciences at the National Science Foundation.  He has also served as Division Director for Integrative Organismal Systems, and he comes from the University of California, Davis.  His research focus is on neural pathways for environmental signals affecting seasonality in birds and their mechanisms of coping with environmental stress.  Birds also get stressed, not just us, apparently.  Doctor Wingfield has served on several editorial boards and has held positions as Associate Editor or Editor in Chief for major journals in his field.

Welcome, Doctor Wingfield.

DR. WINGFIELD:  Thank you, Doctor Gutmann and thank you to all of you for the opportunity here to talk a little bit about the National Science Foundation and the BRAIN initiative.

I'd like to start out just by reminding everybody that NSF is a funding agency where we make awards related to basic research, fundamental research.  And in the biology directorate this is related to health, food, energy and the environment.  So the brain actually does have relevance to those other areas as well, which I can talk about later if people are interested.

Having said that, let's focus now on the BRAIN initiative and as we've already heard this is focusing initially on neuro technologies.

Foundational knowledge, tools for higher-resolution measurements, computational models and theoretical frameworks, as Doctor Chalmers pointed out.  But also the data issues: data storage, management and analysis.  It's not entirely clear how we're going to move forward with those.

So the NSF role, as we see it, we are uniquely positioned because we have seven directorates, engineering and other science domains and social, behavioral and economic sciences that we can bring together in various combinations to advance tool development, but also educate the workforce needed for the BRAIN initiative to succeed several decades down the road.

So specifically, what we are supporting right now: determining the genomic architecture, such as patterns of gene expression, but also the epigenome.  We know from research in animals that the environment has huge influences on how brains develop, down to the level of synaptic activity in neural circuitry.  So we are focused very much on developing molecular probes, improved ability to sense and record neural network activity, imaging and related nanotechnologies.

We also are working to establish conceptual and theoretical frameworks.  In fact, we have a workshop coming up in March next year organized by the Directorate for Mathematical and Physical Sciences which will be focused precisely on how we develop these theoretical frameworks.

Then going on to the social, behavioral and economic sciences: cognitive neuroscience linking brain activity patterns to cognitive and behavioral functions in specific ecological, evolutionary, developmental and social contexts, and applying social science theories and methods to link brain activity to actual human behaviors.

There's an awful lot of science coming together here, particularly from the engineers in developing new sensors, nanosensors, where we can actually follow animals in the field as they go about their everyday activities: feeding, reproducing, avoiding predators and so forth.  And it's not that far away that we'll be able to do the same with humans.

A final example here before we talk about the ethical concerns.  This comes from engineering, where there is a lot of interface of engineering with the science domains.  This one is from an engineering research center that developed a retinal prosthesis, particularly for patients with retinitis pigmentosa.  A video camera affixed to glasses can transmit information to this prosthesis on the retina, and the patient can actually recognize some letters and improve mobility, object localization, motion detection and so forth.  This is just one example.  We had a workshop last week with engineers on engineering the brain, which was just phenomenal.  But I would say at least half of that conference was focused on ethical issues.

So moving on to that, how do we manage or regulate rapidly evolving technologies?  This is something that is just really beginning to sink in at the National Science Foundation: that the regulations and management issues of today will be very different tomorrow as these technologies develop.

Do we need different principles to guide ethical policies?  And neuro-technology can include intelligence, defense, medical, personal.  I would add, from the NSF perspective, environmental and agricultural as well.  Should there be a distinction based on the intent of the use: treatment versus enhancement?  This was a major issue at the workshop last week -- to what extent brain enhancement, cognitive enhancement, will be possible in the future.

One example given is that today we're worried about baseball players using steroids to enhance performance.  Tomorrow it may be enhancements of perception and motor responses in both pitchers and hitters.

Who will manage these policies?  When and where should neuro-ethics education start?  Some of the studies the Commission has supported said this should start in high school.  But at NSF we are focused mostly on undergraduate and graduate education.  And this is something that we're beginning to talk about now.  In fact, how we can update, revise and improve undergraduate education across the sciences, particularly biological sciences.

Some bioethical concerns are of particular importance to the National Science Foundation, and one is dual use research of concern.  This came out of a recent study on H5N1.  There were two studies: one published in Nature and another in Science, concerning work on potentially very pathogenic organisms and also organisms that produce toxins.  This now is starting to expand and will cover many other aspects of biology.  In fact, we're starting to see dual use research of concern in many other areas as well, which I think will include brain research in the future.  What dual use research of concern means is that the technologies were developed for good purposes, but there is the possibility that they can be used, particularly in bioterrorism and so forth, in the future.  So that's where the dual use comes from.

Synthetic biology -- the Commission has already released a report in 2010 on this.  But again, synthetic biology is something we fund an awful lot of: the basic research that goes into synthesizing life-like systems that make things for us, but also nanosensors that can be infused or injected into organisms and ultimately humans as well.  We already heard about microelectrodes and so forth, which are also coming out of synthetic biology.

Animal science is one that we are concerned about because we do fund a lot of basic research, and an awful lot of regulations come out of the Office of Laboratory Animal Welfare at NIH and also the Office for Laboratory Animal Research.  But we fund an awful lot of research on wild animals -- animals that are actually moving around and operating under natural conditions.  And it's not always entirely clear that the guidelines for use of laboratory animals are even relevant to wild animals, and this causes a tremendous amount of concern amongst animal rights activists and the public in general.  And there are some very real ethical issues, I think, especially as we're looking at using nanosensors that can be followed from space.

Protecting human subjects, of course, is also an issue.  Again, the Commission released a report in December.  This is very linked, I feel, to animal science and the sorts of technologies that we are developing that will be applicable to humans in the near future.

One issue, too, that's developing into a major concern for us is the brain-machine interface: invasive versus noninvasive interfaces, implants (we've already seen an example of that), brain stimulation and augmentation, prosthetics and mind control.  All of these are things we are now beginning to address as a foundation, and we hope to see more guidance and leadership from the Commission here.  Because I think, across the seven science domains and engineering that we have at the National Science Foundation, this is developing into such a complex issue related to the brain, and the ethical concerns underlying it are going to be very, very plastic and changing constantly in the future.  So I think we have a huge challenge here.

So future steps: improving accountability; the ethical underpinnings of regulation -- should investigator obligations be explicit in policies and regulation; and engagement by the communities, not just funding agencies like us, but researchers, institutions, private foundations, industry itself, non-governmental organizations, students and, of course, the public at large.

I'll stop there.

DR. GUTMANN:  Thank you very much.  Our cleanup hitter here, who concludes this panel before we open it up for questions, is Doctor William Casebeer.  Doctor Casebeer is a program manager in the Defense Sciences Office at DARPA, the Defense Advanced Research Projects Agency, where he develops science and technology dealing with the neurobiology and psychology of training, education and influence.

During his 24 year service in the Air Force, Doctor Casebeer served as an intelligence officer and as Associate Professor at the United States Air Force Academy.

He is the author of Natural Ethical Facts: Evolution, Connectionism, and Moral Cognition.  And his current research includes work on neuroethics, the evolution of morality, the intersections of cognitive science and national security policy, philosophy of mind and military ethics.

Quite a combination, welcome.

DR. CASEBEER:  Thank you, Doctor Gutmann.  It's an honor and a pleasure to be here and I appreciate the invitation to talk before the Commission about these important issues.

Like our other two partners in the BRAIN initiative, the NSF and the NIH, DARPA, I think has an important role to play in funding and guiding BRAIN initiative-related work.  And when you mention DARPA you may immediately ask what interest does the Department of Defense have in brain sciences?

So let me revisit for you briefly what DARPA's mission is.  The Defense Advanced Research Projects Agency was founded in the wake of Sputnik's launch.  We found ourselves strategically surprised by an adversary that had developed a technology that was a game changer in the national security domain.  So our mission is to prevent strategic surprise like that from happening again and, where possible, to enable our soldiers, sailors, airmen and Marines to create it in our adversaries, so that we can prevail on the battlefield and ideally prevent battles from happening at all.  So that's DARPA's mission.

Given that human beings are an integral part of warfare, it's no surprise then, for the reasons Doctor Chalmers outlined quite well -- the brain being an important driver of behavior -- that the Department of Defense would be interested in developing neuro-technologies.

So one caveat I should place on the type of work that I'll talk about in the next 8 1/2 minutes is that DARPA is a fast and lean organization.  So some of the programs and efforts I'll mention to you may very well, by the time a report is issued from this Commission, have been transitioned and gone on their merry way.

           And other programs and initiatives that I don't mention today will likely be started.  So DARPA is very much driven by the passion and expertise of the particular program managers who are brought on board to make their vision for developing technologies to give war fighters new capabilities a reality.

So let's first begin then with the "what" we intend to do in the neuroscience domain inside of DARPA.  I think the useful way to think about this is to walk through our work in four separate categories.

First of all, we can use our efforts in neuroscience to help us understand how we protect, repair and restore the brains and minds of war fighters.  So think of someone who’s deployed multiple times, down range into harm’s way, who’s experienced multiple traumatic brain injuries.  Is there some way we can use findings in neuroscience and the BRAIN initiative to help them recover from those injuries so they can live a normal life?

So protect, repair and restore is one important aspect of our neuroscience work.  Programs that come to mind here are things like the REMIND and RAM programs.  Those are both contrived acronyms for programs that look at the neural correlates of memory: how memories are encoded, how they are recalled.  With the idea that we might be able to build a device that will help you jump the gap in case of brain injury.  So somebody sustains hippocampal damage, which means they can no longer remember their past.  Is there any way we can build a multi-input and multi-output device or implant that can help restore those missing connections so that that person could have a normal system of memory again?

So those are very important parts of BRAIN initiative work, given the types of injuries that war fighters face on the battlefield.  And this, of course, extends to the nonvisible damage that war fighters sustain in conflict, including the Diagnostic and Statistical Manual-diagnosable mental illnesses that war fighters can suffer from.

So I would fully expect that DARPA would develop BRAIN initiative-related work that would let us use REMIND- and RAM-like devices, perhaps, to treat DSM-IV- and DSM-V-diagnosable mental illnesses.

Second major area: leveraging the brain.  All right, so first is protecting, repairing and restoring.  The second is leveraging.  Are there signals, for instance, that the brain produces that we can monitor and use so as to give the war fighter better capabilities on the battlefield?  One example of this is a program called NIA, Neurotechnology for Intelligence Analysts.  That's actually coming to an end; it's lived its full life span.  There we've used electroencephalograms, EEGs, to monitor electrical activity over the surface of the scalp to look for signals related to detection of objects in images that imagery analysts in the intelligence community may not have otherwise been aware they were seeing.  So if you've got thousands of miles of photographs from overhead satellites to look at, for instance, as part of your daily work, is there some way we can look for a signal that lets us know, oh, these five percent of those thousands of miles are the important ones to examine if you're looking for a particular surface-to-air missile site, for instance, so you can understand the threat?

So that's an example of leveraging the brain to give the war fighter an important capability.
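As a purely illustrative sketch of that kind of single-trial detection, the following Python snippet builds a template from simulated EEG epochs and flags "target" epochs by projecting onto that template.  The channel count, sampling rate, the shape of the evoked response and the threshold are all invented; this is not a description of the NIA program's actual pipeline.

```python
# Illustrative sketch only: template-based detection on simulated EEG epochs.
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_samples = 32, 256             # one-second epoch at 256 Hz (assumed)

def epoch(contains_target):
    """Simulate one EEG epoch; target images add a late positive bump."""
    data = rng.normal(scale=5.0, size=(n_channels, n_samples))
    if contains_target:
        bump = np.exp(-0.5 * ((np.arange(n_samples) - 150) / 15.0) ** 2)
        data[:8] += 4.0 * bump               # bump strongest on a few channels
    return data

# Build a template from labeled training epochs (average target response).
targets = [epoch(True) for _ in range(50)]
nontargets = [epoch(False) for _ in range(50)]
template = np.mean(targets, axis=0) - np.mean(nontargets, axis=0)

def score(e):
    """Project an epoch onto the template; higher means 'probably a target'."""
    return float(np.sum(e * template))

threshold = np.median([score(e) for e in targets + nontargets])
hits = np.mean([score(epoch(True)) > threshold for _ in range(100)])
false_alarms = np.mean([score(epoch(False)) > threshold for _ in range(100)])
print(f"toy hit rate: {hits:.2f}, false alarm rate: {false_alarms:.2f}")
```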

Third significant area is augmenting the brain.  That is, brains and minds don't come preloaded to perform well on the battlefield, if you will, right.  So we have a process, training and education, that takes someone's mind and brain and gives it additional capability through a process that's normally pedagogical in nature.  So can we use our understanding of the brain to help us develop better teaching and learning tools that use those signals I mentioned earlier to help us augment the capacity of brains?  An example here is a program called Accelerated Learning, which looks at the neural differences between novice and expert performance in multiple domains and then tries to develop technologies that make novice brains look like expert brains more quickly, in areas like group performance and second language learning.

And one of the images that Doctor Koroshetz showed in his presentation was actually from a lab that was funded underneath this program, Walt Schneider's lab at the University of Pittsburgh, that used diffusion tensor imaging to help us trace out some of those white matter tracts that are responsible for connectivity between large parts of the brain.

Finally and fourth, we have the emulate notion.  So by studying the brain well, we might be able to emulate the things that the brain does very well.  We have a pretty amazing three pound universe, as Doctor Chalmers pointed out, sitting on top of our spinal cord.  And it operates at very low wattages, produces all other things being equal, relatively little heat and yet does amazing computations, incredible computations.  Enables us to get around in the social world, reason morally and ethically, make judgments and decisions that have important consequences.  So can we study the brain more closely and help build computational systems that are artificial that might emulate the strengths of the brain?

So those are the four major buckets of research that I anticipate DARPA would continue to invest in keeping in mind the caveats I mentioned earlier about program manager expertise and passion.

The way we can do that, I think, is to break our explorations up into two general domains.  And here I'll use engineering speak: looking at transducers and effectors.  Are there technologies we can develop that will help us translate those signals in the brain into something that's usable to help us in theory development or technology development?  And then, are there novel effectors that we can develop that will help us intervene in the brain, say in the case of clinical pathology, to correct the condition and restore someone to normal functioning?

Now, developing effective transducers and effectors is something DARPA has always been interested in and has always invested in.  It has downstream consequences for everything, ranging from big data -- we're going to have to develop a way to harness angstrom-to-meter-scale databases that will help you build that unified theory that David mentioned in his second point in the presentation that kicked off the panel -- all the way over to new types of technologies that will let us change those brain states quickly and efficiently, especially in the case of pathology.

Now, you may think that in a sense we are out at sea in a sieve when we think about the ethics of that whole enterprise and what I've mentioned to you in the last 7 minutes.

Where I'm going to end, in my last minute, is to suggest that we're not at sea in a sieve -- that we can use a lot of the traditional principles and standards from biomedical ethics and military ethics to see our way through to a framework for thinking about the ethics of these issues.

So when people grade diamonds they talk about the 4 C's, color, cut, clarity and carat weight.  And I would argue that in the ethics domain we would consider at least 3 C's.  The C's of character, consent and consequence.

On the character side, drawing on the wisdom of Aristotle and Plato: are the neurotechnologies that we are developing enhancing human flourishing rather than standing in its way?

On the consent side, drawing from the theories of Immanuel Kant and respect for persons: are we obtaining consent both on the experimental side and when we use these technologies, and are the subjects and the human beings involved aware of their impacts?  So are we respecting autonomy?

And finally, on the consequential side, the third C is about consideration of Mill-style utilitarian concerns.  Are we doing all we can to ensure that our neurotechnologies have good consequences?

We have the traditional mechanisms in place at DARPA to help us think about those issues, including an ELSI-style review before every program is launched.  Of course, we follow the traditional IRB regulations, as well as having a second-level Department of Defense review, and our program managers are empowered to empanel groups of experts in their programs that provide advice on all three of those C's.

We look forward to hearing from the Committee about how we can improve the way we think about ethics in the context of neuroscience in the BRAIN initiative.

And thank you for your time.

DR. GUTMANN:  Thank you.  Wonderful set of comments and I'll begin with a question for any or all of the panelists and then open it up for other Commission members who have a question.

So it's a follow-up to what I think everybody here not only agrees with, but is passionately interested in doing, which is to make sure that we simultaneously enable the best of neuroscience to move forward and at the same time make sure it's ethical science, so that nobody a decade or two decades or 50 years from now looks back and says, what were they thinking.  Meaning, not that we should have done a neurological image of their brain, but that they were, in some commonsensical, ethical way, doing something which, to quote the New York Times article that became so relevant to the Guatemala case, was ethically impossible.

So why weren't they thinking ethically and scientifically?  You all expressed concerns for the science going forward and for it going forward ethically.

So my question is very simple.  What do you think is the most important, potentially neglected, ethical issue in your science?  I think Doctor Casebeer is absolutely right: it's not that new ethical principles need to be invented here.  But there does need to be not only internal but public assurance that this science is going forward ethically.  It's our job to answer that question eventually.  But we have you here, and I would like you to say what you think is one of the most important ethical considerations that you are potentially concerned about.

I might call on -- I will call on Doctor Koroshetz if he doesn't answer because I think -- I didn't catch what you thought in the neurological -- in what you do.  But let's start with John Wingfield.

DR. WINGFIELD:  One issue that's already struck me is that I think all of us agree that we should continue to fund basic research without any limitations and so forth.  And I'd like to give one example, and it goes with the example that Doctor Koroshetz gave with optogenetics.  Those ion channels that are hooked up to rhodopsins, light-sensitive proteins, were first identified in algae.  And this was NSF-funded basic research.  So one of the components of the tool came from a totally unexpected area, and Taq polymerase came from work that was done on bacteria in hot springs in Yellowstone.  So we're funding a lot of basic research here.  Sometimes it's immediately obvious where it has application.  Sometimes it sits dormant for 20 years and then emerges.  But then you have these brilliant people like Deisseroth who put all this together and develop new techniques.  And I don't think any of us would want to inhibit that, but the ethical issues that then arise from that kind of research -- how do we prepare for them, not knowing what might be emerging?

And my colleagues in the Directorate for Mathematical and Physical Sciences say, well, this is rather similar to the situation 70 years ago when the nuclear bomb was first developed and that heralded the age of nuclear power.

Are we thinking what were they thinking back then?  I think there have been a lot of good applications in nuclear medicine, for example, and we have been able to keep it under control, sort of, until now.

That's something I find of concern at the NSF where we have all of this fundamental research that could give rise to anything.

DR. KOROSHETZ:  So I think the things that I would be worried about are replays of what's happened in the past, but now instead of clubs we have, you know, machine guns to worry about.

So if you think about it, ECT went through various phases.  I think that that could repeat itself, and it would not just be depression.

It could be other areas where there's evidence that affecting the nervous system will have some outcome which people judge to be good, and that could get out of control.  Particularly, I think, if people don't quite understand what the issues are and what the expectations are, and how to correct them.  Can I change your memory by putting a chip in your head?  A lot of people will go to China to have stem cells put in to treat their Alzheimer's with no evidence that it works.  So I think that's the kind of scary part in terms of affecting the nervous system.

The enhancement issue is worrisome.  I mean, you develop skills by changing how your brain is wired and connected.  So now we're going to be able to interrogate the nervous system and find out how that happens, and there are good things there.  We're looking at stroke recovery; we should be able to find out how the brain rewires after stroke to give you a good functional outcome.  But you could also imagine that some foreign government could use some virtual reality testing, looking at feedback from the nervous system, to turn a person into something that they weren't before, for some other purpose.

So I think the power to affect the nervous system is going to change dramatically, but it may be decades as opposed to years.  So I think those are the areas -- the enhancement, and the interrogation of the nervous system, where you can potentially look at people and get behavioral profiles.  Say you want to marry somebody who is very friendly and has equanimity, and there's a scan that tells you that's the kind of person for you, or do you want to -- there's lots of things where you can…

DR. GUTMANN:  That sounds pretty good to me.  That was very helpful, very helpful.  Doctor Casebeer, do you want to --

DR. CASEBEER:  Yeah, one quick comment: my concerns are more generic, I think, but they are twofold.  First is just the potential for misuse.  So I worry about the misuse of the basic science both by commercial entities and by actual and potential adversaries.

The second thing I worry about is over-interpretation of the results.  So I think neuroscience work is very attractive in the popular mind, and we can often attribute a certainty to it that might result in us moving too quickly on a technology where we ought not to.  But that cuts both ways, right.  We have a charge at DARPA especially to wear the white hats, if you will, and step in to prevent misuse and to develop technologies to address adversaries who are trying to use the technology in various fashions.

DR. GUTMANN:  Good.  I know that John Arras is on the phone and has a question for Doctor Chalmers, so I'm going to ask him.  But I just want to underline something that Doctor Casebeer just said, which is the over-interpretation of results, because that isn't science fiction; that is happening already.  And I think there's a real scientific and ethical convergence of concern and interest there which we may be able to at least speak to and make some suggestions about.

John Arras, are you on the phone?

DR. ARRAS:  Amy, I am.  Can you hear me?

DR. GUTMANN:  We can, loud and clear.

DR. ARRAS:  Oh, great.  Okay, so I want to thank our speakers for really thoughtful and clearheaded exposition of these issues.  And I'm sorry I can't be with you in person today, but here's what's troubling me.  In a way this is really a kind of gloss on Amy's question just now.

As a layperson in this area trying to get educated, I go to Amazon.com and I notice really an explosion of interest and literature in what's come to be called neuroethics.  Scores and scores of books.  There are now societies devoted to the exploration of ethical issues.  But you know, somebody who has been around for a long time, a bit long in the tooth like myself, might suspect a lot of this is hype, okay.

For example, it's been claimed that the use of fMRI in the courtroom will have revolutionary results and cast doubt on our traditional notions of responsibility and freedom.  And this has spawned a countermovement, I guess; people like Sally Satel are arguing that a lot of these ethical issues that are being raised are based on hype, you know, are vast overextensions of what we actually know.  So they're arguing that this is all or mostly hype.

So I would ask this panel, and then Doctor Chalmers in particular, to help guide us, to help us separate the wheat from the chaff.  What do you take to be the really enduring, important problems generated by this kind of research, and what do you take to be really, sort of, you know, exaggerations or dead ends that we'll regret having spent time on?

DR. CHALMERS:  Well, I guess what we find in areas like this is that there is a lot of hype and overstatement.  That's typically in the short term -- people have a lot of incentive to exaggerate the pace of the progress for funding or commercial or self-interested purposes.  So a lot of what's out there in the press just comes from the people who speak the loudest.  But a lot of the time that's just a matter of degree.  So you'll hear that something is happening now when, in fact, it's not happening now, but maybe it's going to happen in a couple of decades' time.  That's just a few seconds in the pace of history, and I think today's hype often ends up being tomorrow's reality.

You know, can we right now do mind reading?  With a brain image, no, not in any particularly effective way.  You can get signals as to the very broad character of the kind of mental state a person is in, you know.  In one of these famous studies you can tell whether certain areas are active, are engaged in spatial imagination or motor imagination, imagining walking through the house or imagining playing tennis.  Can you look at someone's brain and tell what they're thinking?  No, absolutely not, although there are people who are trying to promote technologies to that effect.
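As an illustrative aside, the kind of coarse, two-category decoding Doctor Chalmers describes, telling a tennis-imagery trial from a house-navigation trial, amounts to a simple classifier on region-of-interest signals.  The sketch below is a toy under stated assumptions: the trials, region responses, and accuracy are all simulated, not taken from any actual study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated ROI-averaged signals for two imagery conditions:
# "imagine playing tennis" (motor-related regions respond more) versus
# "imagine walking through your house" (spatial regions respond more).
rng = np.random.default_rng(0)
n_trials, n_rois = 40, 6
tennis = rng.normal(loc=[1.0, 0.9, 0.1, 0.0, 0.2, 0.1],
                    scale=0.5, size=(n_trials, n_rois))
house = rng.normal(loc=[0.1, 0.0, 0.9, 1.0, 0.2, 0.1],
                   scale=0.5, size=(n_trials, n_rois))

X = np.vstack([tennis, house])
y = np.array([0] * n_trials + [1] * n_trials)  # 0 = tennis, 1 = house

# A plain linear classifier separates the two coarse task categories;
# nothing here recovers the content of the imagined scene itself.
clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```

The point of the toy is the contrast being drawn: distinguishing two broad task categories is tractable, while reading out what a person is actually thinking is not.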

Might we be able to do this in a few decades, once we've got the kinds of technologies which the BRAIN Initiative promises, to map out states of the brain neuron by neuron and correlate them with states of the mind?  Well, maybe.

So I can say it's hype today, but in 20 or 30 years' time it may be reality.

DR. KOROSHETZ:  I'll just add there is an experiment, which is very primitive, but you can record from the hippocampus in a rat and tell from the pattern of the firing where the rat is in a maze.  So that's a very primitive kind of approach to what you're saying.
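A minimal sketch of the kind of decoding Doctor Koroshetz refers to: if hippocampal place cells have known position tuning and roughly Poisson spike counts, the rat's position can be read off from a single window of firing.  The cells, tuning curves, and maze below are all simulated assumptions, not recordings.

```python
import numpy as np

# Hypothetical 1-D maze discretized into position bins, covered by a bank
# of simulated hippocampal place cells, each tuned to a preferred spot.
rng = np.random.default_rng(0)
n_cells, n_bins = 30, 100
positions = np.linspace(0.0, 1.0, n_bins)
centers = rng.uniform(0.0, 1.0, n_cells)

def tuning(center, peak_rate=20.0, width=0.08):
    # Expected spike count per time window at each position bin.
    return peak_rate * np.exp(-0.5 * ((positions - center) / width) ** 2) + 0.5

rates = np.stack([tuning(c) for c in centers])   # shape (n_cells, n_bins)

# One time window: the rat sits at a "true" bin and each cell emits
# Poisson spike counts according to its tuning curve at that bin.
true_bin = 42
spikes = rng.poisson(rates[:, true_bin])         # shape (n_cells,)

# Maximum-likelihood decoding under an independent-Poisson model with a
# flat prior: log P(position | spikes) is, up to a constant,
# sum_i [ spikes_i * log(rate_i(position)) - rate_i(position) ].
log_post = spikes @ np.log(rates) - rates.sum(axis=0)
decoded_bin = int(np.argmax(log_post))
print(f"true bin: {true_bin}, decoded bin: {decoded_bin}")
```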

DR. GUTMANN:  We'll come back to some of this in our roundtable discussion.  I have a list of people's questions, and we'll get to as many as we can, and then we'll have another chance later this morning.

Jim, you're next on my list.

DR. WAGNER:  A quick question, and if it requires too long an answer we can move this one along also.

Doctor Chalmers, you can answer in your own right, and I would like you to.  The other three of you all represent agencies that are funding this kind of research.  I'd be interested in your assessment of the degree to which the researchers you fund are actually sensitive to the several ethics -- bioethics -- issues that each of you mentioned; you start with privacy and you end up with the three C's, and there were a bunch of things in between.

And secondly, what do you three in particular, well, all of you, imagine the role of funding agencies ought to be in ensuring, in research like this or in research broadly, that ethical considerations are at the forefront of the protocols that these researchers are proposing?

DR. KOROSHETZ:  I would say two things.  One is that the people who are developing the technologies are not thinking ethics.  They're real engineers.  They're hardcore scientists.  The people who are working in the human realm are thinking heavily about ethics.  And I think for the review committees, as they review grants, ethics is high up on the list.  I don't know if Christine wants to comment --

DR. GUTMANN:  Can I just say, when you said they're real engineers, the vice chair of our commission is a real engineer and he thinks ethically as well as scientifically.

So that is a challenge -- we would argue that to be a real scientist and a real engineer you must also think ethically; otherwise, you're going to do things -- you won't even know whether what you're doing is a legitimate enterprise.  I'm not -- believe me, what you said I think is absolutely an accurate response to Jim Wagner's question.  But it is our joint mission to make the idea of being a real engineer and a real scientist include having the ethical part of your brain firing as well as the scientific part of your brain firing.

           And I say that metaphorically, not neurologically speaking.

DR. WINGFIELD:  I agree that a lot of the PIs who are developing the straight technologies, what we call the components of these tools, are not always thinking about ethics.  But one thing that really struck me at the meeting we had last week with the engineers, on engineering and the brain, was that because it was focused on the human brain, 50 percent of the meeting was a debate on the neuroethics.  It was really very, very interesting, including the potential misuse that all of you have brought up here, and also the hype and so forth, which I think all of us have to deal with from time to time.  I've generally found that, at least on Biology Directorate panels, the PIs cut right through that.  And they try to set the record straight, but sometimes reporters don't always listen, as we all know.

And as for what we are trying to put in place for this: we fund research, so we don't actually do any research at NSF except in our own labs back at our institutions.  So we feel we have a dialogue with the institutions, and they are responsible as well, to a certain extent, for what their faculty do.  And you know, if you are funding people in industry and nongovernmental offices that are not academic, then this becomes even more complex.  I think we're just starting to address that issue.  I mean, for the animal care and use ethical issues we have a good system going with the Institutional Animal Care and Use Committees.  But beyond that there's not a whole lot.

DR. GUTMANN:  Thanks.  I'm going to ask Christine to answer.  And again, those of you who haven't been able to answer each of our questions I hope we'll have the roundtable to continue.

Christine, then I have Raju and Nita.

DR. GRADY:  Actually, I wasn't going to answer anything.  I was going to ask -- is that okay?  I was going to ask a question, not answer one.

Actually, building on this conversation -- first of all, thank you, all of you, for your presentations.  Building on this discussion, I wanted to make two observations.  One is that even if the ethical principles don't change, as I think Doctor Koroshetz said, some of the questions are the same questions, just with new twists to them.  It seems like maybe there are two levels at which there might be some important changes to talk about and recognize.

One is conceptual, because you talked about how privacy, for example, might have a different meaning than it used to.  Even respect for persons -- maybe persons might have a different meaning.  Consciousness might be different.  These are concepts that we may have to revisit.  So that's one level I'd love to hear your thoughts about.

And the other level is what Doctor Wingfield just referred to.  You know, the specific rules and guidelines that we currently use for human subject review, for animal subject review.  And the question is whether or not they're adequate for the kinds of technology and the kinds of science that we're already starting and anticipating in the future.

DR. GUTMANN:  Would anyone like to comment? Doctor Chalmers.

DR. CHALMERS:  Yeah, maybe I could connect that to what Doctor Gutmann raised earlier about potentially neglected issues and concerns.

Some of the ethical issues that get raised by all this are sort of obvious: privacy, mind control, enhancement.  Some of them are less obvious and they will sneak up on us.  You mentioned changes in the conception of a person.  I mean, many of the issues which have snuck up on us in the past come roughly from expanding our conception of a person or an object of ethical concern.  We start with maybe members of our local village or members of our nation or our race, and that expands out to all of humanity, and now we're very sensitive to ethical issues concerning non-human animals.  This is an expanding circle of concern that people have thought about.

And I want to connect this to something that Doctor Casebeer mentioned, which is that DARPA is starting to think about work on emulations and simulations.  And right now we just don't think about that as a potential object of ethical concern in its own right.  You don't think about the machine that your simulation is running on as a potential object of ethical concern.  But at some point this is going to become at least a question that needs to be raised.  I mean, certainly once we get to the point of, say, recording a whole human's brain state on a computer and simulating it, we're at least going to have to raise the question: is a simulated person a person with its own ethical value?

Even before then, if we start doing our studies on simulating the brains of mice and pain: is that simulated mouse feeling pain?  And if so, do we need to think about ethical guidelines for our use of computer simulations?  Of course, it sounds like science fiction now and it sounds a bit way out.  But this is precisely the kind of question that ends up sneaking up on you.  And in retrospect, people say, you know, what were they thinking then?  But at some point someone is going to be thinking about that issue, if not now, then in a decade, and I guess this is the Commission for doing it.

DR. GUTMANN:  Raju.

DR. KUCHERLAPATI:  Thank you.

Thank you very much for the discussion so far.  I heard some of you addressing the sorts of ethical issues raised by the development of new technologies.  What I wanted to ask all of you is this: are the sets of issues that we should be concerned about fundamentally different from those raised by all of the technological developments that happened in the biological sciences over the past 30 or 40 years, in my active career?  And let me give you some examples of them.

In the 1970s, you know, the discovery of recombinant DNA technology really raised tremendous sets of issues, and, you know, the city where my university is actually shut down the ability to do recombinant DNA research until the NIH came along with guidelines and things sort of quieted down with many of those issues.  Some of the issues were dealt with.  But you know, a consequence of all the recombinant DNA technology was genetically modified foods, and we're still dealing with those types of issues throughout the world.  So there are lots of ethical issues about that.

Or a little bit later on, when the Human Genome Project was launched in 1990, there were also whole sets of issues about this technology and what it would do, and we're still dealing with those types of issues; the Commission's deliberations yesterday reviewed one aspect of the consequences of that.  And just to mention stem cell biology and the implications and the ethical issues, and so on and so forth.  And as the Commission has mentioned earlier, one of the first things that we tackled was synthetic biology.  Each of these really dealt with a new technology that raised ethical issues.  This Commission felt that there were some basic, fundamental principles of ethics that need to be considered and that the framework is the same for all of these different things.  But the question for you is whether, you know, the technologies that we're talking about in the BRAIN Initiative are fundamentally different in thinking about the ethical issues or whether they're the same.

DR. CASEBEER:  That's a great question.  My opinion is that there is not much new under the sun in this domain, and that the principles that come from both the East and the West, all right, from the ethical traditions of the West as well as things like the Confucian tradition, can speak to a lot of the issues that we'll be confronting in the BRAIN project.  So the three C's idea is one that I think is built on that touchstone: some of the traditional tools we use for moral analysis can be applied in this domain as effectively as they have been in other domains.

So I think that any differences you see will be mostly of degree rather than of kind.  So take, for example, the notion of mind control that we discussed earlier, or the notion of mind reading.  You know, it's well known in the cognitive sciences that we actually have mental faculties that are designed to help us understand the mental states of others.  It's called theory of mind.  It's what allows us to make inferences about somebody else's intentions as they approach us.  It's what allows us to make inferences about whether someone is feeling pain so that I can be empathic, and so on.

I think that the issues we struggle with as we think about how brain understanding will influence questions of mind control are going to be a lot like those we already face in interactions between individuals, where one individual might be especially good at sensing bodily signals and tells and making inferences about someone else's mental states.  So I don't think that there will be that much new under the sun.

It will be a difference of degree rather than of kind.  Although, I will say, in the popular mind I think there are certain notions about the received image of humanity that the BRAIN project might challenge, and that can lead to some discomfort or a feeling that some of our traditional tools of ethical analysis might not fit well.

So dualist assumptions, assumptions about the nature of mental states that would make the study of the brain entirely irrelevant to the study of the mind.  I think for anyone who comes preloaded with some of those assumptions, you might think we have something radically new here.

DR. GUTMANN:  Nita.

DR. FARAHANY:  So first, thank you for these presentations.  I think it just underscores what an exciting time we are in and the tremendous developments that we can anticipate over time.

I do worry a lot about neuro-hype, and neuroethics is an area that I spend a lot of my own personal focus on.  So I wanted to gauge from your presentations whether a characterization that I'm going to present to you kind of makes sense as to the current state of the science and where we are.

So we can kind of break this down into a few different categories: awareness of what's happening in the brain, access to the brain, and alteration of the brain.  And think of those on a spectrum, right: build the foundational awareness knowledge, then ask what are the things we can learn from access to and interrogation of the brain, reading off memories if there is such a thing as a stable memory, and then alteration of the brain.

We are still, from my perspective, in the infant stages of awareness of the brain.  We have made some inroads into access to the brain and being able to decode what's happening there from the kinds of studies that Doctor Chalmers points to, some of the extraordinary work that people like Jack Gallant are doing at UC Berkeley.

And we've made some modest inroads into alteration of the brain, with improved ways of being able to do things like transcranial current stimulation or drugs or things like that.  But in those latter two categories, a lot of the different ethical issues that we raise, like mind control and interrogation, all presume that we can do those things surreptitiously and without the consent, awareness, and full compliance of individuals.  And we're nowhere close to being able to do anything like that in those latter two categories.

So is it fair -- is my characterization of this fair -- I mean, the BRAIN Initiative is really squarely, first and foremost, about trying to develop a baseline awareness.  And while it's really important for us to think about and grapple with these issues about access and alteration, and they are part of the goal once we've developed awareness, those are developmentally in-the-future kinds of issues, and we're not there or anywhere close to being able to do things like control, interrogate, observe memories, violate the privacy of the brain, things like that.  Is that fair?

DR. GUTMANN:  Doctor Chalmers.

DR. CHALMERS:  Yeah, I think what you're saying is pretty well accurate, at least from my perspective.  I think there's a good chance that within about a hundred years' time, people -- you know, say in the golden age of neuroscience -- will look back on the early 21st century as the period of prehistory, in some senses, of the science, because we lack at the moment a unifying theory and because we're so much in the dark about both the underlying mechanisms and the connections between brain and mind.

At the same time, I think that if what the BRAIN Initiative promises works out, and we really get to the point where we can monitor the neuron-by-neuron state of the brain and analyze it at that level, in the context of also monitoring states of behavior and so on, we can expect that kind of technology will actually put us in a position to start delivering on that hype and maybe actually entering that golden age.  But once again, I think it's probably a case where the hype is just a few decades in advance of the reality.

DR. GUTMANN:  Doctor Koroshetz.

DR. KOROSHETZ:  I would agree.  The only caveat I would add is that in the history of looking at how the brain is functioning, the people who take care of patients who have brain diseases are always trying to make a leap.  And so there are things like I mentioned, you know, identifying areas in the brain associated with depression and then turning them off.  That works.  There is no reason, no inherent reason, to think that you couldn't find five other areas that you could go after for a certain purpose.

So my sense is that new ethical ground will be broken by studies in diseased patients that interrogate and then alter the brain.  And then the hype comes from: oh, man, they can do this, and now they can make me really happy and more productive in society, so why don't I go to China and have this area stimulated.  So that's kind of the caricature which would worry me.

DR. GUTMANN:  Yeah, I've been quoted previously as saying there are snake oil salespeople in every area.  And when you have breakthroughs in application that are really positive, which you have, there is the concern -- it's not just an intellectual concern, but an ethical concern -- about people who are out there selling things that not only can't deliver but can do real, real harm.

And that's not new to this field.  That's why I prefaced it by saying there are snake oil salespeople in every field.  But the more promising a field is in its actual ability to do some things, the more worrisome it becomes, and we don't want it to be deterred because of the people who are abusing it.  So that's really good.

For those of you who are hoping for a break, the vice chair and I made an executive decision that we're going to go right into the next session when we conclude this one, that we don't really need a break since you've stimulated our brains and minds so well here.

But before we do go into the next session, and before we thank all of you, I do have a question from a member of the audience that I want to read.  It's from Doctor James Giordano.  Doctor Giordano, where are you?

Thank you for this.  He is at the Pellegrino Center for Bioethics at Georgetown, and he's the chief of the neuroethics study program there.

And the question is as follows:

Given that neuroscience and neurotechnology are becoming ever more international in research and applications, might we consider cosmopolitanism as a possible fourth C that is important to the development, discourse, and articulation of neuroethics?

DR. GUTMANN:  Doctor Wingfield.

DR. WINGFIELD:  Yeah, the international issues are huge, especially since the UK -- well, the European Commission -- has launched its own human brain initiative.  So there are huge issues here, and I just want to point to one possible bright spot: the formation of the Global Research Council, which was convened at NSF in May last year.  They had a meeting this year in Berlin, and there will be one next year in Beijing.  It's now expanded from 40 countries, I think, to something like 80.  This will be a place where we could start to focus on an international dialogue and coordination on these issues.

DR. GUTMANN:  And we will certainly want to call upon an international group and we have a somewhat international group right here to thank right now.

So thanks to Doctors Chalmers, Koroshetz, Wingfield and Casebeer. Thank you, it was really terrific.

 

This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.