Presenters
Steven L. Small, Ph.D., M.D.
Stanley van den Noort Professor and Chair,
Department of Neurology
Professor of Neurobiology and Behavior
Professor of Cognitive Sciences
Director, Neuroscience Imaging Center
University of California, Irvine
Professor Emeritus of Neurology,
The University of Chicago
Paul J. Ford, Ph.D.
Program Director, NeuroEthics Program
Education Director, Department of Bioethics
Cleveland Clinic
Associate Professor, Division of Medicine
Cleveland Clinic Lerner College of Medicine of Case Western Reserve University
Helen Mayberg, M.D.
Professor of Psychiatry, Neurology, and Radiology
Dorothy C. Fuqua Chair, Psychiatric Neuroimaging and Therapeutics
Emory University School of Medicine
Transcript
SESSION 3: NEUROSCIENCE RESEARCH—CLINICAL INNOVATION AND APPLICATIONS
DR. WAGNER: Welcome back everybody. I hope you enjoyed a good break over lunch. We are going to turn our attention now to clinical innovation and applications through neuroscience research. We are going to hear first ‑‑ same process, panelists. We are going to hear from each of you in rapid-fire sequence, and then have an opportunity to engage you with questions from the panel.
And the first of our presenters is Dr. Steven Small. He is the Stanley van den Noort Professor and Chair of the Department of Neurology, Professor of Neurobiology and Behavior, Professor of Cognitive Sciences, and Director of the Neuroscience Imaging Center at the University of California, Irvine. He has a very large business card.
Dr. Small is a counselor of the Association of University Professors of Neurology and previously served as chair of the Section of Neurorehabilitation and Neural Repair of the American Academy of Neurology and also has served as a member of the Rehabilitation Prevention and Recovery Committee of the Stroke Council of the American Heart Association. And he is on the Advisory Committee of the Adler Aphasia Center. He is editor-in-chief of the international journal Brain and Language and founder and past president of the Society for the Neurobiology of Language.
I wish we had more than just a few moments to hear from you, but we are pleased to have you here.
DR. SMALL: Thanks very much, Chairman. Thanks to Amy, Jim and the committee for inviting me here. It's a great honor.
I was asked to talk about clinical ‑‑ to set the stage for my esteemed colleagues here on the neurological applications, clinical applications of some of the basic research that could be conducted under the Brain Initiative. And so I'm a neuroscientist. I'm a neurologist. I'm also a computer scientist, as you will hear in a second, and so I will focus on that. I will make a few ‑‑ in my slides, I make some allusion to some of the ethical issues, but I'm going to leave that to the discussion in the committee because that is more your expertise than my expertise, but I do make some comments about that as I go.
I figured for an ethics committee, I ought to tell you my disclosures. I get grants from the NIH and philanthropy at the University of Chicago and the University of California and from Elsevier--pays me a little to edit the journal.
What I will talk about today: first, I will talk a little about my personal view on basic facts of 21st Century science. And in part two, I will talk a little about the clinical applications.
21st Century translational research, to me ‑‑ a lot of what I am going to talk about today has to do with genetics and imaging, because genetics and imaging really are coming of age in the 21st Century. And one of the important things about genetics and imaging is that one gene rarely explains disease and one brain region rarely explains behavior. To get biomarkers, then, you have complex collections of interactions among different sources of data. And what has been able to put all that together, in my view, is high performance computing, and I will tell you a little about that.
The issue with high performance computing is that we have massive data collection and analysis about individuals and populations, and we can do massive computations about that.
The cerebral cortex is an enormously complex entity, as you know: more than 30 billion neurons just in the cerebral cortex. That doesn't include the cerebellum and other areas of the brain that also have lots of neurons. 100,000 kilometers of axons. As you know, the Human Genome is of a similar sort of complexity.
To show the complexity more clearly, if you just take five different regions of the brain and look at how many different networks there are that include those five regions, you end up with over a million networks, okay, with only five regions; similar with five genes, okay. So you start with only five regions of the brain, and you have over a million. Take a look at the slide: with eight regions, you have 72,000 trillion different possible networks, okay.
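[Editor's note: these counts follow from simple combinatorics, assuming a "network" means any pattern of directed connections among n regions, so that each ordered pair of regions is either connected or not. A minimal sketch (the function name is illustrative, not from the talk):]

```python
# Count possible directed networks among n brain regions, assuming a
# "network" is any pattern of directed connections: each ordered pair
# of regions is either connected or not connected.
def possible_networks(n_regions: int) -> int:
    ordered_pairs = n_regions * (n_regions - 1)  # candidate directed edges
    return 2 ** ordered_pairs                    # each edge present or absent

print(possible_networks(5))  # -> 1048576 ("over a million")
print(possible_networks(8))  # -> 72057594037927936 (~72,000 trillion)
```

With 80 or 1,000 regions per hemisphere, as mentioned next, the count is astronomically larger, which is the speaker's point about needing high performance computing.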
When we do brain imaging, and I do brain imaging for a living, mostly in language, as you heard, we sometimes look at 80 regions per hemisphere. Sometimes we even look at 1,000 regions per hemisphere. So what is the complexity, and what does it mean for 21st Century science and for clinical applications?
My old friend, Martha Farah, we were in Pittsburgh together a long time ago, has testified before this Commission but also wrote 10 years ago, "mindreading is the stuff of science fiction and the current capabilities of neuroscience fall far short of such a feat. Even a major leap in the signal-to-noise ratio of functional brain imaging would simply leave us with gigabytes of more accurate physiologic data whose psychological meaning would be obscure." That is 10 years ago, and it is my argument that high performance computing has changed that equation significantly.
The NSF has sponsored significant research into this through TeraGrid and XSEDE. You also can see high performance computing in the commercial sector and in the government sector. And this high performance computing is becoming a huge issue for us.
What does it mean? It means we have massive data storage, massive search capability. If we can record memories, intentions, disease risks, life expectancy, whatever we can record, we have enough computational ability now to search those spaces. Ten years ago, that might not have been true. Now, we know we have that computational ability.
So let me move over to clinical neuroscience and research topics. Very briefly, I am just going to give a glimpse ‑‑ and we can talk about it if we want during the discussion ‑‑ of brain circuit analysis, which sometimes is called mind reading; biomarkers and personalized medicine; and brain circuit alteration of the less invasive variety, okay.
Okay, I will pass this around if you want. I bought this for 200 bucks before I came here just to show you. So that is brain circuit alteration that is not so invasive. What is that? This is called tDCS, transcranial direct current stimulation. Yes, I will show you a picture in a second.
Then there is brain circuit alteration that is very invasive, which Dr. Mayberg will talk much more about, and then automated analysis leading to alteration. That is brain-computer interface work, where you do interpretation of brain circuits but then automatically do intervention, okay, based on computer algorithms.
So just briefly, brain circuit analysis: some examples of how this is used in clinical neurology and psychiatry now. Seizure detection in epilepsy: you have electrodes on the brain, and you try to read when a seizure is going to happen. Motor intentions: you try to read motor intentions. Do I intend to move? Do I intend to move my arm? Do I intend to move my leg?
Speech intentions: at Berkeley, there is a neurologist working on reading your intent of what you are going to say. And of course memories, which is a very, very big deal for post-traumatic stress disorder: reading memories, interpreting what these memories are. I want to point out that there is a biomarker for lie detection already in existence.
Some of the ethical issues related to this, and these are just my own, you know, ideas, accuracy and reliability of the information, the unintended findings that you get from these data and the breaches of volition and autonomy that they might lead to.
Biomarkers and personalized medicine: we have imaging biomarkers that can tell you about motor impairment in multiple sclerosis, and neuro-inflammatory biomarkers that are genetic. We can read all these things. We even have a virtual brain that can simulate fMRI signals and EEG signals on an individual basis. So I can take one of your images, put it into a simulation, have this computer program do something, and simulate the fMRI or EEG responses. That is called the virtual brain.
Issues here: early detection and prediction and what that means in social terms, the accuracy and reliability, unintended findings again, and just the realistic nature of the models. Next, brain circuit alteration. There are a couple of varieties: transcranial magnetic stimulation, and just behavior and pharmacology.
You talk about education being a way to change the brain; certainly it changes brain circuits. tDCS for depression and stroke: again, this is a tDCS device. It is over the counter; it's sold for gamers, although not only does it say it increases synapses, it also says that it stimulates your prefrontal cortex, okay. And you know some of the issues related to this: controlling someone's thoughts, creating false memories, and then the long-term risks and the direct-to-consumer issues that you heard about already.
This is the same slide you saw previously. This is TMS, transcranial magnetic stimulation.
And, finally, invasive brain circuit alteration, we shall hear more about in a minute from Dr. Mayberg. Deep brain stimulation is used for many things, Parkinson's Disease, tremor, dystonia, depression, pain, Alzheimer's disease. I'm sure Dr. Hauser and Dr. Mayberg can give you other examples. There are a lot of such examples.
Cortical surface stimulation: we used this stimulator here in a study of aphasia therapy. For someone who had a language disorder after a stroke, we did brain surgery and stimulated their brain to try to aid their recovery.
Stereotactic ablation is now back. It used to be called psycho-surgery. Now it is called ablation, and it is now stereotactic, for obsessive compulsive disorder: there is surgery to ablate a part of the brain for OCD. There is also the vagal nerve stimulator.
I think you know these ethical issues very well: the permanency of these things, the fact that these are irreversible, the side effects, informed consent. Is there such a thing as informed consent? What does informed consent mean in these cases? In my view, because I don't believe in this one-to-one relation between structure and function, which I call the phrenological fallacy: if you ablate one area of a large circuit, what does that mean?
And I want to point out that in China, almost 10 years ago, they were doing clinical ablation for drug addiction, and it is very challenging. They stopped that a couple of years later.
Finally, the last is brain computer interfaces. So we can take the circuit reading, like in seizure detection, and we can then predict seizures and then stop them automatically. So we can build a computer system that will read these, read the circuits and try to predict when a seizure will happen, and then stimulate in order to block the seizure, have that an automatic system.
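[Editor's note: the automatic read, predict, intervene cycle described here can be sketched abstractly. The detector and stimulator below are hypothetical stand-ins, not any real device's interface:]

```python
# Sketch of one cycle of a closed-loop system, as described for
# responsive neurostimulation in epilepsy: read a window of brain
# signal, predict whether a seizure is imminent, and intervene
# automatically if so. The predictor and stimulator are hypothetical.
from typing import Callable, Sequence

def closed_loop_step(signal_window: Sequence[float],
                     predict_seizure: Callable[[Sequence[float]], bool],
                     stimulate: Callable[[], None]) -> bool:
    """Return True if the system intervened on this cycle."""
    if predict_seizure(signal_window):
        stimulate()  # automatic intervention, no human in the loop
        return True
    return False

# Toy usage: flag a "seizure" when mean amplitude exceeds a threshold.
intervened = closed_loop_step(
    [0.9, 1.1, 1.3],
    predict_seizure=lambda w: sum(w) / len(w) > 1.0,
    stimulate=lambda: None,
)
print(intervened)  # -> True for this toy window (mean 1.1 > 1.0)
```

The ethical weight falls on the predictor's reliability: in this structure, a false positive stimulates the brain with no human review, which is exactly the automaticity concern raised next.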
Motor intentions for spinal cord injury or even stroke. Read the motor intentions off the premotor cortex, interpret what the intent to move is, and then stimulate the arm to make a movement, either a robotic arm or in fact the real arm, although it has mostly been done so far with robotic arms.
Speech or motor intentions, the same thing, and obviously the reliability fault tolerance, adverse effects and the automaticity of this thing, the fact that there is no human intervention. You read and it has to be reliable, and then you try to stop it.
So, in summary, this is my last slide. 21st Century neuroscience involves collection and analysis of large data sets. Explanations in neuroscience require network-level inferences, and networks contain massive amounts of data; high-performance computing will permit this work to succeed. In clinical neuroscience, we have the recording, analyzing and altering of these networks, and the creation of personalized medicine based on these networks. And the use of these data to characterize individuals and/or alter the brain has significant ethical implications. And that's it.
DR. WAGNER: Steven, thank you, fascinating, fascinating stuff. The second speaker for us on this panel is Paul Ford. Dr. Ford is the director of the NeuroEthics Program and education director of the Department of Bioethics at the Cleveland Clinic, Associate Professor in the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University.
For more than 10 years, he has been part of a Deep Brain Stimulator Team, and the Epilepsy Surgery Program, providing ethics advice in clinical cases and on research.
Dr. Ford co‑edited two books and is author of more than 70 publications. He lectures nationally and internationally on a range of issues that include neuroethics, clinical ethics consultation and bioethics education. It seems like the right guy for this panel. Thank you very much for being here, Paul.
DR. FORD: It is truly an honor to come and address this Commission. I'm delighted that you have selected this afternoon to talk about issues that are relevant to patients in clinical research. And particularly I was charged with discussing participant selection and consent as it applies to the neuro research that may arise from the Brain Initiative. And I will use deep brain stimulation (DBS) as the paradigm to address a few of the topics.
You notice in my title I talk about the patient-participant. I do that purposely; oftentimes it is most interesting in clinical research when patients then become, or are asked to be, participants in research.
There are many, many people who have influenced these comments. A few of my local collaborators I put on the screen. In particular, I am indebted to patients, research participants and families for sharing their stories with me, either in the interviews I have done with them for research or as I helped them to make some tough decisions about whether to have elective surgeries.
So of the whole universe of things we could talk about in this topic, I chose really five points for us to jump off on. The first is about complexity, which Dr. Small nicely has started, and so I will talk about that very briefly. We need to be careful that not all brain diseases respond similarly even to the same intervention. We need to actively dispel media and societal mythologies that arise out of these magical seeming neurotechnologies.
When we use the term "vulnerability," as we often do in bioethics and in research, we need to be very careful about the assumptions we make. And, finally, this is perhaps in the neurosciences research where patients need to be most seen as collaborators.
So Ockham wanted us to move to the simplest explanation rather than the more complex when we are given a choice, a reductionist idea. But Dr. Small, again, gave a nice overview of how things are likely to become even more complex in terms of brain circuits, rather than being reducible to a single part of the brain. This impacts ‑‑ we need to be careful about ‑‑ how we are going to provide good informed consent and educate our participants in research.
We have complex systems. And, again, Dr. Small nicely pointed out that we have unpredictable at times side effects, depending on how we move one system, it is going to affect other systems. So we need to again try to find the best way for consent as well as developing protocols.
In our oversight for research, because of these complexities, again we need to try to make sure we avoid either overprotection or under protection. And we should be very keen to pay attention to what expertise we have on local IRBs and other places while saying we need to keep this a balance of efficiency. We don't need inefficient regulatory processes keeping us from good science and good studies, but we need to make sure we avoid under and overprotection.
So technologies like deep brain stimulation may in fact give almost magical results. For a patient with Parkinson's implanted with DBS, when they turn the stimulator on, their tremor will often almost disappear within moments.
However, in dystonia, where the person may have had a twist in their body for a long time, it will respond over the next year with therapy. And so we need to keep proper expectations in developing protocols ‑‑ Dr. Mayberg will talk more about the protocol issues ‑‑ as well as proper expectations in participants: you flip the switch, and it may take a long time to come on. You flip the switch, and it may also only potentiate therapy. For obsessive compulsive disorder, it may only give the brain plasticity that allows for the hard work of cognitive behavioral therapy or some other conventional therapy. So it may not be a quick fix, and we need to recognize that the stimulator may then lead to other hard work that we have.
And finally ‑‑ this is the slide where I connect most with Dr. Mayberg ‑‑ she has some interesting and important research that makes the point that we sometimes have direct and indirect kinds of methods for getting at brain diseases; for instance, in her study in depression, differentiating who might be helped with a pill and who might be helped with cognitive behavioral therapy. We need to take into account that each illness may have a different modality that it responds best to.
I said "magical" earlier, and there is a natural progression. There are studies that look at neuro-stimulation in the media, and oftentimes the coverage is overly optimistic or only highlights the miracle kinds of stories. This creates a real barrier to getting the best informed consent, because people come with these expectations: "But you have to say that; I read on the Internet..." So this may in fact give us a new obligation, not just to give an evenhanded approach but to actively dispel the myths we know are being propagated in the media.
So we love vulnerability; we always need to talk in terms of vulnerability ‑‑ not just that someone is vulnerable, but why someone might be vulnerable, for medical reasons, for judicial reasons, for financial reasons, and what they are vulnerable to and need protection from. And our colleague, Joseph Fins, in a number of places talks about the need not to disadvantage those that we decide are vulnerable for some reason ‑‑ not to disadvantage them from getting therapies, and from research on therapies, that could help their lives. And just because they have a mental health diagnosis does not necessarily make a person more vulnerable. Dr. Appelbaum, who will speak later, demonstrated this with those with moderate depression years ago. We need to be careful not to make that kind of assumption.
Vulnerability also plays into the issue of what harms or risks are too much to ask of a research participant. I was on the Data Safety and Monitoring Board for the DBS study in the minimally conscious state, and we were faced with what you do when you have a patient who might have a kind of emerging awakening: if the protocol calls for the stimulator to be turned off, he may lapse back into the non-awake or minimally conscious state. Is that ethically justifiable?
Further, some mental illnesses, severe mental illnesses, have a real risk of death, maybe there are terminal mental illnesses. And so we need to be careful as we decide whether it is justifiable to put that population at risk for death in research, that we don't pass it off as something very simple.
We need to avoid assumptions about what patients and research subjects actually want. In one study, Cynthia Kubu and I interviewed patients with Parkinson's disease before they had deep brain stimulation as regular therapy, and then six months later. And surprisingly, at six months they wanted less control of the programming in their head than they did at the outset, which may go against the common thought: well, they will want more control once they see how easy it is to program. If we hadn't asked, we wouldn't have known. So we need to ask the stakeholders what makes their life better, and create outcomes that actually address what will make those things better.
A patient I saw with intractable pain was being evaluated for a cingulotomy ‑‑ it is not just for obsessive compulsive disorder; sometimes they will do that for intractable pain. All he wanted was to be able to have his grandchildren on his lap. It turned out that after the surgery he could have his grandchildren on his lap without pain, but he no longer cared. So we need to be careful about which kinds of outcomes we choose.
Patients as collaborators: patients are often asked for subjective reports, unlike in many other kinds of therapies. We need to be respectful of this effort and of this kind of self that we are studying.
So, the points that are connected together that I want you to think more about and discuss perhaps: We need to be careful not to try to oversimplify. These are complex issues and complex circuits. Not all brain diseases are going to respond similarly, and we need to be attentive to that in our overview and our expectations that are set. We need to actively dispel this mythology that gets presented in movies and in reporting of the magic of these technologies. When we talk about vulnerability, be clear about what assumptions we are going to make so we're fair. And, finally, let's find new ways of making these patients our collaborators as participants.
Thank you.
DR. WAGNER: Thank you. Our final panelist is Dr. Helen Mayberg. She is professor of psychiatry, neurology, and radiology and the Dorothy Fuqua Chair of Psychiatric Neuroimaging and Therapeutics at Emory. So she and I are colleagues. She has tried in vain on multiple occasions to teach me about biomarkers and neuromodulation, but she also had some professional successes.
(Laughter.)
DR. WAGNER: She heads a multi-disciplinary research team studying brain mechanisms mediating depression pathogenesis and antidepressant treatment response using multimodal neuroimaging, and was instrumental in the development of deep brain stimulation for treatment-resistant depression.
Dr. Mayberg is a board certified neurologist, member of the Institute of Medicine, active participant in a wide variety of advisory and scientific activities across multiple fields of neuroscience. It is good to see you here, Helen. Thanks for joining us.
DR. MAYBERG: It is an honor to be here and to share some thoughts on what has been a very interesting and eye‑opening day. I was asked to speak to research design considerations, and it is actually much less fun and much less provocative because it is fairly straightforward, but I think that in the context of the Brain Initiative, the DARPA initiatives, and things that are going on right now, it is an opportunity to think about what goes into designing these experiments a bit.
I also have some disclosures because I do work in this area, and did develop and have intellectual property related to deep brain stimulation for depression.
So, I think the big picture of all of this really is modulating brain circuits, which are really what we are talking about, with invasive devices. We now start to call these disorders circuitopathies or connectopathies ‑‑ there are all kinds of interesting new words for them ‑‑ but it really is based on the assumption that we can define brain circuits in human beings, not just in worms and not just in mice. And what that means is that with imaging we actually have more information than we want, and we are trying to reduce it down to the necessary and sufficient amount so that we can think about why we might want to invade the brain and tune it in some way, where we might do that, how we might do that, and who we might do that to.
And we have lots of examples already: movement circuitry, mood and motivational circuitry, memory circuitry. I have a few on the slides that give potential nodes in obviously over-simplified circuits, but these are ways in which not just theoretical but actual real-life applications have already been done, and you have heard from both previous speakers about those examples.
So I took the prerogative ‑‑ a little self-centered ‑‑ of setting up the idea of the mechanics of what you would have to do if you wanted to actually design an experiment to implant a device of any kind in the brain for any condition, without any preconceived notion. To me, if you can meet these criteria, regardless of the diagnosis, I think it is fair game if there is proper informed consent and there are approvals to do it.
And I think the mechanics are that you have to know where in the brain you are going to place something, you have to have an end point of what you think should occur, and you have to know who it is appropriate to offer that intervention to. To answer those three basic questions, you have to have some idea of the illness or the behavioral circuit, and you need to know what part of that circuit is necessary and sufficient to change to get well. That might entail predictive biomarkers of who to implant. You might want metrics of target engagement. And, at the end of the day, you need some way to test, analyze and refine.
You actually need a human closed loop, not just a device closed loop, where you have an idea, you have a hypothesis, you design an experiment, you try it out, and if you aren't as smart as you thought you were with an implanted device, you have the possibility to go back, iterate, refine and perhaps continue to provide benefit and make progress ‑‑ but equally to know when your idea actually should be abandoned. And the question is how to know the difference. But it will at least set up a framework to be able to go in either direction, a go or a no go.
And I think the caveat is that kind of goal has really got to balance the risk/benefit tolerance, which is going to be very different for different indications.
So I've made a list using brain stimulation with very conventional devices, but this applies to the initiatives that want to implant multiple devices in patients, as is being proposed in the DARPA SUBNETS and RAM initiatives that I think you've discussed, or to any new or novel device innovation that might be developed by industry: the strategy is always going to be based on the pathology of interest.
So, if you are interested in just stopping tremor, you shouldn't worry about other symptoms that might be part of a syndrome but that aren't the target that you are going after. If you are thinking of something like a degenerative disorder, maybe your goal is if you can't affect a cure, that at least you can slow progression. And if you can buy back time, reverse side effects, that is a reasonable goal.
If you have an acquired lesion, as we talked about in both previous talks, maybe you want to just facilitate plasticity, and stimulation can do that in various ways. If you have an episodic disorder, maybe you want to prevent or interrupt the onset of symptoms, whether that is a panic attack or a craving in addiction, or, most obviously, in epilepsy, where a device has really just been approved to do that.
But you can equally start to think about what we want in a developmental disorder. Maybe there is a way that you could evoke pruning of a non-pruning system in something like autism, if that turns out to be a mediator of the pathophysiology. And in schizophrenia, quite frankly, I am not sure what I would propose to do, but I offer it because if one can strategize about what the pathology of interest is, one can see whether electrical tuning might have impact.
I think the second broad issue to address is the exceptional needs around implanted devices. And this is where we do not want to throw the baby out with the bathwater. Clearly ‑‑ as Dr. Ford articulated ‑‑ the risk of surgery has to be weighed against the risk of the disorder. In my own work in treatment-resistant depression, by the time you've failed electroconvulsive therapy, you are at the end of the line, and the risk of suicide is extremely high. And so, again, the risk of an implanted device has to be weighed against the likelihood that you will kill yourself. And I think that is true for other disorders.
You equally have to consider something that may be less obvious: just getting rid of a symptom or effecting change clinically has to be weighed against how long that benefit is realized. If you can do something that lasts a week or a month or maybe only six months, does that justify implantation of a device, or the cost of that device, in a changing health care market?
And I think, again, most important to design is that you want options for modification based on new data once implanted. The idea of just withdrawing a pill that you have given for 12 weeks when it is not shown to be effective is not relevant when you have an implanted device, and there is risk in removing it. You just might not have gone long enough. You might have flexibility in terms of where or how you stimulate once implanted, so that even though you didn't meet your FDA end point, the opportunity to still give benefit has to be explored. And that means we need new ways to consider our design, our outcomes and our measures of efficacy.
And I think, again, defining outcomes becomes key. Do I need a cure? Can I get better? What is better, or well? How long should it last? Is slowing progression and reducing the need for meds enough? It might be; it depends on the disorder. If I use DBS as adjunctive treatment, I can enable cognitive behavioral therapy that I couldn't do without stimulation. That doesn't mean you only needed therapy. No, you needed therapy plus stimulation; they are synergistic. And study design for the FDA ‑‑ that is very hard to proceduralize. Not only do you have to do rigorous surgical implantation and end points, but now you have to enable a team that delivers both the therapy and the device. That is a lot for industry to take on as their responsibility. And obviously, developing target engagement biometrics is going to be key.
I think that, you know, if you ask the community now where we are, you get a different answer depending on where you sit. If you are like me and mostly do your own sponsored IDEs and research, you say, "We're good. We are making progress. We have small samples. We are getting grants. It is going along fine." On the other hand, if you are in industry and went to a pivotal trial that failed, you are kind of going back to the drawing board. We have a lot of enthusiasm for some of the new initiatives, and engineers are building new tools, but we have got to have a construct and a platform to do these going forward. And I have talked about that.
Critical, I think, is the transition from experiment to treatment. When is an experiment no longer a study? When does it become a treatment? And if it works for you and a trial ends, are you better? It is sort of an existential question ‑‑ but not for the patient.
And I want to make that point on this slide from the perspective of patients, from the point of view of depression. It is an intractable illness from which you cannot imagine ever being free. And so if you were to be in a trial and to be better: if it is safe, why should I have to wait for the results of the pivotal trial, whenever that may be, 10 years from now, seven years from now? How good is good enough for me to have access if it is safe? I have the money. I'm willing to take the risk. What do I have to lose?
And I think this is a cultural shift we need to facilitate these communications, not just FDA in isolation with industry, not just physicians in a silo of research, but we all have to get together to understand what our goals are, how to do this ethically and how to meet the needs of patients who are suffering.
And I have this as a summary: Know the biology, know your outcomes. These methods are not turn‑key. Don't treat them like that in clinical trials. We need new standards for success and all or none is probably no longer meaningful. And if we can meet those criteria, maybe we need to think about what we offer patients beyond randomized clinical trials or humanitarian device, et cetera.
Thank you.
DR. WAGNER: Helen, thank you very, very much. In fact, thank you all. You put your finger on it, a question that I have here at the end, but also, Dr. Ford, it is this question about the distinction, particularly when we have ‑‑ you used the word "exceptional," and we struggle with the exceptionalism of neurological investigation and especially neurological therapies. How do you answer the patient's question when the opportunity to do the work that you are doing is occasioned by desperation rather than by a carefully planned deliberative process of scientific method?
DR. MAYBERG: So, you know, this is one of the hardest things one faces every day as a clinician doing research: to remain scientifically agnostic, empathetic but neutral. And then there is the publicity, as Dr. Ford kind of talked about. CNN just re‑ran a story they did on our DBS procedure last week in light of Robin Williams' tragic death. And, again, it just promotes many people thinking this is available. They don't listen to what Dr. Gupta had to say. And it just brings this to the forefront again.
You know, the issue becomes what I tell patients is that we have good evidence in our hands that this can be effective. What we do not know is who it doesn't work for, which means that this remains an experiment. It takes us three months to enroll patients to make sure that they remain ill, that they have failed all other efforts. As desperate as patients may be, the appropriate ones are actually in a state of purgatory. It is not an emergency even though in the big picture, it feels like one.
And so, again, trying to actually have patients appreciate that science moves slowly is very hard. But at the same time, if we get ahead of ourselves, we do no one any favors. So I think when you speak with patients, and there is communication, patients can come to understand that. But it is a daily difficult road to even self‑monitor ourselves, to collect the data, demonstrate what we do and what we don't do, and to set appropriate expectations for potential participants.
DR. WAGNER: What I'm going to work for, and since I want to pass it around here of course, is are there general principles to extract from the individual conversations that you have with patients and the individual issues that ‑‑
DR. MAYBERG: I will just comment on that. I will try to be brief. I think that what I've learned in this is that one has to listen to the patients. He stole my line, Paul. I tell the patients, I say, "You are our collaborator. If you don't tell me what happens, and you think you are trying to please me, you are pleasing no one and you are not helping yourself, and you won't help the science."
But that is very hard to proceduralize. So if you talk to Dr. Pena, how do you actually put that into the rule book for FDA in a pivotal trial? And that means that we have to do research before pivotal trials and actually try to understand and listen to what we're doing.
The exceptionalism, and that word is a hard word, and I did think about the double entendre of it, is that it isn't really exceptional. If this is the way that we need to affect change in the brain with an implanted device, then so be it. It's not really exceptional. It's exceptional in that it has an opportunity to be iterative and requires iteration if we make a decision and feel justified going forward. So I think we have to design things in a way that lets us do that in a systematic way.
COL MICHAEL: So one quick comment to two of you for making the comment about patients as collaborators. All my professional life I have been involved in HIV care and research, especially for vaccines. And the concept in our field, which I think we arrived at pretty early, was to make patients and other stakeholders, people at risk, collaborators not just in terms of clinical care but in terms of research. I think that has been enormously enabling for our field.
My question is: you mentioned, Dr. Mayberg, the field needs something other than randomized control trials or humanitarian device exemptions. What are some of these other remedies that you are thinking about?
DR. MAYBERG: Well, again, sometimes posing where the problem is is easier than posing a solution. I think that, just by example of the failed trials in DBS device work, the epilepsy trials that failed, the two depression trials that have been halted, picking the end point has been problematic, as has not taking advantage of the capacity to go longer. So if you set a three month end point or a six month end point, but by following patients long term you see that at a year the effect is quite clear, being able to include that, to be iterative, is something to consider in how to evaluate efficacy.
I think one of the things people have talked about is to treat everyone, and then have your randomized discontinuation be late, once people are well, and look for loss of an effect. It is very clear in most of DBS research, maybe not dystonia to the same degree, that if you discontinue the intervention, the efficacy goes away. And that's very true in depression. You are well. The battery is depleted. Patients get the dwindles. You replace the battery. You get the effect again. So you could build that into the design.
COL MICHAEL: As a follow‑up, couldn't you do that by designing your RCT based on clinical outcomes at a time?
DR. MAYBERG: Yes, so maybe I should have made my slide a little clearer: not something other than randomized controlled trials, but randomized controlled trials that are sham versus active at the beginning. And I think the big problem is "when" the end point and "what" the end point more than "who" and "where."
DR. WAGNER: I've got John, Amy, and Nita.
DR. ARRAS: Well, Dr. Mayberg just addressed my question, which was very similar to Nelson's, so thank you. Dr. Ford, I am always intrigued when people make the claim that sometimes less information or less choice is better than more, right. So you are floating this notion that some of these patients really don't end up wanting more control over whatever, you know. Could you just sort of share some of the reasons that they give for that preference?
DR. FORD: Sure. We interviewed 52 patients before and after. And, of course, the reasons vary by patient, but a couple of common themes emerged. One was an increased trust in the relationship that they developed with the nurse practitioner and neurologist who were programming them. So there was a development of relationship over six months of programming, and by then, by and large, most of them had seen an improvement. So you had an attitude of: if it is not broken, then why fix it?
And even the couple of people who at the outset disclosed to us, because we are not on the clinical team for these patients, that they thought they might like to experiment and play around with it sort of lost interest once their lives improved. By and large, this group showed a good improvement, as the majority of Parkinson's patients do. But they were engaged in other aspects of their lives, so it is almost as though their attention was now focused on doing things with their family. And so as long as it didn't cause them problems, keep at it.
But it was counterintuitive to me at first because I thought that it would seem simple. And for me, if something seems simple, then why not let me control it and see if I can offer to optimize it.
Our European colleagues sometimes, and even some in the US now, will give patients several settings for some of the diseases, letting them choose among different side effect profiles. But, yes, I was surprised.
DR. GUTMANN: Steven, I understood you at the beginning of your talk to say that the position that you can't read the mind is now, or is becoming, obsolete because of big data. And I don't quite ‑‑ I'm not sure what you want to say by that, but one thing that reading the mind means is that you could read Harry Potter off of a neuroimage with enough data on J.K. Rowling's brain. I mean that is the commonsensical, accurate view of being able to read the mind. It is not the same as saying ‑‑ I mean that is where the philosophical dispute is. It is not that you can't ‑‑ that J.K. Rowling's brain isn't causing the Harry Potter novels to be written. Everybody, at least in the philosophic scientific community, agrees that there is causation there. But we, as far as I know, and I'm not a producer but a consumer of this literature, are far, far, far away from being able to read a Harry Potter novel by imaging J.K. Rowling's brain.
And that's why people still believe, people who are firmly committed to science and causation, still believe that there is a big distance between big data imaging the brain and reading one's mind. There is a meaning to the Harry Potter novels that nobody has yet been able to impute to a brain image. And that is culture. I mean that is a big part of human life. It is not the same as reading early onset Alzheimer's, which we may be very close to doing, or indeed are doing, or something like that. Do you understand?
So, am I wrong, do you still believe that we're on the cusp of reading minds in that sense or you mean something more about diseases and things like that?
DR. SMALL: No, I didn't mean exclusively diseases for sure.
DR. GUTMANN: Okay, okay.
DR. SMALL: I certainly did not. I mean the question whether Rowling's development of her book can be read over ‑‑ through some neural circuitry, the development of the ideas, the writing of that over time, I think you are right. I mean that is a little preposterous. We don't even know to what extent all of that remains in her memories, right?
DR. GUTMANN: But that was her mind, her brain and her mind, I mean her ‑‑
DR. SMALL: At some point in time.
DR. GUTMANN: Her brain produced those novels, no doubt about it.
DR. SMALL: Her brain did produce those novels.
DR. GUTMANN: But reading her mind would mean reading the words that she wrote in that book, and that's ‑‑ I mean I'm not making a small point here. It's a big point about ‑‑
DR. SMALL: I'm not sure. I'm not sure you are extending this to a point that is not ‑‑ would never make any sense. I mean to what extent ‑‑ do you think that she knows ‑‑ if you asked her, that she knows all of the words in these books? I mean ‑‑
DR. GUTMANN: That's not the point. The point is that when you say that you, as you said and many people say, that we are at the cusp of reading people's minds. People's minds have produced Harry Potter novels. They have produced great symphonies. They have produced all kinds of dialogue that comes on to take a very rich meaning in our daily life, and we are just very, very far away from being able to read those from any kind of data about brain imaging.
DR. SMALL: But we are far away from some of them. I mean what you mention is certainly ‑‑ to me, it is ludicrous to think we could do that in the near future.
DR. GUTMANN: Okay.
DR. SMALL: On the other hand, suppose I had ‑‑ would you think this was plausible? I have an electrode in my brain somewhere, and I'm playing a video game. And two seconds before I make a decision, this electrode tells me ‑‑
DR. GUTMANN: Absolutely.
DR. SMALL: ‑‑ I know you are going to make a decision left or I know you are going to make a decision to the right?
DR. GUTMANN: Absolutely.
DR. SMALL: That is an example of mind reading, right? And so the question is can you go ‑‑ from that, can you go to probably not where you are talking about but to somewhere in between that.
DR. GUTMANN: But the reason I'm asking, and this is a very helpful discussion, is that it is important to be clear about what we can at this point read and what we can't because there is a huge amount of mind reading. There is a huge amount that the mind represents.
DR. SMALL: Sure.
DR. GUTMANN: That's all.
DR. SMALL: No, absolutely.
DR. GUTMANN: I just think it is really important because we are going to as an ethics commission, if we come out and say that neuroscience is on the cusp of reading the mind, that means all the things I talked about, as well as whether somebody is going to hit right or left.
DR. SMALL: Neuroscience is currently able to make some predictions in the future from reading brain circuits, including that experiment I just mentioned which was done by Itzhak Fried at UCLA where he could actually tell in a video game situation whether the person was going to do one thing or another. And my point with the high performance computing is we can increase the complexity of those sorts of predictions, both in disease and in not disease. And some of those can be extremely helpful, and some of those are potentially subject to abuse.
DR. WAGNER: Maybe I can also tie two of these thoughts together and ask for your advice. Part of the presentation, your presentation, seemed a little contradictory also. When you talk about it, if one assumes you need to know all about every connection, and some of the numbers you showed, ten to the hundredth, very easily one can imagine that if you spent as little as a femtosecond on each connection it would take you more time than the life of the universe to have touched each one, right? So that makes it just sound so, so overwhelming.
But getting back to John's point about sometimes less is more, isn't there quite a bit of conversation today about looking at groups of networks, the networks themselves rather than their connections? It would be like saying if we had to evaluate all the chemical bonds between the ink and the paper in Harry Potter before we thought we could understand the novel, that would be wrong, right? And that would be an overwhelming ‑‑
DR. SMALL: So, Jim, yes, we are getting to a very, very old discussion of mind/brain to some degree, but my point wasn't that you have to read every single one of these connections. My point is that networks of neurons working together, regions of the brain working together, sub‑sets working together are what are implementing our mental functions, these networks. Some of them can be relatively small. Helen showed a couple of networks that although they are oversimplified, okay, they do reveal a lot. And in some cases you can reveal a lot by taking ‑‑ by making a phrenological fallacy, as I call it. You can ‑‑ neurologists for hundreds of years had some success treating disease by assuming that functions were located one to one. As we get bigger and bigger, my point is, I believe that we are going to get computational capacity so that we can delve more and more into larger and larger and larger circuits.
Today, we did a study using the NSF network, okay, thousands and thousands and thousands of processors, where we looked at five regions of the brain, and all their interactions, in order to understand brain behavior relation in one area. My belief is as we can understand more and more and more such interactions, we will be able to infer more and more and more about the functions that are being performed by ‑‑
DR. WAGNER: And I think that is a safer statement than mind reading. That is all I think ‑‑
DR. GUTMANN: That's all ‑‑
DR. SMALL: Well, I took mind reading out of the popular literature. I mean you know.
DR. GUTMANN: But it is really very misleading because you said, and you planned to say, that Martha Farah said that we are very far from it, but we're not anymore.
DR. SMALL: But she was talking about lie detection of course, right?
DR. GUTMANN: Right, well that, again, that's another story but there is a whole realm of what the mind produces, which we live in and gives a lot of meaning to our life that we're very far from reading from a brain image. I don't have anything at stake in this. I mean I'm not entering into the old ‑‑ there is an old debate that has dichotomies. It is just that we as a Commission have been committed to trying to keep the hype down.
DR. SMALL: Oh, that's fine. Yes, no, I would never argue that we can read any significant portion of anyone's mind now or in the near future, but I do believe that we can read significant portions of intentions, of goals in the near future, 10 year time frame, certainly mood, and even action.
DR. FARAHANY: I want to add a tiny bit of disagreement to that, but then turn a question to Helen. So I took it to be that you were also making reference to work like Jack Gallant's work that reconstructs visual imagery in the brain, where he has been able to create a brain dictionary, essentially, using significant computer algorithms, that is able to do things with fMRI that reconstruct real time visual imagery or reconstruct language from the brain. And to be clear, so that we don't have any hype about this, these are very small scale studies where, for a few entirely cooperative participants, they have been able to recreate this. But it has been replicated by several other labs, where people have been able to recreate what a person was listening to, recreate what a person was thinking of in words, recreate visual imagery that they were seeing, or recreate visual imagery that they were imagining. That is a lot closer to the conversation. I wouldn't use "mind" because that invokes so much more, but that is a lot closer to the concept of reading visual imagery or words in the brain that I think people worry about. We are very far from being able to do that surreptitiously. We are very far from being able to do that without cooperation.
And you couldn't recreate an entire novel. It is going to be what you are thinking of at the time that you might be able to recreate. So I took that to be some of the kind of work that you are referencing.
DR. SMALL: Well, the successes now in that realm of reading brain images, reading EEGs and so forth and trying to infer what the person's doing, believing, thinking, most of the success in those kinds of studies has been with limited categories of information.
DR. FARAHANY: Although Jack's work is a little broader than limited categories. I mean that was hundreds of YouTube video clips that he downloaded and was able to reconstruct visual imagery of novel and random video images that he downloaded.
But I had a question for Helen. You had so many interesting ethical and scientific issues packed into the limited time that you were given, and there are a couple of things that you said that I thought were really interesting that I was hoping you could build upon. In particular, you talked about the kind of risk/benefit analysis where, for many people, the alternative might be that they are so far down the path that suicide is a realistic alternative. It raised for me the question, to push you a little bit, of how you are thinking about the risk/benefit, particularly given the last quote that you put up. But also, if you could just help me understand a little bit the informed consent in this patient population, given that for many of them this is a last effort after many things have failed. We expect that they have diminished capacity and compromised autonomy in some ways. And so how is it ‑‑ how does that work in this population?
DR. MAYBERG: So we were fortunate enough to collaborate with Paul Appelbaum and some other ethicists on using the MacCAT‑CR to actually evaluate, for someone who enters into a trial of a totally novel invasive procedure, what they perceive is going to happen, given the extremis that we require in our inclusion criteria.
What we found dispels one misconception about depressed patients: even those who are extremely, extremely ill and disabled are quite rational. They are rationalists. It is, look, you know, this may not work for me. But they care more about: can you guarantee that you don't hurt me? I can deal with the fact that you may not get me better, but the idea that you would make me worse by giving me a stroke or a hemorrhage or a seizure or a heart attack ‑‑ they are very clear on that. And the ones that actually don't consider that are probably not appropriate.
So my personal feeling is that when they ask the wrong questions, they actually are in the wrong state. So I've found that one just needs to proceed. I can remember my very first patient in Toronto, who said, "Look, you seem like a very well‑informed and committed physician. You showed me these pictures of what you want to have happen in my brain. You know, given what I've been through, it is unlikely to work, but if you might learn something, and I might benefit, it sounds as though the risk may be worthwhile."
And, again, you can hear that kind of personal balance and equipoise in entering into this. You can argue that patients with depression who are that ill almost ‑‑ they actually don't believe they can be in any other place. But deep down, what you can hear in a conversation with a patient, whether it is just getting the informed consent or talking to them as a clinician, is that they hope there is a chance for them. I think that capacity is preserved.
I just did want to make a point on this issue of the oversimplification, what you can read or not. It gets back to the issue of what we're trying to do in experimental procedures. And, again, I can only use depression. That is all I know about. I don't do anything else.
But what I've learned over now 11 years, the first implant was in May of 2003, is that by understanding the biology, we learn that we remove psychic pain. I'm actually starting to believe that is all we do with the stimulator. We turn the negative off. And when that release happens, which we can now be more predictive about with our imaging, so the science has gotten us there, then you actually meet the person underneath. Enabled by the machine, which allows them the capacity for plasticity, they actually have to do something now with a brain that works.
And so, again, I was thinking about in this conversation about the mind, I can't predict creative capacity. I can't predict what you will be able to do by looking at your brain scan or not on and off the machine. What I can know is that I can remove something that is an impediment. And what you do with now a brain that works is actually up to you. And I don't think I will ever be able to read what the capacity of the person's brain is, but I hopefully will be able to read what I did to the brain that allows them to be whatever they will be.
And I think in the trial design, thinking about that means that we're affecting the brain in stages. And just like in the morning discussion, you never go ‑‑ there is no such thing as irreversible in the brain. Every interaction changes the brain. So you have got to have big shifts to actually detect anything and make meaning. And I think we have to define the biology and the accompanying ethics, quite frankly, at every stage. And that was just something I wanted to add.
DR. ATKINSON: Yes, this is to Dr. Ford but maybe everybody. Your fourth point, the vulnerability one, I wanted to hear a little more about. You really were talking about how they might not be vulnerable and that we should be careful about it. I wondered where you drew the line: when would you say this is a vulnerable patient, and how would you make that decision, or decide they are not, more specifically?
DR. FORD: So I think that the first step is to ask yourself what aspect of vulnerability we are talking about. Is it that they are desperate because they have failed all treatment? I interview, as part of the safeguard to informed consent, patients with thalamic pain, which is a central pain that starts in the brain because of stroke, rather than in the periphery. It feels as though it is out in the arms and elsewhere, but it is only central. So the first step is: do they have other medical options, and maybe they are vulnerable just by that fact. Maybe they are vulnerable because they lack the education or the educational level that would allow them to protect against being coerced.
And so one ‑‑ during an earlier response, one safeguard is perhaps to have a few people outside of the clinical team come and help them with that education as well.
So it is what elements are putting them at vulnerability, not whether they are in a vulnerable category, right? Because in our next panel we will hear Dr. Menikoff, who has nicely said several times that in the regulations, it is just a "for example." Those aren't the categories; it is "for example, these." It doesn't get at why they are vulnerable, what they are vulnerable to, and then whether they do need protection or whether we are being overprotective or not respectful of their autonomy by putting in safeguards.
DR. WAGNER: Barbara and Paul, thank you for that segue to our very next session. We will welcome this panel back for the roundtable of course. Thank you all for a very interesting session.
(Applause.)