Social Responsibility, Risk Assessment, and Ethics
Date
September 13, 2010
Location
Philadelphia, Penn.
Presenters
Bryan G. Norton, Ph.D.
Distinguished Professor of Philosophy, School of Public Policy, Georgia Institute of Technology
Jonathan D. Moreno, Ph.D.
David and Lyn Silfen University Professor; Professor, History and Sociology of Science; Professor, Philosophy; Professor of Medical Ethics, School of Medicine; University of Pennsylvania
Transcript
Amy Gutmann:
Could everyone sit down, please, so we can get started? Thank you.
This session is entitled “Social Responsibility, Risk Assessment, and Ethics,” and we will begin with Bryan G. Norton, who is the Distinguished Professor of Philosophy and Science and Technology in the School of Public Policy at the Georgia Institute of Technology. His current research concentrates on sustainability theory and on problems of scale in the formulation of environmental problems. He is the author of several seminal books on the topic, and he served on the Environmental Economics Advisory Committee of the EPA Science Advisory Board as well as on the Governing Board of the Society for Conservation Biology. He is also a member of the Advisory Committee for the MacArthur Foundation Advancing Conservation in a Social Context Project. Dr. Norton, we’re very happy to have you here today.
Bryan Norton:
Yes, and I’m very happy to be here. Thank you very much for the invitation. I’m going to speak from the point of view of someone who advocates the protection of biological diversity in situ, in the wild.
Amy Gutmann:
Use the microphone close up. Okay, thank you.
Bryan Norton:
And I’m going to speak about only one part of a range of concerns here. Jonathan will take over and talk more about risk. What I’m referring to here is the escape problem; that is, what kind of effect might we find?
I’m going to assimilate that problem over to the risk assessment side and try to concentrate more on a broader consideration of how the development of synthetic biology is likely to affect our conception of life — a topic that was broached in the last session by Dr. Caplan and Dr. Gutmann. What I’m going to do is this: I have some longer remarks, which I believe you have, but I’m just going to go through the executive summary, add to it a little bit, and try to stimulate some discussion.
The first thing I want to suggest is that there is a worry here, and it’s a worry about how a new field, a growing field like synthetic biology, develops, as it should, its own conceptual frameworks and its own vocabularies and so forth. If that language were to become universalized, or to move out into our consideration of life more broadly, it may well be a real threat to those of us who feel that we need to bring the public along to protect biological resources in the wild. So, my concern here is almost conceptual in a sense. And I think most biologists, synthetic or otherwise, would recognize that synthetic biology is not going to provide substitutes for real living species. It is, rather, going to go in a somewhat different direction. But the fear that I’m going to talk a little bit about today is that, as the conversation about synthetic biology becomes more general, could it be that we will start to think of life as collections of biobricks and lose some of the mystery that came up in the last session?
I’m going to start by trying to support a general claim. I’m not sure if it’s true, but it sounds true to me. And there are people here who know so much more about this that I’m hoping to be corrected if I’m wrong.
But my first claim here is that synthetic biology introduces life forms that are new in a particularly important sense. Now, I’m not going to deny what was made pretty clear this morning, which is that in terms of techniques and methods we have continuity; that is, from recombinant DNA engineering and so forth, we’re not doing things that are that much different. What I would like to argue, though, is that, under certain developmental patterns of this field, we are going to have products which are new under the sun. Okay?
And let me explain what I mean by that. We all are well aware, through plant and animal breeding and, more recently, genetic engineering, that we have modified life forms, we have mixed life forms. But even in the most hybrid kinds of products, like hybrid corn and so forth, we always start with an organism and we add something from another organism. Both of those organisms have been in the wild. Both the donor and the receptive organism have an evolutionary and an ecological history.
We are now going to — I think, if I’m not wrong listening to some of the projections that we heard this morning, I think we’re possibly facing a situation where we will have species — well, life forms, let me say — life forms that have no evolutionary or ecological history. Now just think what that means. A species in the wild is a complex result of forces that we don’t even begin to understand. And some of these forces have been in effect for billions of years. Okay?
Now, if we’re going to do that in a laboratory, even if it takes ten years, we have greatly contracted the processes by which a life form comes into existence. I’m not sure how important — I’m not sure how that will affect us. I guess I just want to speculate and sort of give some of my worries about what might happen.
So, there are three differences that I think make this an important distinction in this new kind of product. I mentioned two, the evolutionary and the ecological histories, that are lacking in a sense. The third thing is sort of interesting here — and I don’t know, again, how important it is. But notice that the development of a living organism that has no evolutionary history, no ecological history, would really confound our very notion of a species. Right?
So, suppose Venter or someone else creates a living organism that reproduces and has a life, hopefully, at least in the beginning, in a laboratory. Is it a separate species? Is it a new species? Well, there’s only one empirical base for all of our taxonomic work in biology. That is the species. We have some empirical basis for separating species, not subspecies, not other taxa. A species is a group of organisms which interbreed in their natural habitat. That’s the anchor for our whole taxonomic system. And now we’re going to have a set of life forms that couldn’t possibly, as far as I can see, fit into that definition.
Now, maybe all that means is that we’re going to have wild species and then over here we’re going to have life forms. But those two will not be, as far as I can see, easily reconciled. What to make of that? I’m not sure. I just thought it was worth getting out on the table.
Now, so let me move now to the second part of my presentation, which is to just raise three concerns which I think have some basis in the original argument that we really are going into some pretty new territory here.
So, now, starting with part two: advances in synthetic biology may or could negatively impact the social goal of protecting naturally occurring biodiversity. This is an area that I have been working in for a long time, and there is an urgency to this problem, and we do need public support for the idea that protecting species and other organisms in the wild is important. We’re learning a lot about that. And one of the things that we are learning is that our traditional methods of saying, "Okay, put a fence around it and protect a reserve or a national park," won’t cut it. We really need to pay attention not only to those sources of genetics, perhaps, that are in wild places; we also need to pay attention, as conservation biologists would say, to the matrix; that is, to the whole landscape. And the way that parks and reserves are placed can have a huge impact on their success. For example, some people would argue that you need corridors among various parks and so forth. Well, that’s going to affect a lot of property owners who may not want to be affected. So, we need public support, right, and we need it desperately.
And I guess the first question then is, will confidence in our ability to create life forms undermine concern for naturally occurring biodiversity? There are several ways this might show up. Let me just give it a name. I’m just going to call this the shortcut problem. Right?
Now, we have learned that it’s very difficult to protect wild species, especially complex ones such as predators — very difficult. And so it could be a shortcut. Right? We won’t worry about losing one. And, if we lose it, we’ll just get the lab, get the synthetic biologists, to crank up another species that might be close or that would be functionally similar, perhaps. And I know that most synthetic biologists would not say this. But what I worry about is how the things that they do become interpreted by the general public because, as I say, we really need to have people recognize the importance of wild species.
One of the strongest arguments we have, of course, for strong species and biodiversity protection is the irreversibility argument, the argument that, if you cause an extinction, you have lost. Right? End of game. No more opportunities. Well, I worry about people who will say, "Oh, well, they’re creating new species in the laboratory over there. Why would I worry about losing a predator species? We’ll just go get another one." I think that would be a disastrous outcome. I don’t think, again, that synthetic biologists are saying this. I’m just saying that I worry about the message that people are hearing. And I worry about it because you can see that synthetic biology is creating a whole new vocabulary for talking about life. I mean, biobricks were not around even when I started working on biological diversity. Right? The idea of a biobrick, the idea that a species or a life form can be just made up out of bits, is a new idea. And if people pick up on that, we may lose one of our strongest arguments — that is, the irreversibility argument — the argument that causes us to say that there’s great urgency in the tasks that we face.
Secondly, will synthetic biology’s emphasis on parts, wholes, and artificial units of life encourage the inaccurate model of biodiversity protection as maintaining an inventory of biological units? Now, again, this is not a new problem. We faced this in conservation biology before, and we still face it, because it turns out that probably the best measure we have of biodiversity, and whether we’re protecting it, is counting species. At the same time, leaders in the field would never say that protecting biodiversity is just protecting species. We use it as a shorthand because you can count species reasonably well, and you can’t count things like complexity and what we call cross-habitat diversity; those are at least much harder to count. So, we tend to count species. We already have that problem. I call it the inventory view. People tend to think of it as, "Okay, it’s like a library," or "It’s like a repository," as if we can always go back to the repository and get another part and so forth. And, again, I think that that’s dangerous thinking from the point of view of conservation biology.
And along with that, of course, is the growing commodification aspect. Right? I mean, we have had a session on intellectual property rights and so forth. And what are intellectual property rights in this area other than commodification — or at least, that’s using the word very broadly — but it has a huge economic impact, and it tends to increase the likelihood that people will start to think that way.
I have only one more point, if I may finish. Finally, how will the creation of new forms of life affect our current understanding of the values that are attributed to wild organisms, species, and ecosystems? And here, traditionally, we have had two strong arguments. One of them is the instrumental argument, right: if we lose wild species, we may lose opportunities to create new drugs, new technologies. And, furthermore, we recognize their importance, the importance of natural resources more generally, in our economy.
So we have the instrumental values argument. We also have the more non-instrumental argument. Some people would say that non-human life forms have intrinsic value. Other people, economists, talk about what they call "existence value." So there are a lot of different ways people talk about these non-instrumental reasons. Sometimes spiritual value gets thrown in there. There are lots of different words for that.
But the point here is that we have these two pretty strong arguments: on the one hand, protecting things which may have economic impact in the future, and on the other, protecting things because they have a special kind of value. And the worry here is that maybe both of those arguments start to weaken, or at least people might see them as weakening, as we emphasize the biobricks aspect and the substitutability and so forth; that would affect the instrumental value side of the argument. And then, furthermore — and we had a wonderful discussion of it in the last session — it’s perhaps disarming to those people who think of nature and natural living things as somehow mysterious and beautiful and beyond our ken. You start to eat away at that sense if you start thinking of a species as biobricks in various collections that humans can control.
So let me summarize, then, and close by saying that I don’t see synthetic biology itself, on this way of thinking, as the threat. I think that the threat is that we will develop a conceptual framework, a way of thinking about life, which could be quite damaging to another area of important public policy, and that’s the protection of biodiversity. So I hope that the Commission will pay some attention to that problem. Thank you very much.
Amy Gutmann:
Thank you. Jonathan D. Moreno is the David and Lyn Silfen University Professor of Philosophy, Medical Ethics, and History and Sociology of Science at the University of Pennsylvania. Dr. Moreno is an elected member of the Institute of Medicine of the National Academies and has served on numerous National Academies committees, including as co-chair of the Committee on Guidelines for Human Embryonic Stem Cell Research. He is a veteran of two past national bioethics commissions and is a past president of the American Society for Bioethics and Humanities. Dr. Moreno, thank you for joining us.
Jonathan Moreno:
Thank you very much, Dr. Gutmann, and thanks to the Commission for inviting me. As Dr. Gutmann mentioned, I have had the opportunity to work as a staff member for a couple of your predecessors, and I hope it’s not self-serving to say that this is really one of the great things we do in American government: having these commissions in which experts and citizens can talk about these issues.
And I want to actually start this afternoon by unplugging this thing — well, I guess I won’t do that, but it does occur to me. So, this actually is a talk that I have never given before. Senior professors, like stand-up comedians, really don’t like to do that. We like to have the yellowing notes or the yellowing PowerPoint, as the case may be. And one does get a certain frisson when one gives a new talk, particularly for a presidential advisory commission. But to reassure you, it is a talk that I have wanted to have an excuse to think about for a number of years, because I think this problem of planning for remote risks is a very interesting challenge for people who like to think of themselves as rationalists and empiricists. And in a nutshell, it is because the N, by definition, is really small.
So, if you have a really small number of examples of very remote, but high magnitude events, what as a society do you do about that? And I think that Bryan sort of alluded to that as well. And it seems to me that in thinking about synthetic biology really — and I’ll get back to this whole recombinant DNA debate — this was the problem, and this continues to be the problem. It’s not that we really think a really bad bug is going to eat West Philadelphia, but it’s hard to prove a negative. By the way, there’s really good eating in West Philadelphia in restaurants. You shouldn’t have the bug.
So, I’m going to get sort of the foundation of this stuff out of the way — wow, this thing keeps ticking. First of all —
Amy Gutmann:
You can move it to the side, Jonathan.
Jonathan Moreno:
Yeah, I’m going to move it actually.
Amy Gutmann:
You can’t turn it off, because it enables us to have time for questions.
Jonathan Moreno:
Oh, it does. Right. And probably alarms go off if I do that.
Amy Gutmann:
Right, go ahead.
Jonathan Moreno:
No, I’m going to make John Arras abide by this.
So, first of all, philosophers talk about risk in two dimensions: the likelihood and the magnitude or severity. You may not like all of my examples here. But a high magnitude/high probability case would be, say, floods. A high magnitude/low probability case would be nuclear disaster. I’ll come back to the examples in a minute. It always amazes me that I don’t get hit by more bird droppings, because there are so many birds and I walk under trees a lot. But it’s a low probability and, although it offends my dignity, it’s not that severe.
There are natural risks, and there are anthropogenic risks. There are mixed cases like Katrina which I’ll come to in a moment. Suppose for the sake of argument that synthetic biology poses a risk that is a very low probability and a high magnitude. I’m not an expert. I’m not a scientist. I’m not a biologist. That seems to be what we’re really worried about here.
So how should we, as a society, plan for a very low probability/high magnitude risk? How concerned should we be? How much societal investment should there be? How should this be prioritized? This is not necessarily a moral question, though I think there are some interesting moral questions that Bryan also alluded to here that I will come back to. But it’s really a question for a society to decide about, how much concern we ought to have?
So just compare New Orleans and Amsterdam — the levees. The levees in New Orleans, if I’m not mistaken, based on what I have seen in the press, were designed basically by the Corps of Engineers to withstand a flood of the kind that could happen once every century. That seemed to me, as I think to many other citizens, to be way too low. In Amsterdam, they had a very bad flood in 1953, and they invested billions of dollars over the next couple of decades in making levees that would withstand a risk that would come only once in hundreds or a thousand years. And that’s a decision that the Dutch made. They obviously had to — a small country, low-lying, everything depended on their having really strong levees. We can argue about whether we should do that for New Orleans or not, but there are different ways to think about this.
In bioethics, I guess, the principal example has been Asilomar. You’ve heard about this before. Sydney Brenner, whom you’re going to hear from tomorrow, was there. Not all of the names and faces have changed in 35 years. My guess is that few, if any, of the scientists who were at Asilomar in 1975 thought that what they were dealing with was a high probability risk. But it was a low probability, a very low probability, risk that something would get out, something would get into the biosphere and would cause a lot of damage.
So you know the Asilomar story. There was a moratorium. I think the scientific community in general believes that was a good thing to do at a lot of levels, and it created a kind of precedent for thinking about these issues. But, still, the underlying philosophical question remains.
Also, of course, many of these problems are emerging risks. They’re risks that we have a hard time negotiating rationally because they are pretty new. And I’ve got this long definition here of an emerging risk from the International Risk Governance Council, which actually is an organization that’s now functioning well. It was formed largely by a number of nuclear engineers whose business it is to worry about emerging risk, to worry about the very low probability but high magnitude risk of nuclear reactors.
I was privileged to be at the second organizational meeting of the IRGC in Zurich in 2002. And it was striking that our host was Swiss Re, and the president of Swiss Re was our speaker — only a couple of dozen of us at the dinner meeting. And he pointed out that Swiss Re at that moment — this is February 2002 — was in litigation about their liability for 9/11 because they were a reinsurance company for the Twin Towers, for the companies that had insured the Twin Towers. And nobody had thought about — well, some people claimed they had thought. But at least Swiss Re didn’t think about the idea of using fully loaded passenger airplanes as missiles.
And the interesting question — and this is kind of a footnote, but I thought it was fascinating — was what would make a lot of difference to Swiss Re and, in fact, could have put them out of business: was this one event or two? If it was two events, it would have cost them so many billions; if it was one event, it was only half that, four billion. The court finally decided it was one event. It probably saved Swiss Re, and saved many other reinsurance companies from unmanageable liability, but that was not a risk that anybody thought about seriously in that business. So they had a very great interest in thinking about low probability/high magnitude risks, and they got together with the nuclear engineers to think about this at a general level, in areas other than nuclear engineering.
So here are some examples of anthropogenic emerging risks. It’s useful, I think, to distinguish — I was thinking about this earlier today — between the risks that emerged from invention as against innovation. Invention basically takes place in garages or in the laboratory. Innovation is what happens when you push things out and diffuse it into the society.
So, you know, Benjamin Franklin was a great inventor. Edison was arguably both an inventor and an innovator. And it’s the innovation phase that starts to cause us trouble. By the way, I do want to get involved in the God-playing and playing discussion, because I want to be a contrarian and say that playing is something we don’t actually want to discourage among scientists and inventors and technologists. Ben Franklin was playful, and it was his playfulness — not to put in a plug for Franklin here at Penn — but it is, in fact, his playfulness that was part of what gave him his imagination and creativity. It’s one of the things that we value about Ben Franklin, and one of the reasons that Einstein is embodied in Washington, outside of the National Academies building on Constitution Avenue, as a child — something that some scientists object to.
But there is a playfulness about science, and there is an interesting theology about God-playing and God as a gamer that I will not get into because my time is rapidly ticking away. But there is more to be said about playing and games. I say, as a non-theologian and with great fear and trembling, that I think Nietzsche figured out that God is the only entity for whom there is no boundary for playing and games. God plays, but God also makes the rules — so that is one difference between us and God.
But here are some examples of emerging risks that are anthropogenic. By the way, as many of you know, the FDA just approved for the first time genetically modified salmon, and that hasn’t been a big issue in the United States. GMO salmon will be a big issue in Europe, an interesting example of different cultures worrying about different emerging risks.
So, this is a table I have taken from an International Risk Governance Council report on emerging risks. And I just point out to you one of the issues that they raise — I think this is important — here under A-10: "We don’t do well at overcoming certain cognitive barriers to imagining events outside of accepted paradigms." So, I’ll come back to that in a moment.
The other point that they make, in another table in the same report, is under B-9: "We have trouble dealing with these things as a society because we really don’t maintain an adequate organizational capacity to manage risk." And, again, that’s partly because of what a number of people have pointed out is a kind of systematic bias we have about risk. This is a psychological and sociological observation. As individuals, we are bad at assessing personal risks.
I know you don’t do this as esteemed members of a presidential advisory commission. But if you are driving 60 miles an hour and you get that text message on your BlackBerry or on your iPhone, you think, you know, "I can handle this, but I hope the idiot behind me isn’t doing this." We are bad at assessing our personal risk. We generally think that other people are more at risk than we are. And societies also are pretty bad, for interesting psychological and sociological reasons, at assessing their risks.
And, again, it’s partly because of the very rarity, by definition, of low probability/high magnitude incidents that trial and error, the good empirical methods that we rationalists rely on, don’t work. And I think also there’s an argument to be made that, because these incidents are so infrequent, we lack the biological and cultural coping mechanisms to know how to deal with these things.
So, just kind of arbitrarily, I’m going to define these very low probability/high magnitude events as one case in hundreds or even a thousand years. I am not asserting that synbio poses even a modest likelihood of creating this kind of risk. I don’t want to end up on the front page of a newspaper tomorrow. But suppose it does present a very low probability/high magnitude risk. How great an investment should society make in planning a governance strategy for such a thing?
This is an example, and one could multiply these examples just on the anthropogenic side — the modified mousepox created unintentionally in a lab in Australia ten years ago. Clearly there will be more of these risks.
Let me now move to the extreme version, the extreme version as an existential risk. It’s interesting how many people, not for the most part philosophers, but technologists and engineers, are talking about existential risks these days. I’m going to define it simply as an extremely low probability but extremely high magnitude event. My friend and colleague at Oxford, Nick Bostrom, has defined it as an adverse outcome that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
I forgot to say at the outset of my talk that I know you’ve heard a lot of rich talk this afternoon, a lot of ideas. I’m the only thing that stands between you and relaxation and socializing. My goal on this talk was to scare the crap out of you so I would attract your attention for the last few minutes of the afternoon. I’m trying to do that now.
Why should we worry about existential risks? It’s interesting how very few philosophers have actually looked at this. One of the few — and I was mentioning this to John Arras before — was a first-rate philosopher, Steve Tillman, whom we lost last year. Steve wrote a piece 30 years ago that I found very intriguing, in which he said, you know, why should we worry if the human species, let alone all those other species, goes out of business? And, actually, Steve was much smarter than I am. His conclusion was, "Well, gee, it would be too bad. You know, it would be a real shame." That was the best he could do.
I can’t help but think that losing the Hippocratic Oath forever, losing The Apology, losing Shakespeare, losing Goethe, losing Lady Gaga, all of those would be really unfortunate. Right? I put Lady Gaga in that pantheon, by the way.
But I think there are secular and theological reasons to think that continuing the human project, even if you didn’t agree with everything on my list, has some intrinsic value. And I also think, from the theological standpoint, that continuing to express the divine and play out the divinity in the way that God wants us to has some importance.
So should we, in light of these considerations, establish the ultimate hedge fund? In fact, naturalistically, there was a remarkable existential risk 74,000 years ago, we’re told by the geologists now — the Toba eruption in what is now Sumatra. It caused such chaos in the climate that, according to recent studies, there may have been only 500 human mating pairs remaining on the planet at that point. Now we have, with alacrity, satisfied one of the commandments and rebuilt the number of human beings. I participated in that myself. But it is really quite stunning how close the whole business came to ending 74,000 years ago.
And now I’m going to end by citing some people who worry about this a lot. Ray Kurzweil worries about the singularity, the possibility that some great intelligence that we create — probably being created right here at Penn, as a matter of fact — will get out of control. He actually refers to it as the dual-use problem in biology. Sir Martin Rees, a great British physicist, is very concerned — in fact, his estimate of our making it through the next century is only 50 percent. He is one of those people who is very worried, as I am, about nuclear warheads. I’m frankly more worried about nuclear warheads than I am about synthetic biology. And I’ll just mention that the START treaty is being considered by the Senate starting again today, which I think is a good thing. I hope they can work it out. The British seem to be especially concerned about this, I guess maybe because they had a particularly bad time of it in the previous century as compared to the United States. And so Professor Hawking, whose name has already been invoked today, thinks we really need to figure out how to get off the planet to seed the universe, because we are creating so many extreme existential risks for ourselves.
So I’m going to conclude with what you would expect a professor to say: We need more research. But I actually think this is an area in which research is needed. I think there is a very good case for research on how we, as a society, address these questions and what the rational ways of thinking about existential risks are, because the N is so low, the examples are so few. It’s been suggested that the goal should be an acceptable worst-case scenario. I’m thinking here about the end of Dr. Strangelove, you know, where we send the good mating pairs down into the mines.
For anthropogenic risks, we’ll have to develop, I think in this century, new non-proliferation regimes. I think we’re going to have to think about pretty tough enforcement mechanisms in some ways.
And just in your books, I have asked that this particular piece, which I think is quite interesting again from the Risk Governance Council be included, and some of you may find it helpful.
So thanks to Nick Bostrom, my colleague at Oxford, from whom I’ve learned about these things over the years; the IRGC; and Jason Schwartz, proudly, our doctoral student in Bioethics and History and Sociology of Science at Penn, who is also working for the Commission. Thank you.
Amy Gutmann:
Thank you, and thank you for preparing something new for us. I am certain I speak for everyone in saying we appreciate it. We really appreciate that.
Jonathan Moreno:
It worked.
Amy Gutmann:
It worked. Yes, it worked. And you did it on time, too. And John Arras has the first question.
John Arras:
First of all, I do want to follow up on that and thank Jonathan for not faxing it in. That’s really terrific. All of us back in Charlottesville actually thought that you had peaked back in 2006. You know, it’s really nice to see that you found gainful employment in Philadelphia here.
Jonathan Moreno:
You know, I introduced John Arras 20 years ago at a talk in New York as the "grand old man of bioethics," so I’m glad you’re still around. It’s good to see you even older.
Amy Gutmann:
Okay. I’ll call it a truce. Do you have a question, John?
John Arras:
We’ll talk later, yeah. Well, yeah, okay. So, first of all, you know, you’re clearly highlighting the huge problem for us which is to try to come to grips with the proper attitude toward risk. Okay? And it’s a little disconcerting to look at the literature, which is very confusing. One of the articles in our booklet basically boiled it down to the virtue of being cautious. Right? You know, thanks a lot.
But my question is for Bryan. Okay? And, Bryan, this is purely to get a better sense of what you’re arguing, what you’re intending, okay, because I heard you saying that you weren’t really here as an opponent of synthetic biology, but you were raising red flags about the way in which our language is being used and concepts are being formed to maybe distort the way we think about living things and species and so forth.
So, one question that I would have for you is, do you think that we could proceed with the agenda of synthetic biology without running that sort of risk of distorting our language and how we might go about doing that? And the second question I have for you is, even supposing that synthetic biology were to pose some sort of a risk of increased commodification or a kind of reductionistic approach to life and living systems, how are we supposed to weigh that in the ultimate policy scales, I mean, especially when juxtaposed against those hypothetical risks are what many say are palpable, likely benefits from the research?
Bryan Norton:
Well, you’re quite right. I don’t mean to say that this would derail the progress in synthetic biology. I think I can answer both of your questions at once by emphasizing that what’s really going on here, and what’s troubling me, is that we have two very different conceptions of diversity. Okay? On the one hand, we have biological diversity as we understand it in the wild, which is tremendously complex, which is dynamic, which is constantly changing, which involves evolutionary forces, ecological competition, and so forth. And, I mean, I have spent 30 years trying to get my head around this version of diversity. All right? And I think that conservation biologists, and to some extent the general public, have sort of caught on to this.
Now the idea of biodiversity itself, we have some empirical evidence to show that a lot of people in the general public find this either just confusing gobbledygook or they may even have a really — they say, "Oh, this is the scientists trying to put something over on us again." You know, that’s the attitude. And this was an empirical study funded by Defenders of Wildlife but done by an independent group.
Now, so what came out of that was that Defenders of Wildlife at that time sort of said, "Well, maybe we need to shift or at least develop a concept." And what they found was they tested the concept of web of life, which is much more intricate, the idea of interrelationships and some degree of dynamism and so forth, and people react to that better than biodiversity. So there I’m just trying to sort of establish the importance of terminology.
Now, the second point I want to make is that when we have new life forms — and so far they’re coming slowly, but they might start tumbling out of the laboratories very rapidly at some point. Well, is that diversity? Are we creating more diversity? Well, the tendency would be to say, yes, but it’s a very different kind of diversity.
So the one recommendation I have, based on this set of concerns, I think, is somewhere along the line — and this Commission can perhaps do more than anybody else could possibly do for this — we need to have a discussion about the nature of living diversity, the relationship of synthetic diversity to that, and I think that needs to be a public discourse, and I don’t hear it now. And I hope that the Commission will be able to stimulate that conversation because my fear is that, lacking that, people will adopt, sort of in a lazy way, "Oh, well, that’s just diversity. And coming out of the labs, we’ve got so much diversity, we don’t need to worry about losing a species here and there."
And so that’s a conversation we need to have because the term "diversity" is legitimately used in both ways. But, to me, it’s very importantly different in the ways we think about and what we do.
Amy Gutmann:
Good. Thank you. Nita?
Nita Farahany:
Thank you both for these talks. And, Jonathan, particularly in thinking about risk and how we can approach risk, I want to toss at you a theory that comes from tort law to see if this is consistent with what you think of as a way to deal with risk.
So there’s a theory called disproportionate risk theory, or disproportionate loss theory, which looks at different categories of risk and tries to assess what would be the reasonable precaution to take under the circumstances. If it’s a very low probability risk, approaching zero, we say there’s no burden to do anything. And we can think of "burden" in this instance as a regulatory burden and a burden on individuals. If the risk is greater than zero but less than very large, then you take on a proportionate precaution. The last category is the most interesting one and deals with what you were talking about, which is, if you are dealing with high magnitude/low probability risks or high probability/low magnitude risks, or high/high, then you take on every possible precaution to avoid the harm that would result.
And I wonder if you would agree with that in the way you presented this. You presented us with we need to think about these high magnitude/low probability risks. Do you agree that the resulting burden would be that you need to take on every possible precaution? And this really pertains to your comments as well about biodiversity as potentially a high magnitude but low probability risk that would result.
Jonathan Moreno:
So, you’re a good lawyer because, of course, if I were to say no, then I would have to try to figure out what the criteria were for sorting out. And if I were to say yes, then that would be an impossible social burden.
Nita Farahany:
Right.
Jonathan Moreno:
So, I think —
Nita Farahany:
Not to be difficult.
Jonathan Moreno:
— I should probably just go to law school to figure out how to answer that. So one of the reasons that I wanted to end with existential risk is that this might be a special category because this doesn’t refer to any sort of entity or group or business or party or individual. This is about the whole shebang, as it were, of humanity. And perhaps there’s something unique about this particular category of risk that separates it out from the others.
Nita Farahany:
Meaning that we should take on every possible precaution or —
Jonathan Moreno:
Well, no, no. Meaning that this particular chain of events that would lead to threats to the survival of humanity is one that we should attend to, but so attending does not imply that a lot of other, lesser risks ought also to command the same precaution. This is sort of a unique category. I mean, there are people who are making this argument now: you would not infer from worrying about the survival of humanity for the next X hundred years that every risk, every remote risk, would be in the same category of concern as that one, that there is something sui generis about that.
Nita Farahany:
Thanks.
Jonathan Moreno:
I haven’t persuaded you.
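The three-tier precaution scheme Farahany describes can be sketched as a small decision function. This is a hedged illustration only; the function name and the numeric thresholds are assumptions for the sake of the sketch, not anything stated in the discussion:

```python
def precaution_level(probability: float, magnitude: float) -> str:
    """Return the precaution tier for a risk, following the
    disproportionate-risk scheme described above.

    probability: likelihood of the harm occurring (0.0 to 1.0)
    magnitude:   severity of the harm if it occurs (0.0 to 1.0)
    All thresholds below are illustrative assumptions.
    """
    NEGLIGIBLE_P = 1e-6   # "approaching zero": no burden at all
    HIGH_P = 0.5          # assumed cutoff for "high probability"
    HIGH_M = 0.9          # assumed cutoff for "high magnitude"

    if probability < NEGLIGIBLE_P:
        return "no burden"
    if probability >= HIGH_P or magnitude >= HIGH_M:
        # high/low, low/high, and high/high all land in this tier
        return "every possible precaution"
    return "proportionate precaution"
```

On this sketch, a high magnitude/low probability event falls into the maximal-precaution tier, which is exactly the burden Moreno resists treating as automatic for existential risks.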
Amy Gutmann:
Nelson, and then I’m going to go to the audience.
Nelson Michael:
This is for Dr. Norton. So I’m just trying to clarify some of the assumptions that I think that I’m gathering from the comments that you made.
The first would be that, post whatever synthetic biology event occurs, at that point, it seems to me, they are almost assuming that there would either be no impact of natural selection on that organism or that there would inherently be a selective advantage, which to my view probably would be just the opposite.
Two is that at the current state or even at the projected state of the next several decades is that we’re going to even understand a fraction of all life processes and that, critically, we would understand what the genotypic and phenotypic association is going to be; and, thirdly, that then, despite all of that, we would need to know how to parse those processes into biobricks or modular biology.
So it seems to me that you would have to embrace all three of those assumptions, all of which are fairly low probability, to come up with the sort of dramatic worst-case/high-impact scenario on biodiversity: that somehow biodiversity would over time be less driven by natural selection, which occurs, you know, constantly in the environment around us, and that we would somehow be out-competed by what scientists could do, even in a substantial number of laboratories.
Bryan Norton:
I didn’t mean to suggest the risk that organisms created in a laboratory would escape and out-compete; I didn’t mean any of that. That’s what I sort of deferred when I said, I don’t want to talk about the risk of escape. I want to talk about the risk to our society of the ways we talk about life and biological entities. And so it’s a very odd risk, and maybe "risk" isn’t even the right word. It’s more a caution that, as we treat life more and more as something under our control in the laboratory, we come to see it more and more as something within our ability, something we can control, something we can replace. If we lose some parts, we can replace them, and so on. Those were the worries I was trying to get at, which are more conceptual; you might even say ideological.
But my point here is, the ways we talk are important for what we do. Okay? And in talking about life — I mean, if this conversation sort of takes over the conversation about biobricks and synthetic life and so forth, if that sort of swallows up our understanding of biological diversity in the wild and life as it evolved, that to me would be a tragedy and it might lead, in fact, to people being more careless and perhaps not worrying so much about some of the risks. But I was really more concerned about the ways it’s going to affect us in the ways we think.
Amy Gutmann:
Yes, please introduce yourself and then ask your question. Thank you.
Tim Trevan:
Tim Trevan, the International Council of Life Sciences. Again, not so much a question as to point you to a resource.
Our Council together with the Royal Society held a meeting on biomedical risk in February of 2009. Martin Rees and Nick Bostrom were both there. And the Royal Society actually published a report at the end of that which I think would be a useful resource for the Commission.
One of the gentlemen who couldn’t make that meeting has another very useful way of looking at risks, of which the very high impact/very low probability category is a subset. This is Andy Stirling at the University of Sussex. And what he does is take the two-dimensional grid of probability versus impact and look at confidence in the prediction of impact and confidence in the prediction of probability. So at the top right-hand corner, you have high confidence in both measures. That’s where traditional risk management, in the engineering sense, comes in.
When you know what could happen but not with what probability, you have uncertainty. And that’s the high-end problem. When you know something is going to happen but not what, you have ambiguity.
Amy Gutmann:
This shows you when graphs are really helpful.
Tim Trevan:
Yes. When you know something is going to happen but you don’t know what, then you have ambiguity. When you don’t know what’s going to happen and you don’t know whether it’s going to happen, you have ignorance.
And what he has done is look at the methodological approaches to addressing each of those types of problems. And, essentially, we have reasonably good methodologies for ambiguity, uncertainty, and risk management. Where we are really lacking in methodologies is in the ignorance sphere, and that’s where problems of intent in particular are very difficult to measure, because you don’t know the probability and you don’t know the impact.
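The two-axis scheme Trevan attributes to Stirling amounts to a four-way classifier. A minimal sketch follows, assuming a boolean encoding of the two confidence axes; the function and label names are illustrative, not taken from the meeting:

```python
def classify_problem(know_outcomes: bool, know_probabilities: bool) -> str:
    """Classify a risk problem by confidence in predicting impact
    (outcomes) and confidence in predicting probability."""
    if know_outcomes and know_probabilities:
        return "risk"         # traditional, engineering-style risk management
    if know_outcomes:
        return "uncertainty"  # known outcomes, unknown likelihoods
    if know_probabilities:
        return "ambiguity"    # something will happen, but unknown what
    return "ignorance"        # neither axis known; methodologies are lacking
```

The discussion’s point lands in the last branch: intent-driven threats sit in the "ignorance" quadrant, where neither probability nor impact can be estimated.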
Amy Gutmann:
Good. We’ll circulate that among the Commission members, the report.
Tim Trevan:
I can send an e-mail to the Commission, to the Royal Society.
Amy Gutmann:
Yes. Send it to Val Bonham, and she’ll make sure we all receive it. Okay? That’s really helpful. Thank you. Please, step forward.
Nancy Connors:
Hello. My name is Nancy Connors. I’m sorry. I don’t have Dr. Moreno’s slides to help me through this.
But I’m struck by the fact that we are all sitting here as a snapshot in time where we have an idea of what the science knows at this moment and only at this moment. And I am taken back to one of the few things of science that I really know well which is Alexander Graham Bell and his look at genetics and deafness long ago when he thought that deaf children came from deaf parents and when he said that deaf people should not marry deaf people because then you would just have deaf people, all their children. But now we know that most deaf children come from hearing parents and now we know much more about the genetics regarding deafness. So over the time since Alexander Graham Bell, even over the time of my lifetime, we have learned so much more.
And so how do we look at assessing risk when we are simply a snapshot, and we are limited in our knowledge?
Amy Gutmann:
John, do you want to take that, assessing risks when we know — actually, I think I’ll just make it: when we know that we don’t know as much as we will know in the future.
Jonathan Moreno:
Well, the Alexander Graham Bell case is wonderful because, as was said, his idea was really — he was concerned about helping people who had hearing impairments. He was the inventor, but not the innovator. And so whatever risks or benefits you think of apply to the international telephone system and its successor, the World Wide Web, this goes to the problem that a great inventor can intend one thing but the innovators do other things with it.
And it is precisely, as Bob Dylan put it, “Something is happening, but you don’t know what it is, Mr. Jones.” Right? So we didn’t know it was happening, and nobody could have known. This is a collective endeavor to create an international telephone system. It wasn’t intended by anyone in particular.
Amy Gutmann:
Are there other questions in the audience because, if not, I know that Commission members have other questions. But if you do, please step — please.
Harriet Bernstein:
My name is Harriet Bernstein. I’d like to thank you very sincerely for opening this to the public.
I just have a comment and perhaps an unbelievably chutzpahdic suggestion.
Amy Gutmann:
Translated, as you just did into different cultural lingos. Okay? Yes?
Harriet Bernstein:
I come from a tradition where there is no word for “ownership.” You cannot say in Hebrew, “I own a pencil.” You can say, “I have a pencil” but you can’t say, “I own it.” It doesn’t exist in the vocabulary.
So I’d like to throw out a suggestion: that in your deliberations, when you think of the scientific, the ethical, the moral, the social responsibility, you put into the equation a word we didn’t hear at all today, and that is the word “stewardship,” because perhaps that’s the word that, for the general public, would allay a lot of fears. It has a lot of historical precedent, it has a calming influence, and you could use it in many different ways with a lot of integrity, and I think it could open up a whole lot of really valuable discussions.
Amy Gutmann:
We actually are considering the notion of a principle of responsible stewardship, so that comment is very welcome. Other members of the Commission, questions or comments? Alex.
Alexander Garza:
I just want to ask Dr. Moreno — I think what I heard in your comments is that you weren’t advocating a position on whether it was a good thing or a bad thing, just that people need to be aware of the possibility. And as someone who thinks about risk every day as part of my job, those are the things that I think of as well. And I think you also succinctly summarized the book The Black Swan, which I think is a terrific read on risk.
But if I hear you correctly, I think your position is that it’s not something to be feared but something to be understood — and using your word "hedged" — or develop some hedging strategies ahead of time rather than figuring out that a plane can be used as a weapon of mass destruction.
Jonathan Moreno:
Well said. I have nothing to add to that. I think that captures the spirit of what I was getting to.
Amy Gutmann:
Barbara?
Barbara Atkinson:
Just a little related to that is there’s the risk of an accidental something — we have talked about that — versus a real bioterrorism something, and if you are thinking about risk, how do you balance? It seems to me the bioterrorism part is the one that we have to separate from the accident part.
Jonathan Moreno:
Yeah, that is a really nice point. I’d like to talk to you more about that, because the public health people know that essentially the principal response is the same. Why is it so much different for us sociologically, psychologically, when it’s a result of human evil intention? That has huge consequences. We saw that on 9/11.
I think there’s something about the continuing reaction to 9/11, to the anniversary, as it is to Pearl Harbor and so forth. Thousands of people have been killed under terrible circumstances before, but the fact that it was a result of human intention separates out those events for us. But then what is the reasonable response to that above and beyond the public health response? That’s beyond my pay grade.
Amy Gutmann:
Yes, Anita?
Anita Allen:
Just a quick thought and question. Both of you seem to implicitly, and maybe even explicitly at one point, put out the view that life and culture, the web of nature and web of life, have intrinsic value. And I’m wondering whether you think the Commission should embody that discourse of intrinsic value in its work, because, I mean, why care about that biodiversity in situ, and why care about Lady Gaga or human beings?
Amy Gutmann:
Lady Gaga?
Anita Allen:
Should we really care?
Jonathan Moreno:
I think that question answers itself.
Anita Allen:
But the serious question here is, do we need this concept of intrinsic value in our risk assessment? If so, and what kind of role should it have and how should we understand it?
Bryan Norton:
I’d like to respond to that. What I tried to say in the somewhat longer remarks of which you now have a copy is that this concept — let’s start with the idea of instrumental versus non-instrumental. Okay? And what I would say is that a very wide range of the population would, if asked under the right circumstances, respond to the non-instrumental thing. On the other hand, if you go into the theoretical, right, if you start talking to environmental ethicists, some of them will say that the only way we could save the world is to attribute intrinsic value to it. Other people would have — and then even within that view, it splits into dozens or at least a large number of different interpretations of what intrinsic value might mean.
And then you just have to step over one step and say, “Oh, well, intrinsic value is independent of humans.” But what about independence from a human’s specific needs and wants, like when some people talk about the spiritual value of nature? Well, that’s not non-anthropocentric. It’s very anthropocentric. But yet it’s giving a special place to other species.
And economists even would talk about what they call “existence value,” the value somebody puts on something even though they don’t expect ever to use it in any way and so on.
So what I would say is that we have this range of value positions. Almost all of the positions recognize that nature is useful to us. And then there is the question of how far you push that. Right? And what I would say is that, once you’re over on that side of accepting non-instrumental value, it probably doesn’t matter so much how you parse it, how you interpret it, how you define it. What is important is that almost everybody agrees that carelessly destroying life forms that have evolved over thousands and millions of years is an act of arrogance and unacceptable behavior. So I think the policy implications are there, despite all of the background noise.
Amy Gutmann:
Jon, I’m going to give you the last word before we sum up.
Jonathan Moreno:
I think there is a resonance here, in response to Professor Allen. There is a resonance for Americans in thinking about the human adventure, because we sort of think we embody it here, you know, not always well but as well as we can, given the human limitations. And I think there’s something to be said for the idea of instrumental versus non-instrumental. I mean, it’s a little bit like mom and apple pie, I grant you, but it’s another way of framing what the issues are, what the general concern is. There is some intrinsic value, however one defines it, to the continuation of the diversity of the web of life, and it resonates with us, and for complicated psychological and evolutionary reasons we get it. I think there’s something to be said for articulating that.
Amy Gutmann:
We thank you for articulating some of these concerns and some of the challenges that we have before us. Thank you all. First, let’s thank John and Bryan for a really wonderful presentation.
[AUDIENCE APPLAUSE]
Since the hour is late, but we’re on time, I just want to say that tomorrow we’ll focus on ways in which synthetic biology is helping to expand interest in biology inside and outside of the academy and industry. We’re also going to take a much closer look at concerns about biosafety and biosecurity, and we’ll hear about some current federal efforts to address these issues and the views of industry on its role. And I’m just going to ask Jim Wagner to have the final words before we adjourn for the afternoon.
Jim Wagner:
Thanks, Amy, and thanks again for your leadership.
It was a day, I think, for the Commission that was rich in input. I heard a lot of conversation about value and values: the value of deliberative process, as we were given examples from past deliberative bodies; the value of innovation, for understanding some of the enormous potential of this technology for the public good; the value of wise commercialization that allows for continued intellectual freedom and attendant fairness of access to the benefits; the value of wisdom, whether from moral philosophy or religious bases, that will help guide us in responsible stewardship over this new power as it has an impact on the environment; and the value of understanding risk, and cautions around the intrinsic value of many things that we have been deliberating on as a commission, and with great thanks to you all. See you tomorrow.