TRANSCRIPT: Meeting 3, Session 3

Emerging Technology Framework & Next Steps for the Commission


November 16, 2010


Atlanta, Ga.



Tom Murray, Ph.D.
President, Hastings Center



Amy Gutmann:
It’s an honor for me to introduce Tom Murray. Tom is a distinguished bioethicist, and he is also the President of the Hastings Center. Many of us here on the Commission and, I’m sure, some people in the audience today have had an association with the Hastings Center. It is hard to be in the field of bioethics and not to have an association with the Hastings Center. My professional association with the Center began very early in my career, when I was named a Hastings Fellow and then served as the Center’s Vice President. Along with many of my fellow commissioners, I continue as a Hastings Center Fellow, and I appreciate the Hastings Center’s strong record of being at the forefront of research and discussion of important bioethical issues. The Hastings Center was there at the beginning and has only gotten stronger. One of its strengths is evidenced in the fact that while it began pretty much alone in the field of bioethics, it has helped to support and generate a significant number of bioethics centers, indeed, so significant that we’ve probably all lost count of the number there are around this country and the world.
Tom has a long and distinguished career in bioethics. He serves on numerous editorial boards; he has been President of the Society for Health and Human Values and of the American Society for Bioethics and Humanities; and, among other current posts, he has served as International Expert Advisor to Singapore’s Bioethics Advisory Committee. Tom served on the National Bioethics Advisory Commission during the Clinton Administration, and he has testified before many congressional committees. Dr. Murray is the author of more than 250 publications, and he is also editor, with Maxwell Mehlman, of the Encyclopedia of Ethical, Legal, and Policy Issues in Biotechnology. Dr. Murray is currently Principal Investigator at the Hastings Center on a project, funded by the Alfred P. Sloan Foundation, on ethics and synthetic biology.
In sum, Tom is a wise and knowledgeable expert in bioethics and we are pleased to have him here to share his thoughts with us today. Without further ado, we welcome you, Tom.
Tom Murray:
I’ve always thought it was a curious practice to applaud before you know what someone has to say, because it’s very difficult to take it back later, but thank you for the very generous introduction.
I could refer to you as President Gutmann, or Chairperson Gutmann, but I know you as Amy and I’m Tom, and I just know too many people around here personally. My apologies to all the rest of the people in the room who see my back; it’s rude, but I think I had no option but to sit in this orientation.
Well, my goal is to be useful to you in your work with synthetic biology and, if possible, beyond. Your very capable Director, Valerie Bonham, asked me to draw on my experience with a number of deliberative bodies, including a previous President’s Commission. I was trying to think of all the NIH panels and blue-ribbon panels I’ve been a member of, and I’ve lost count. I chair the World Anti-Doping Agency’s ethics panel, so there have been many occasions on which I’ve tried to do, or at least encourage, the kind of deliberation that you clearly are trying to do. And in addition we have a project at the Hastings Center on ethics and synthetic biology, so I’ve had the opportunity to become familiar with the details of it.
So I’m going to offer whatever insights I may have about emerging technologies, especially synthetic biology, and about what you can do to maximize the impact of your work. I think that’s the goal.
So Val asked me to draw on that experience and also to look at the draft recommendations, which I had as of Friday, and I won’t do any wordsmithing. In my view, wordsmithing almost never works in a committee setting anyway. But I will talk about some context issues to try to make the recommendations maximally effective. So I’m going to focus on how your recommendations may be misread, misconstrued, fail to be implemented, be implemented poorly, or otherwise not accomplish what you intend. It’s much better to have your friends do this, and that’s why I treasure good editors and I treasure the people who read my articles and other manuscripts before they actually get published. They prevent me from making some otherwise significant mistakes, none of which I think you’re making, by the way. I think the recommendations are in very good shape. Let me start off by saying that I don’t see any big problems.
So I want to start by addressing, briefly, two broad questions: What is this report to be about, on the one hand, and to whom is it addressed? The first challenge, really, and it’s one that we’ve encountered in our work at the Center, has been to establish the contours of synthetic biology. Now, you have probably had a similar introduction to the science and technology as we have, and truly, synthetic biology is not one thing. It is multiple streams. In the future some of these streams may converge, certainly in the visions of some of the founders of the different streams. Others may never touch each other and stay separate. Some streams may peter out in the desert and others may become a powerful torrent; we just don’t know the fate of the various streams. I’d be happy to talk about how we see the different streams, but I suspect you already know that, so I’m not going to waste your time.
The question is, what do these streams have in common? A couple of features at least, one being a determination to exert human mastery over biological entities, be they Lego-like fragments, microbes whose genomes were refashioned with the aid of a computer, or a yeast colony whose metabolic pathways have been diverted to help make vital medicines, for example.
Each stream, however revolutionary its vision, relies on tools and methods that are quite old in the scientific sense. The Cohen-Boyer paper of 1973, which quickly led to the Asilomar Conference, was the beginning of our ability to, as our scientist friends describe it, “hack” DNA. So that is a sort of horizontal challenge in determining the contours of synthetic biology, all these streams moving along. But there is also a kind of vertical, or temporal, dimension to it, namely that all of these streams are united by the fact that they are different ways of “hacking biology.”
Now, that phrase is not my phrase. It comes from the scientific advisors to the Hastings Center project, and I use it advisedly. When a certain congressional committee’s staff called us at the Center to ask for our help and have us come testify at a hearing, and I used this phrase, there was silence at the other end of the line. They said, “Please don’t say that in the congressional hearing.” To a congressional committee, “hacking” can only be a bad thing. To scientists in the biological sciences, though, it’s not a bad thing; it just means that there are things you can do to cut apart and reassemble bits and pieces of DNA, particularly, to accomplish whatever research or other aims you may have in mind.
So the Commission faces this kind of challenge as you formulate your recommendations. All I’ve seen is the text of the recommendations and not the full report, so you may have already addressed this. Do you apply your recommendations only to what we’re currently calling synthetic biology and its various streams, or do you in fact look back and say there are other ways in which scientists are working with biological systems, hacking biology: what we would regard as more traditional forms of genetic engineering and such? Should your recommendations also apply to those? It will not be easy to draw a clear, distinct, and defensible line between what’s currently marketed as synthetic biology and the other things that biological scientists are doing these days. So just be aware of that.
I guess I never quite dived down on it, but when I was trying to deal with what I thought was an exaggerated emphasis on the significance of genetic information relative to other kinds of health-related information, I defined the concept of genetic exceptionalism and tried to take it down. So be careful about whether you want to embark on syn bio exceptionalism or whether in fact you want to tie it in and acknowledge this connection with other forms of biological research. That’s that somewhat jumbled point.
And to whom is the report directed? You’ve done a commendable job, I think, on the whole, of making clear in the draft recommendations which federal agencies or other entities each recommendation is addressed to. That’s good. These entities will therefore know that they’re responsible for responding to your recommendations. When we were on NBAC (the National Bioethics Advisory Commission, from 1996 to 2001), the one power we had was something called a forcing power. I don’t know if that’s in your rules or not, but it sounds way more powerful than it is. All it means is that you address a recommendation to some federal agency and say, “you must respond to our recommendation.” The response could be, and I’m attempting to make a gesture here, “this is the dumbest thing I’ve ever heard.” But that is a response, and all we could demand was that they respond. It turned out to have some use, by the way, because you could put them on the spot, and in fact, if your recommendations were sensible and not foolish, they would be less likely to just dismiss them out of hand.
So I want to focus on what I will call the practitioners of synthetic biology. On the one hand, we have professional practitioners, and, as one of the commissioners noted, if you walk into a synthetic biology laboratory, you’re as likely to see a physicist, a synthetic chemist, a mathematician, or an engineer as you are to see a biologist. It’s a remarkable confluence of different kinds of disciplines and professional experiences and backgrounds.
To me that has at least two significant implications. Number one, what may be most revolutionary in synthetic biology is its application of an engineering mindset to biological systems, an engineering mindset that, in our experience at least, values control over understanding. That’s not a bad thing. It’s just a different way of looking at the world. Engineers are less interested in understanding all the different variations of complexity than they are in establishing something they can predict and control: building something that does what they want it to do, as they want it to do it, and doesn’t do other things.
Another point, and I think Christine Grady may have mentioned this, is that they come from different traditions, different backgrounds. Engineers come from schools of engineering, and biologists come from schools of arts and sciences or perhaps medical schools. The training is different. It may include different training about ethical responsibilities, professional obligations, and larger obligations to society. This is, again, not a complaint. It’s not saying one is better than the other; it’s just that these are different traditions, and we need to bear that in mind if we are formulating recommendations about how to assure a socially responsible outgrowth of synthetic biology.
We have to pay attention to all of the various streams that are contributing to the burgeoning practice of synthetic biology. For example, recommendation 9, which is on ethics education: why not include the National Academy of Engineering along with the National Academy of Sciences? In general, any time you refer to people doing synthetic biology, the language of the recommendations often refers to scientists; I would add “and engineers,” just to be more inclusive, because that’s the reality of the way it’s being practiced right now.
This relatively new concatenation of professional practitioners of synthetic biology also raises a kind of concern. I’ll try to phrase it as best I can: as the relative participation of biologists, familiar with the complexities and non-linearities of biological systems, diminishes, so may an appreciation of the consequences of intentional or unintentional perturbations of, for example, ecosystems. It’s just not the way the other disciplines think about it. Whole-organism biologists, and even microbial biologists, are trained to think about whole organisms and about environments and ecosystems. That is less true of some molecular biologists, and probably less true of some of the other people now coming into synthetic biology. You don’t need to wear Birkenstocks and have a ponytail to notice that not all human interventions in environments, intentional or unintentional, have gone well. Some have, some haven’t. The zebra mussels in the Great Lakes, or the kudzu, which I guess is still growing in the Atlanta area, are two examples.
Why is this important? We need to make sure the people who are on the leading edge of synthetic biology understand the complexities of the systems they will eventually purport to tinker with.
I was at a meeting just last week for the Department of Energy and the Sloan Foundation-funded investigators looking at ethical, legal, and social issues of synthetic biology. I met Nathan Hillson there, who’s working in California as a synthetic biologist, and he noted that, in their effort to do a kind of risk assessment, they’ve concluded that focusing on organisms, even microbes, is the wrong focus. The unit of analysis needs to be genes, or sets of genes. Again, why is that important? Well, in response to a journalist who was interviewing me, I tried to explain how it is that microbes can actually exchange DNA, so I called it microbial French-kissing, and that does go on. But so does microbial scavenging: if you break open the cell wall of some microbe that you’ve engineered, say that suicide gene breaks it open, those genes are hanging out there for other microbes to come along and vacuum up and incorporate into their own DNA. So it would be helpful to have those kinds of sensibilities built in at the front end.
There was some conversation about the DIY or garage practitioners, which is the second large group. We went online about a year ago just to see what it would cost to buy a DNA synthesizer, and you can pick one up on eBay for under $9,000. With shipping, maybe a little over nine, but it’s about $9,000. And that’s an international marketplace. Here’s where the hacking metaphor really acquires its force. Look at the BioBricks movement, which has, in my view, a commendable commitment to an open-source model. There you see a very conscious echo of software engineering, right? Anyone who has benefited from work on Linux or Firefox, which are open-source programs, knows that this can provide huge benefits.
We also know that people can tinker with software and create various forms of malware, including viruses, Trojan horses, and the like, and that’s where the analogy becomes a little discomforting. I think Amy’s correct: right now, I’m not worried about what’s going on in garages with synthetic biology. Five or ten years from now, as the tools become cheaper and more widely available and the number of practitioners expands significantly, some will be doing essentially the basement chemistry experiments we did as kids, but some will be doing more ambitious things.
Just as a thought experiment, not to frighten anybody: I have a smartphone here. Some people jailbreak smartphones. That’s the phrase, I think, that’s used when they try to escape the carrier. You buy it, subsidized, from a particular carrier, Verizon or AT&T or Sprint or whoever, and then people tinker with it in order to do whatever they want with it, so it is no longer under the control of the carrier they bought it from. That’s called jailbreaking. Will we have jailbreaking in synthetic biology?
Suppose you build something for a useful purpose and you insert a suicide gene, and some clever person comes along and, in their garage, figures out a way to get inside it and remove the suicide gene. That’s not today, and no one should rush home worried about it, but those are the kinds of maneuvers it would be prudent to anticipate. So one comment I have about recommendation 11 in this connection is: do everything you can to avoid reinforcing an “us vs. them” mentality with the community of synthetic biology. Make engaging that community a priority. It’s good that it’s still in a nascent phase. You can work with them now and encourage them. I hadn’t heard Drew Endy’s phrase “do it together” before, but it’s perfect. You want to actually generate a true community that will establish its own norms, and in order to do that we need to understand the motives, the ambitions, and the priorities of the people who are doing DIY synthetic biology, and then enlist them as best you can.
And here the analogous experience I have is doping in sport. Doping is an odd word. It means the use of performance-enhancing drugs and other prohibited technologies in sport, and my sense is that to the extent that the athletes, who are, after all, the ones affected by those who cheat with the use of drugs, have been effectively recruited and involved in helping to set and enforce policies, doping control goes better. To the extent that it’s imposed from the outside, you end up with us vs. them, and it just doesn’t go well at all.
Shall I make a few comments about some of the recommendations? The first note I made before I started reading them was “assign specific responsibility whenever you can” and I think you did a good job with that.
I would rather have a conversation than continue endlessly, so I will only pick out a few of the recommendations to discuss.
Number one, under public beneficence, calls for a coordinated evaluation of current public funding for synthetic biology activities, including risk assessment and the consideration of ethical and social issues. You may have revised that; I only got here about two this afternoon. That’s excellent. Be aware of one trap that the genome ELSI program has fallen into, and that is the trap I will call “only the things that can be counted, count.” There has been a real movement away from serious engagement with normative and conceptual issues into data gathering. In fact, those two activities are complementary when they’re done properly, and some have managed to do both very creatively and very well. But there is definitely a domination of what a friend of mine who is on the study section calls the bean-counter mentality. So we just want, I think, to be on the lookout for that.
I should say that your recommendation 4 begins by saying that there’s no need at this time to create additional agencies or oversight bodies focused specifically on synthetic biology. I enthusiastically support that decision for a variety of reasons, one of them being that if we are in fact to see the continuity of syn bio with other activities, what’s the point of singling out syn bio?
Recommendation 5 calls for risk analysis, which again, I agree, is a very important, and I think not fully understood, process. I say this from a variety of perspectives, but one of them is one of those countless bodies I serve on: I’m on an NIH blue-ribbon panel right now that was convened to help work through a conflict that has arisen over Boston University, which received a large grant to build what’s called a biosafety level 4 lab on its campus. BSL-4 labs are the highest level; you can deal with the most dangerous organisms in BSL-4 labs. And the local community, or at least some segments of it, rose up in opposition. The panel’s job is not to solve the problem. We had two duties: first, to make sure community voices were heard and their concerns were fully taken into account; and second, to make sure that the best risk analysis possible, given current tools, was done.
So, two points. One is about public participation. I think Amy mentioned this, and I can’t remember exactly how you put it, but with your pioneering work on democracy and deliberation, you have an opportunity that no previous commission has had, or certainly none has taken advantage of, and that is social media. You have an opportunity to encourage a kind of large-scale public conversation, a dialogue, about whatever issues you’re working on, not just synthetic biology. We never had that with NBAC; there was no social network then. You can do things that no commission has ever done before, and it will be an experiment, but I heartily encourage you to do it.
The second lesson from that blue-ribbon panel experience is that even the most sophisticated tools of risk analysis contain philosophical assumptions and preset positions that probably require a great deal more vetting and analysis before we can be comfortable with them.
I would recommend you take a look at number 6, which begins: new organisms should be marked or branded in some manner. There you refer to suicide genes, or to basically making them hothouse organisms that couldn’t survive outside because they require specific nutrients that are not generally available. That all seems right to me, but my worry is that there’s a little bit of the same problem in this recommendation as with a living will as an advance directive. Do you know what living wills are? Most of you do; my bioethicist friends are all nodding because, of course, we know what they are. Living wills say: if X happens, don’t do A, B, or C. But we’re very bad at anticipating whether in fact it’s going to be X. It looks kind of like X, but it’s a little closer to Y, and besides, we don’t treat X with A, B, and C anymore; we gave that up two years ago. So in living wills, specificity becomes a vice, not a virtue. To the extent that you can frame this recommendation in the most general terms possible, rather than listing specific things, you’re probably better off.
International cooperation, in number 8, I think is terrific. Very important. I’ve already spoken a bit about the required ethics education and asked that you include engineers and the National Academy of Engineering as well. I don’t know how to accomplish this, but it’s so important, and from things that were said I think you feel the same way, to do whatever ethics education we do in a way that really engages students and doesn’t become just a box to check. It is so common that a required ethics course, the professional or research integrity course, can be done very well, or it can be just a thing where you sit through X hours of it, check a box, and never have to think about it again. We really need to use our creativity to make it meaningful.
In recommendation 10, the first one under intellectual freedom and responsibility, you recommend continued cultural responsibility and self-regulation. I did hear a very sophisticated discussion of the different kinds of entities in which synthetic biologists may work, and it’s very important to take note of that, because the reach of your recommendations varies enormously depending on the kinds of entities involved.
And here is one other involvement that I had: after the death of Jesse Gelsinger in a gene therapy trial, the NIH assembled a small group to look at its policies with respect to its oversight of research on genetic manipulation, gene therapy. I took two lessons from that. One was that you really have to pay attention to the forces operating on the scientists and on the institutions in which they work. Whatever one thinks of large pharmaceutical companies, they tend to have lots of irons in the fire at any one time, whereas a small biotech company often has one idea and not much money, so it has got to meet its benchmarks. The forces that encouraged those companies to move forward more rapidly, and to pay a little less attention to the obstacles in their way, can be very powerful. That’s one lesson, and it’s just about the different kinds of entities. The second is that clarity and consistency in the obligations imposed are incredibly important. It turned out that if you did this kind of research, you had reporting requirements to the NIH and reporting requirements to the FDA. They used different standards and different timetables, and the obligations fell on different entities: some fell on investigators, some on the sponsors. It was a complete mess. Most of the people we spoke with had no idea what they were supposed to be telling whom, about what, and when. So clarity and consistency are vital.
And the discussion of export controls I also thought very sophisticated. Be aware that, to the extent export controls are instituted, I wasn’t sure what you would not allow to be exported. Machines? They’re on eBay. Sequences? Particular synthesized sequences? The US isn’t the only place that can do that. If in fact we did have a monopoly on certain kinds of technologies in synthetic biology and we refused to export them, we may or may not have good reason to do that, but you can anticipate accusations from around the world that the US wants to keep the best and most dangerous stuff for itself, and it will feed the kind of paranoia that one finds.
As you can see, you’re learning from my scars as much as anything else. Well, I think I’ve spoken long enough, I would really rather now have a conversation. Thank you.
Amy Gutmann:
Tom, thank you very much. Very helpful as you intended to be.
Let me just run down a few of the things that you’ve said, along with our responses, which I think are altogether accepting, and then open it up for anybody, any Commission members or members of the audience, to make comments and ask questions.
So, for public beneficence, you made the point on our first recommendation about how important it is that risk assessment not just be what you called bean counting, but take into account consideration of ethical and social issues, which we’ve included in this recommendation. I think we’ll make sure in the text that we emphasize that risk assessment should not be reduced to just the things that are easily measured, but should also take into account, as we say, ethical and social issues. That is very important, and we all believe that. But it’s also part of our explicit charge.
On 4, you just said we did it right, but I think it is important sometimes to say what you don’t need to do as well as what you do need to do, and we said we really saw no need to create additional agencies or oversight bodies. I’m glad you agree with that, because you actually did an investigation independent of ours, and it’s good to get that reinforced.
What I see you saying on 5, which is helpful, and this is where we talked about prudent vigilance and a kind of iterative risk analysis in the face of uncertainty, is that we should be sure to advise that community voices be heard. We haven’t actually explicitly done that, but it’s definitely in the spirit of the ongoing, inclusive deliberation we’re talking about, and I take it that also fits your overarching advice to us, which is, in our recommendations, to have an ear for not unintentionally creating a we/them mentality. So I think that’s very helpful.
And I’ll just end on 8, the international one: you wholeheartedly endorsed that we need to work internationally and not simply domestically, especially on biosecurity but also in other areas, because synthetic biology is global in its reach as well as in its pursuit. Finally, I’ll say, and then I’ll open it up, that what you’ve advised us overall, which we’ve also taken to heart, is to be very clear that most if not all of our recommendations apply not only to synthetic biology but to biotechnology more generally. So with that I will open it up. John, would you like to begin? And again, anybody in the audience who has a question or comment, if you stand up at the mic, I will recognize you after the next speaker.
John Arras:
Yeah, thanks so much, Tom. I want to bring you back to number 5 here, actually, to talk a bit more about risk analysis. As you know, we’re trying to split the difference here between a proactive, let-science-rip mentality and a European-style precautionary principle, and we framed that in terms of prudent vigilance. Largely we see this, I think, as a procedural way of coming to grips with a very difficult problem: how actually to think about issues where the threat is low-incidence but the magnitude of possible damage is high.
I recall you saying something not too long ago about the Hastings Center undertaking a project on just that question, how to rethink the ethics of risk analysis, and I’m wondering if you could share with us what went into your grant proposal. In other words, apart from the sorts of generalizations we’re making here about the need to split the difference between these two very opposed stances on risk, is there anything more that needs to be said, or that needs to be said in a document like this?
Tom Murray:
The grant proposal is not yet in, so it’s a little premature. I got an email from the Port of Muenster asking to have it in in about a week and a half, so we’re going to move swiftly on this. Let me give an analogy, which I hope will be illuminating, particularly since so many of you come from the health professions.
As you know, there’s a lot of sentiment in favor of preventive measures rather than just taking care of people after they’re sick, and there are ways to assess the cost-effectiveness of a variety of different measures in medicine, including preventive measures. There’s also this thing called a discount rate, with present-value calculations: if I spend $100 today to save a life twenty years from now, the present value of the life saved is very small, because every year out, the value diminishes by a fixed percentage.
So the question is, is that the right model for thinking about how to assess preventive measures? Should we use this kind of aggressive discount-rate computation of present value for such interventions? So that’s one way in which, and it’s not risk analysis per se, the built-in assumptions may in fact not reflect our deepest considered values in making public policy.
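[The present-value arithmetic described above can be sketched briefly. The 3% and 7% discount rates and the 20-year horizon below are illustrative assumptions, not figures from the discussion.]

```python
def present_value(future_value: float, discount_rate: float, years: int) -> float:
    """Discount a future benefit back to today: PV = FV / (1 + r)^n."""
    return future_value / (1.0 + discount_rate) ** years

# A benefit valued at $100, realized 20 years from now, shrinks sharply
# under standard discounting, which is why distant preventive benefits
# can look small in cost-effectiveness calculations.
print(round(present_value(100.0, 0.03, 20), 2))  # ~55.37 at an assumed 3% rate
print(round(present_value(100.0, 0.07, 20), 2))  # ~25.84 at an assumed 7% rate
```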
In the case of synthetic biology, I was struck by a comment a scientist made to my colleague Greg Kaebnick. He said, with no irony, “I think synthetic biology is great; the only thing I’m worried about is the possibility of catastrophe.” Well, we worry about that too, and we’d like to look at the various tools that are used to do risk assessment. I’ve watched them in process on the current blue-ribbon panel on which I serve, and it strikes me that there are many assumptions that are not being fully questioned. We’d be happy to continue our conversation with you about that, but I don’t think I have a whole lot more insight for you at the moment.
Amy Gutmann:
Daniel Sulmasy:
Tom, you were somewhat silent about the overarching framework under which we organize the principles. Maybe that’s because we didn’t give you any detail about them. But you concentrated on the recommendations rather than the framework, and I was wondering whether you thought the framework useful or vacuous. Is it sufficiently comprehensive to apply, as you were suggesting, to other kinds of emerging technologies, or did you just see the principles as headlines that you didn’t really think about?
Tom Murray:
Well, it began as the latter, but upon looking at them, and hearing the discussion — I’ve been here since 2:00 — and seeing how you framed it, I think it’s a useful framing. I don’t have anything better to offer, and I think it is good to group them under these kinds of broad categories. I love it that you’re talking about responsible stewardship. The concept of stewardship is one we’ve been trying to inject into the debate over health reform for quite a while now, and I think it absolutely has pride of place in your deliberation. So I would congratulate you on the framework, and I have no modifications to suggest to it.
Amy Gutmann:
Thank you. Nita.
Nita Farahany:
Thank you. It’s a pleasure to hear your thoughts on the recommendations as well. I wanted to turn specifically to recommendation number 13, which you commented on. Your comment makes me a little bit wary about the recommendation itself, because I don’t understand it to be recommending any sort of export controls. Rather, I understand us to be recommending that, given whatever export controls may exist, we want to make sure the scientists are part of the policy-making discussions, to ensure that there isn’t an undue limitation on intellectual freedom in light of those export controls. So to your point about how other people may read or potentially misconstrue it, I just want to say, that was my understanding of it; if you’re reading it differently, I find that quite helpful, so that we can be careful in wording recommendations.
On a totally different side-note, because I’m an iPhone fanatic, I thought I would share this with you: jailbreaking is in fact when you make the phone open to third-party applications; you then have to go a further step to unlock it, so you can use a different carrier. Just as a side-note.
James Wagner:
How hard has that been for you?
Nita Farahany:
So it’s actually quite easy to do, but nevertheless, it’s a two-step process; you really have to hack into it to make that happen.
Tom Murray:
That’s so far beyond my capacity that I have no …
James Wagner:
Tom, first of all, thanks for the presentation and for taking the time to come here; it’s wonderful, and I appreciate it. Your comment was that we need to remember that the unit of analysis is the gene. Did it occur to you that maybe we were straying from that? What prompted you to make that observation here? And maybe say a little more about it.
Tom Murray:
So, how I got to make that point. I’m not sure that’s the right answer; that’s the answer that Hillson and his group have come up with, and it was a plausible answer: that you would build in safeguards, such as self-destruct mechanisms, into synthetically engineered organisms that did things that were very powerful and transformative, where you didn’t want those powers to escape, so you create a way for the organism to sort of destroy itself or to be destroyed at your will. The problem is, if the power that you build into it is basically vacuumed up by another organism that has no such safeguard and no suicide gene built in, that’s an issue that needs attention.
James Wagner:
And particularly, I guess, to the extent that some of these containment concepts would be based on the chassis — the scaffold, if you will, that executes the will of the genome — it may in fact be the chassis that was designed to have the hothouse properties, properties that don’t hold up out in the environment. The gene could survive out there in spite of the fact that the chassis could not. One imagines, I suppose, that we could be going in the same direction that we did in so much of molecular biology, looking for common chassis, bacteriophages or something like this, that would carry these things. Do we need to be more explicit about that in these recommendations?
Tom Murray:
If we are to take Hillson and his group’s analysis seriously — and that’s your call — it’s probably worth saying something more general about protective mechanisms and that we should not assume that the strategies that are currently being advertised are in fact A) going to be the best ones, or B) really work —
Amy Gutmann:
Right, and we’re in total agreement on that, that’s very helpful. Christine.
Christine Grady:
Thank you, Tom, for your wisdom. There were two things that you said I wanted to ask about. I know you only saw the recommendations, so that may have limited what you can say about all of the things that we’re thinking, but you said something like, we all know that not all progress has gone well, or something like that. So my question is, do you think that either we didn’t, or we should, pay more attention to that in what we say? In other words, are we too optimistic about the benefits of things? Is that what your view is?
Tom Murray:
I think the examples I gave were that not all interventions in nature have gone well. I mentioned the zebra mussels, which, when we lived in Cleveland, were becoming a problem, altering the ecosystem in the Great Lakes, and kudzu. But you can multiply the examples. How many times have people brought a really interesting plant back from someplace else, and the plant has turned out to be a scourge or has carried with it some insect that has attacked local — the point is, ecosystems are complicated things, and our powers of prediction about, and control over, what happens when we nudge them are far more modest than I think an engineering mindset would find comfortable, by and large. Certainly the synthetic biology engineering mindset. That’s really the point that I wanted to make. Do you need to say any more here? I’m not sure; I don’t know the full report.
Amy Gutmann:
That came across loud and clear in the testimony we heard from the scientific community rooted in biology about the scientific community rooted in engineering. There was a sense — from the biological community’s perspective, at least the representatives we had, including Sydney Brenner himself — that it’s very important to pay attention to how much we don’t know about the systemic effects of introducing new living entities into the world. And the synthetic biologists we spoke to were sympathetic to that, but they didn’t claim that they were expert at knowing it, so it’s something we attended to and will make sure is adequately addressed in the report. Anita.
Anita Allen:
I have two questions for you, Tom. The first one relates to when Ruth Faden came and spoke to the Commission: she suggested that one service we might perform would be to come up with some principles that would apply not only to synthetic biology, but to all new technologies moving forward. Then in your opening remarks, you said something about whether we intend to apply these principles to just synthetic biology, or also intend that they be applied, looking back, to past technologies and to future ones. I wondered whether you think it would be, or would have been, a better thing for us to come up with some very generic principles that could apply to many different new technologies and not just synthetic biology.
The second question: in your remarks about jailbreaking and your comments about Linux and so forth, I sensed that maybe you were inclined to think that data-security and data-breaches were a very special category of risk in this field. We talked earlier about how it’s all about data, all about information that could easily be accessed illegitimately. So I wondered whether you think that maybe the Commission should have a special recommendation that deals just with those special risks — a special risk category of data-breaching and data-security?
Tom Murray:
What kind of data-breaching would you have in mind, here?
Anita Allen:
Well, if someone has data on their computer, or it’s in the cloud, or there’s data on somebody’s hard drive — a less sophisticated user, for example — and it’s the kind of data that’s going to be exploited for nefarious purposes, as Nita mentioned earlier, and then that data is hacked into further by an even more diabolical person, we’d have a problem. So does the fact that we’re talking about very sensitive information that could be put to bad purposes enhance the need to be concerned about data-breaches and data-security? And does that warrant a separate recommendation, or a part where we could focus on ways to encourage actors to be mindful of a special need for data-security in this context?
Tom Murray:
Well, that’s a very interesting proposal. On the one hand, we are, by and large, very reluctant to hold information closely — a sequence, per se; a genome sequence, for example. We’ve published, and we know, the sequence of the 1918 pandemic influenza, and we’ve published and we know other sequences of really nasty pathogens, so what would the secret be that we would require —
Anita Allen:
So maybe I’ve discovered in my lab some particular — “organism” may be too strong, but some particular application of the science, of the [inaudible] of information, which probably shouldn’t just get out to anybody, but yet my data-security isn’t so good, and so things can be hacked into and leaked out. That’s the kind of thing I’m imagining, where some application based on readily available BioBricks, readily available data and technology, is then available to those who make bad uses of it. So should we have some special precaution here in this context, because the implications of the data-breach would be so … And one of the reasons I’m asking this is because in some other fields totally unrelated to this one — say financial privacy, for example — there’s a huge interest in making sure that data doesn’t get out, that we don’t inadvertently or intentionally let data that shouldn’t go beyond where it is get out. So I was thinking earlier today that maybe we might need to focus more on data-security, and some of your comments made me think that you were especially concerned about data-security.
Tom Murray:
I’ll have to think about that one, Anita. It seems to me that holding financial information closely because you don’t want people making investments based on it — insider trading, for example — well, that’s dealing with a particular set of forces and social environments, very different from the scientific community, which relies on openness insofar as possible and is extremely reluctant ever to restrain publication and sharing of information. So I guess it’s a contingent answer. To the extent that something arises that could be so easily used in a malevolent way, absolutely we should be very mindful of security with that. That’s likely to be quite a rare occurrence, at least in the near future.
Amy Gutmann:
So we are running out of time. Given how much we have already learned from you, we can thank you again; you did not disappoint. And we know that we have an open line — if not a cell phone line, an email line — which will not breach any security, for us to pick your brain some more. So thank you very much, Tom.
Amy Gutmann:
I’m just going to sum up and ask Jim if he has anything …
James Wagner:
I will be happy with your summary …
Amy Gutmann:
Okay, so I’m not going to try to summarize today, but simply to say we have two principles yet to discuss tomorrow, namely democratic deliberation and justice and fairness. We also have a special guest tomorrow, Hugh Whittall from the Nuffield Council on Bioethics, who will talk to us about what their work has been in this area and also allow us to actually engage in an international dialogue on synthetic biology.
I do think — I guess I’ll say one substantive comment in summing up. One of the things Tom Murray suggested that we’re doing, which he was particularly pleased about, and something that a number of the experts who’ve presented and the participants from the public have led us to, is the principle of responsible stewardship. The way we’re interpreting it — which, as John said, I thought quite wisely, is a kind of Aristotelian mean between extreme pro-action and extreme precaution — is, I would say, both procedural and substantive. There really are substantive values packed into it, and the substantive value is that what a society doesn’t do needs to be evaluated as seriously as what it does do. So if, by not developing science and therefore not developing cures for malaria or better vaccines or bio-friendly fuels, we don’t enhance people’s lives and don’t save lives that could be saved, that’s as serious a failing as if we don’t take precautions against risks that can be known and assessed ahead of time.
So what we’re trying to do in our report, and in many of these recommendations, is steer a course that takes both of those values seriously: the value of what human innovation and creativity can contribute to saving lives and enhancing lives, and also our ability to think ahead and take precautions against risks. That will always require something of an assessment and a balancing, which we as a Commission aren’t capable of doing in detail, but in many of these recommendations we are recommending a very serious analysis by the agencies that really have the expertise to do it. And with that, I will say thank you all for being with us today, and please come back tomorrow. We look forward to rolling out the rest of our recommendations publicly tomorrow. Thank you very much.


This is a work of the U.S. Government and is not subject to copyright protection in the United States. Foreign copyrights may apply.