in this issue
© 2010 Nature America, Inc. All rights reserved.
Sterile bollworms bolster Bt control
Evolution of resistance by insect pests threatens to cut short the benefits of crops engineered to produce insecticidal proteins from the bacterium Bacillus thuringiensis (Bt). Tabashnik and colleagues use computer simulations and data from an extensive 4-year field trial to demonstrate that a multitactical program, including releases of sterile pink bollworm moths, can thwart the emergence of the pest’s resistance to Bt cotton. Although the sterile insect technique is not new, this is the first time it has been used jointly with a transgenic crop. From 2006 to 2009, 8 billion pink bollworm caterpillars were reared to maturity (pictured), and moths sterilized by irradiation were released over the cotton fields of Arizona. This substantially increased the likelihood that any rare resistant moths emerging from Bt cotton would mate with sterile moths, blocking their reproduction. The results: no increase in pink bollworm resistance to Bt cotton, a 99.9% decrease in the size of Arizona’s pink bollworm population and a sharp reduction in the use of insecticide sprays. It remains to be seen whether the sterile insect technique will be as effective for other combinations of Bt crops and the insect pests that they target. [Letters, p. 1304; News and Views, p. 1273] PH

Synthesizing DNA aplenty
Long synthetic DNA molecules have been assembled from inexpensive short oligonucleotides created on a microarray, but this approach has not been widely adopted owing to high error rates and limited scalability. Studies by two groups working with George Church address these problems. They describe clever approaches to selecting subsets of oligos from the complex pool released from the microarray. Without this selection step, the oligos from a microarray are not abundant enough and may cross-hybridize with each other, interfering with the assembly process. Kosuri et al. select subpools of oligos using PCR, and they lower error rates using high-fidelity microarrays and optimized assembly protocols. Matzas et al. describe a novel use of next-generation sequencing that reduces error rates and selects oligos simultaneously: oligos are read with a high-throughput sequencer to identify the ones that do not have errors, which are then picked off the sequencing instrument with a micropipette. The authors use their methods to synthesize a variety of functioning proteins, including fluorescent proteins and therapeutic antibody fragments. The two approaches each reduce DNA synthesis costs to <$0.01 per nucleotide, a tenfold reduction over existing methods, which should make large-scale projects more feasible. [Letters, p. 1295, p. 1291; News and Views, p. 1272] CM

Modeling metabolite exchange
Metabolic models have largely ignored the exchange of metabolites between different cell types, which may play important roles in the physiology of tissues. Palsson and colleagues model the flow of metabolites from one cell to another using ‘transport reactions’ that couple metabolic networks representing specific cell types and intracellular organelles. This procedure allows the authors to accurately model experimentally measured properties of neurons in the human brain. Palsson and colleagues create models of astrocytes coupled to either γ-aminobutyric acid (GABA)-ergic, cholinergic or glutamatergic neurons. These models are used to analyze the effects of Alzheimer’s disease on different types of neurons and to help understand links between the synthesis of the neurotransmitter acetylcholine and metabolism in mitochondria, which would not have been possible using existing coarse-grained models. The general procedure exemplified in this study may prove useful in modeling other tissues. [Analysis, p. 1279] CM

Nanoparticle tracking
The inhalation of nanoparticles into the lungs can be deleterious (e.g., air pollution and hazardous occupational exposures) or beneficial (e.g., drug delivery by aerosols). But whether one’s goal is to get nanoparticles into the body or to keep them out, it’s important to understand their behavior after deposition on the lung epithelium. Frangioni, Tsuda and colleagues systematically vary the size and charge of near-infrared fluorescent nanoparticles, administer them to rats and study whether the nanoparticles remain in the lungs, enter regional lymph nodes, are transported to the blood or are excreted through the kidneys. Particles larger than ~34 nm remain in the lungs, whereas noncationic particles smaller than ~34 nm can be rapidly transported to mediastinal lymph nodes. A fraction of ~6-nm zwitterionic nanoparticles are quickly excreted in urine. The results of this study could help in designing nanoparticles with desired biodistribution properties after inhalation. [Letters, p. 1300; News and Views, p. 1275] KA

Written by Kathy Aschheim, Michael Francisco, Peter Hare, Brady Huggett, Craig Mak & Lisa Melton

nature biotechnology volume 28 number 12 December 2010
Patent roundup
Identifying and isolating DNA without further manipulation does not constitute an invention and is not patentable, a US district court said in October. The court stated its position in response to a lawsuit involving Myriad Genetics’ (Salt Lake City, UT, USA) breast cancer genes BRCA1 and BRCA2. In a Commentary, Kenneth Chahine discusses a new framework for assessing the eligibility of DNA patents. [News, p. 1226; Commentary, p. 1251] LM

Patent owners who can claim a humanitarian use for their patents will be offered a fast-track re-examination voucher under a system proposed by the US Patent and Trademark Office. [News, p. 1226] LM

‘Blocking patents’ have killed many a startup. Here is advice on how to deal with them. [Building a Business, p. 1239] BH

A whole host of approaches fall under the rubric ‘therapeutic nanoparticles’. Burgess et al. outline the reasons why innovative pharmaceutical products based on therapeutic nanoparticles offer particular advantages in terms of intellectual property protection from generic competition. [Patent Article, p. 1267] MF

Recent patent applications in pharmacogenomics. [New patents, p. 1271] MF
Minicircles for all
Transgenes are often ferried into cells on plasmids, but an efficient way of producing minicircle DNA could make minicircles the method of choice for delivering exogenous genes to mammalian cells. Minicircles are circular expression vectors that lack the plasmid backbone. Long-term transgene expression from minicircles is much higher than from plasmids, but broad adoption of this technology has been hindered by a cumbersome production process. Kay and colleagues present an optimized protocol that yields purified minicircles in about the same time frame as that required for routine plasmid preparation. [Brief Communications, p. 1287] KA
Next month in
• Safe harbors in human iPS cells
• Genome-wide profiling of cytosine hydroxymethylation
• Experimental haplotype phasing
• Improving in vivo shRNA screens with fluorescent reporters
• Genetic modification of rodents with zinc finger nucleases
www.nature.com/naturebiotechnology
EDITORIAL OFFICE
[email protected] 75 Varick Street, Fl 9, New York, NY 10013-1917 Tel: (212) 726 9200, Fax: (212) 696 9635 Chief Editor: Andrew Marshall Senior Editors: Laura DeFrancesco (News & Features), Kathy Aschheim (Research), Peter Hare (Research), Michael Francisco (Resources and Special Projects) Business Editor: Brady Huggett Associate Business Editor: Victor Bethencourt News Editor: Lisa Melton Associate Editors: Markus Elsner (Research), Craig Mak (Research) Editor-at-Large: John Hodgson Contributing Editors: Mark Ratner, Chris Scott Contributing Writer: Jeffrey L. Fox Senior Copy Editor: Teresa Moogan Managing Production Editor: Ingrid McNamara Production Editor: Amanda Crawford Senior Illustrator: Katie Vicari Illustrator: Marina Corral Cover design: Erin DeWalt Senior Editorial Assistant: Ania Levinson
MANAGEMENT OFFICES NPG New York 75 Varick Street, Fl 9, New York, NY 10013-1917 Tel: (212) 726 9200, Fax: (212) 696 9006 Publisher: Melanie Brazil Executive Editor: Veronique Kiermer Chief Technology Officer: Howard Ratner Head of Nature Research & Reviews Marketing: Sara Girard Circulation Manager: Stacey Nelson Production Coordinator: Diane Temprano Head of Web Services: Anthony Barrera Senior Web Production Editor: Laura Goggin NPG London The Macmillan Building, 4 Crinan Street, London N1 9XW Tel: 44 207 833 4000, Fax: 44 207 843 4996 Managing Director: Steven Inchcoombe Publishing Director: Peter Collins Editor-in-Chief, Nature Publications: Philip Campbell Marketing Director: Della Sar Director of Web Publishing: Timo Hannay NPG Nature Asia-Pacific Chiyoda Building, 2-37 Ichigayatamachi, Shinjuku-ku, Tokyo 162-0843 Tel: 81 3 3267 8751, Fax: 81 3 3267 8746 Publishing Director — Asia-Pacific: David Swinbanks Associate Director: Antoine E. Bocquet Manager: Koichi Nakamura Operations Director: Hiroshi Minemura Marketing Manager: Masahiro Yamashita Asia-Pacific Sales Director: Kate Yoneyama Asia-Pacific Sales Manager: Ken Mikami DISPLAY ADVERTISING
[email protected] (US/Canada)
[email protected] (Europe)
[email protected] (Asia) Global Head of Advertising and Sponsorship: Dean Sanderson, Tel: (212) 726 9350, Fax: (212) 696 9482 Global Head of Display Advertising and Sponsorship: Andrew Douglas, Tel: 44 207 843 4975, Fax: 44 207 843 4996 Asia-Pacific Sales Director: Kate Yoneyama, Tel: 81 3 3267 8765, Fax: 81 3 3267 8746 Display Account Managers: New England: Sheila Reardon, Tel: (617) 399 4098, Fax: (617) 426 3717 New York/Mid-Atlantic/Southeast: Jim Breault, Tel: (212) 726 9334, Fax: (212) 696 9481 Midwest: Mike Rossi, Tel: (212) 726 9255, Fax: (212) 696 9481 West Coast: George Lui, Tel: (415) 781 3804, Fax: (415) 781 3805 Germany/Switzerland/Austria: Sabine Hugi-Fürst, Tel: 41 52761 3386, Fax: 41 52761 3419 UK/Ireland/Scandinavia/Spain/Portugal: Evelina Rubio-Hakansson, Tel: 44 207 014 4079, Fax: 44 207 843 4749 UK/Germany/Switzerland/Austria: Nancy Luksch, Tel: 44 207 843 4968, Fax: 44 207 843 4749 France/Belgium/The Netherlands/Luxembourg/Italy/Israel/Other Europe: Nicola Wright, Tel: 44 207 843 4959, Fax: 44 207 843 4749 Asia-Pacific Sales Manager: Ken Mikami, Tel: 81 3 3267 8765, Fax: 81 3 3267 8746 Greater China/Singapore: Gloria To, Tel: 852 2811 7191, Fax: 852 2811 0743 NATUREJOBS
[email protected] (US/Canada)
[email protected] (Europe)
[email protected] (Asia) US Sales Manager: Ken Finnegan, Tel: (212) 726 9248, Fax: (212) 696 9482 European Sales Manager: Dan Churchward, Tel: 44 207 843 4966, Fax: 44 207 843 4596 Asia-Pacific Sales & Business Development Manager: Yuki Fujiwara, Tel: 81 3 3267 8765, Fax: 81 3 3267 8752 SPONSORSHIP
[email protected] Global Head of Sponsorship: Gerard Preston, Tel: 44 207 843 4965, Fax: 44 207 843 4749 Business Development Executive: David Bagshaw, Tel: (212) 726 9215, Fax: (212) 696 9591 Business Development Executive: Graham Combe, Tel: 44 207 843 4914, Fax: 44 207 843 4749 Business Development Executive: Reya Silao, Tel: 44 207 843 4977, Fax: 44 207 843 4996 SITE LICENSE BUSINESS UNIT Americas: Tel: (888) 331 6288 Asia/Pacific: Tel: 81 3 3267 8751 Australia/New Zealand: Tel: 61 3 9825 1160 India: Tel: 91 124 2881054/55 ROW: Tel: 44 207 843 4759
[email protected] [email protected] [email protected] [email protected] [email protected]
CUSTOMER SERVICE www.nature.com/help Senior Global Customer Service Manager: Gerald Coppin For all print and online assistance, please visit www.nature.com/help Purchase subscriptions: Americas: Nature Biotechnology, Subscription Dept., 342 Broadway, PMB 301, New York, NY 10013-3910, USA. Tel: (866) 363 7860, Fax: (212) 334 0879 Europe/ROW: Nature Biotechnology, Subscription Dept., Macmillan Magazines Ltd., Brunel Road, Houndmills, Basingstoke RG21 6XS, United Kingdom. Tel: 44 1256 329 242, Fax: 44 1256 812 358 Asia-Pacific: Nature Biotechnology, NPG Nature Asia-Pacific, Chiyoda Building, 2-37 Ichigayatamachi, Shinjuku-ku, Tokyo 162-0843. Tel: 81 3 3267 8751, Fax: 81 3 3267 8746 India: Nature Biotechnology, NPG India, 3A, 4th Floor, DLF Corporate Park, Gurgaon 122002, India. Tel: 91 124 2881054/55, Tel/Fax: 91 124 2881052 REPRINTS
[email protected] Nature Biotechnology, Reprint Department, Nature Publishing Group, 75 Varick Street, Fl 9, New York, NY 10013-1917, USA. For commercial reprint orders of 600 or more, please contact: UK Reprints: Tel: 44 1256 302 923, Fax: 44 1256 321 531 US Reprints: Tel: (617) 494 4900, Fax: (617) 494 4960
volume 28 number 12 DECEMBER 2010
EDITORIAL
1221 Vive la différence?
Oligonucleotides generated on a microarray provide an abundant source of raw material for assembling long synthetic DNA molecules. Kosuri et al. and Matzas et al. reduce the error rate and improve the scalability of approaches that use microarray oligonucleotides to synthesize genes and other DNA elements (pp 1291 and 1295). Credit: Marina Corral, based on idea courtesy of Sriram Kosuri and Mark Matzas.

NEWS
1223 Will J&J turbocharge Crucell’s vaccine portfolio?
1224 Amylin’s diabetes shock
1225 Synthetic DNA firms embrace hazardous agents guidance but remain wary of automated ‘best-match’
1226 DNA not patentable
1226 USPTO’s humanitarian vouchers
1227 BARDA funds vaccine makers aiming to phase out eggs
1229 Ethanol blend hike promises jump start for cellulosic investment
1229 Singapore injects $12.5 billion
1230 Good ideas across borders
1230 Airlines ahead on algae
1232 News feature: Crossing the line
1236 News feature: New twists on proteasome inhibitors
BIOENTREPRENEUR
BUILDING A BUSINESS
1239 Around the block
Y Philip Zhang

OPINION AND COMMENT

CORRESPONDENCE
Midpoint for antibody-based proteomics initiative, p 1248
1242 Wrong fixes for gene patents
1243 Stem cell clinics in the news
1246 Tracking and assessing the rise of state-funded stem cell research
1248 Towards a knowledge-based Human Protein Atlas
Nature Biotechnology (ISSN 1087-0156) is published monthly by Nature Publishing Group, a trading name of Nature America Inc. located at 75 Varick Street, Fl 9, New York, NY 10013-1917. Periodicals postage paid at New York, NY and additional mailing post offices. Editorial Office: 75 Varick Street, Fl 9, New York, NY 10013-1917. Tel: (212) 726 9335, Fax: (212) 696 9753. Annual subscription rates: USA/Canada: US$250 (personal), US$3,520 (institution), US$4,050 (corporate institution). Canada add 5% GST #104911595RT001; Euro-zone: €202 (personal), €2,795 (institution), €3,488 (corporate institution); Rest of world (excluding China, Japan, Korea): £130 (personal), £1,806 (institution), £2,250 (corporate institution); Japan: Contact NPG Nature Asia-Pacific, Chiyoda Building, 2-37 Ichigayatamachi, Shinjuku-ku, Tokyo 162-0843. Tel: 81 (03) 3267 8751, Fax: 81 (03) 3267 8746. POSTMASTER: Send address changes to Nature Biotechnology, Subscriptions Department, 342 Broadway, PMB 301, New York, NY 10013-3910. Authorization to photocopy material for internal or personal use, or internal or personal use of specific clients, is granted by Nature Publishing Group to libraries and others registered with the Copyright Clearance Center (CCC) Transactional Reporting Service, provided the relevant copyright fee is paid direct to CCC, 222 Rosewood Drive, Danvers, MA 01923, USA. Identification code for Nature Biotechnology: 1087-0156/04. Back issues: US$45, Canada add 7% for GST. CPC PUB AGREEMENT #40032744. Printed by Publishers Press, Inc., Lebanon Junction, KY, USA. Copyright © 2010 Nature America, Inc. All rights reserved. Printed in USA.
COMMENTARY
1251 Anchoring gene patent eligibility to its constitutional mooring
Kenneth G Chahine
1256 The environmental impact subterfuge
Gregory Conko & Henry I Miller
FEATURE
1259 To selectivity and beyond
George S Mack
Curbing pink bollworm resistance to Bt, p 1273
PATENTS
1267 On firm ground: IP protection of therapeutic nanoparticles
Paul Burgess, Peter Barton Hutt, Omid C Farokhzad, Robert Langer, Scott Minick & Stephen Zale
1271 Recent patent applications in pharmacogenomics
NEWS AND VIEWS
1272 Megabases for kilodollars
Mikkel Algire, Radha Krishnakumar & Chuck Merryman
see also pp 1291 and 1295
1273 No refuge for insect pests
Kongming Wu
see also p 1304
1275 Nanoparticles in the lung
Wolfgang G Kreyling, Stephanie Hirn & Carsten Schleh
see also p 1300
1277 Research highlights
Simulating metabolic interactions between cells, p 1279
COMPUTATIONAL BIOLOGY
ANALYSIS
1279 Large-scale in silico modeling of metabolic interactions between cell types in the human brain
Nathan E Lewis, Gunnar Schramm, Aarash Bordbar, Jan Schellenberger, Michael P Andersen, Jeffrey K Cheng, Nilam Patel, Alex Yee, Randall A Lewis, Roland Eils, Rainer König & Bernhard Ø Palsson
RESEARCH
BRIEF COMMUNICATIONS
1287 A robust system for production of minicircle DNA vectors
Mark A Kay, Cheng-Yi He & Zhi-Ying Chen
LETTERS
Combining DNA reading and writing, p 1291
1291 High-fidelity gene synthesis by retrieval of sequence-verified DNA identified using high-throughput pyrosequencing
Mark Matzas, Peer F Stähler, Nathalie Kefer, Nicole Siebelt, Valesca Boisguérin, Jack T Leonard, Andreas Keller, Cord F Stähler, Pamela Häberle, Baback Gharizadeh, Farbod Babrzadeh & George M Church
see also p 1272
1295 Scalable gene synthesis by selective amplification of DNA pools from high-fidelity microchips
Sriram Kosuri, Nikolai Eroshenko, Emily M LeProust, Michael Super, Jeffrey Way, Jin Billy Li & George M Church
see also p 1272
1300 Rapid translocation of nanoparticles from the lung airspaces to the body
Hak Soo Choi, Yoshitomo Ashitate, Jeong Heon Lee, Soon Hee Kim, Aya Matsui, Numpon Insin, Moungi G Bawendi, Manuela Semmler-Behnke, John V Frangioni & Akira Tsuda
see also p 1275
Scalable DNA synthesis, p 1295
1304 Suppressing resistance to Bt cotton with sterile insect releases
Bruce E Tabashnik, Mark S Sisterson, Peter C Ellsworth, Timothy J Dennehy, Larry Antilla, Leighton Liesner, Mike Whitlow, Robert T Staten, Jeffrey A Fabrick, Gopalan C Unnithan, Alex J Yelich, Christa Ellers-Kirk, Virginia S Harpold, Xianchun Li & Yves Carrière
see also p 1273
1308 Errata and corrigenda

CAREERS AND RECRUITMENT
1309 Catching the wave in China
Connie Johnson Hambley
1312 People
ADVERTISEMENT
Nanoparticle behavior in the lungs, p 1300
Modern evolution of the ideal science park
Since their inception, science parks have become much more than shelter for technical industries. Science parks are now a feature of the global landscape and their numbers are still growing in the 21st century. The report takes a look at the policies and dynamics that help to shape modern science parks, especially those policies that facilitate growth of the biotech industry. The report follows Errata and Corrigenda after page 1308.
Editorial
Vive la différence? The Obama administration’s $1 billion tax program at the very least signals a continued commitment to innovative biotech. The same cannot be said of plans afoot by the French government.
At the beginning of November, the US Therapeutic Discovery Project Program (TDPP) handed out a billion dollars in tax credits/grants to small- to medium-sized (SME) companies in the life sciences. The program was overseen by the federal tax authorities and was a direct result of input by BIO, the Biotechnology Industry Organization. Although the amounts handed out to each company were rather small, the Obama administration should be congratulated on providing decisive support to the US biotech sector. But perhaps most important, this latest initiative continues decades of fiscal policy under different US administrations that has consistently supported the entrepreneurial life science sector. There is little doubt that the US biotech sector (and that in most other countries) is in need of a shot in the arm. BIO estimates that since 2007, more than 100 public biotech companies have closed their doors and countless more private companies have ceased operations. The percentage of companies with less than one year’s cash remains at 25% (down only slightly from 29% in 2007). TDPP was a one-off program, approved as part of the 2010 Patient Protection and Affordable Care Act, designed specifically for companies with fewer than 250 employees. The credits cover up to 50% of eligible R&D expenses in tax years 2009 or 2010. Applications were accepted from companies for one month from June 21 and the submissions reviewed by the US National Institutes of Health (NIH). The program was a roaring success—almost too successful. The US Internal Revenue Service had expected ~1,500 applications; it was deluged with over 5,600. To cope with this onslaught, a decision was made to turn the $1 billion cake into 4,606 crumbs, each worth a maximum of $244,479. Although some companies applied for and were granted multiple project awards (28% of total), 2,923 organizations in 47 states got a piece of the action.
Project reviewers for TDPP were asked to assess, among a very few other criteria, whether a project was ‘likely’ to result in new therapies, reduce long-term healthcare costs or contribute to the goal of curing cancer. The fact that 4,000 projects got the thumbs-up on this criterion at the very least severely stretches the meaning of ‘likely’. The best that can realistically be claimed is that it is likely that a handful of the projects will succeed, and that the chances of success in every project will be marginally enhanced by an extra couple of hundred thousand dollars. That said, by casting a wide net, the US authorities have ensured that their biotech sector retains more shots on goal—a wise investment in the future. The same cannot be said for policy proposals under discussion in France. Back in September, the government of Nicolas Sarkozy introduced its 2011 finance bill in the French parliament. Amongst the cost-cutting measures and edicts designed to cut back on government spending was a small section, Article 78, which will result in an estimated annual saving to the French exchequer of $57 million. Article 78 alters radically the conditions attached to the Jeune Entreprise Innovante (JEI; Young Innovative Company) status in France. Currently, any research-based independent SME under 8 years old is, in effect, excused from paying social taxes for any of its researcher employees. Because the social taxes represent 30–40% of salary costs, this represents a significant saving and an incentive to life science venture investment in the country (Nat. Biotechnol. 23, 1187, 2005). Article 78 proposes to cap the savings at €103,680 per year per company and to tail off the benefit between years 4 and 8 of a company’s lifetime. A vote in the French Senate on the 2011 Finance Bill is expected early this month, with implementation in January. Meanwhile, the French industry association, France Biotech, is furiously lobbying ministers, senators and senior figures in the civil administration in a desperate bid to axe Article 78. They fear, with justification, that it will discourage entrepreneurs and investors, and worse, that it will be a disincentive to innovate. What is most disingenuous about the French policy shift, though, is that the government is saying the benefit will actually provide SMEs with a ‘softer landing’ before returning to full taxation. In other words, they are claiming to understand the long-term effects of this volte-face in policy. And when it comes to stargazing, the French government doesn’t have a good track record in biotech. Twenty years ago, the ‘BioAvenir’ initiative invested a billion francs in academic life science projects, but elected to make one (French) company—Rhone-Poulenc—the sole beneficiary of any commercialization.
We don’t know what happened to the hundreds of BioAvenir projects, but Rhone-Poulenc didn’t turn into the biotech driver the government envisaged—it is now known as Sanofi-aventis. As the BioAvenir experience illustrates, if there is one thing that can be guaranteed about the future, it is that it will not look like the present. That is something the US government seems to have grasped with TDPP, arguably to a fault: its $1 billion has been flung into any company that is both innovative and not established, into any company that might be the future simply by virtue of its not being a big part of the present. The French government, conversely, still clings to the idea that it can extrapolate the future from the present: each year, it gives an estimated €1 billion in R&D tax credits to Sanofi-aventis. There are worse places to spend €1 billion, no doubt, but providing such largesse to one of the world’s biggest pharmaceutical companies while begrudging a fraction ($57 million) of it to innovative biotech doesn’t look like a government investing in the future.
NEWS

in this section
Screening for malicious use of synthetic DNA, p1225
BARDA funds boost creative vaccine makers, p1227
Ethanol blend for cars rises after 35 years, p1229
The emergence of contamination at Crucell’s production facility in Shingal, Yongin City, Korea, came at an awkward time. The vaccine maker, based in Leiden, The Netherlands, disclosed the news in late October as Johnson & Johnson (J&J) prepared to issue an update on progress of its $2.3 billion bid for outright ownership of the company. Nevertheless, J&J, of New Brunswick, New Jersey, intends to proceed according to plan. The large pharma’s tender offer for Crucell’s shares is due to be considered at a meeting of Crucell shareholders in Amsterdam on 10 December. The implications—if any—that the manufacturing disruption may have for the transaction are not clear at this point. “I don’t expect this to be a deal breaker, but there’s no guarantee it won’t be a deal adjuster,” says Jan Van den Bossche, analyst at Petercam in Brussels. “It will all depend on what’s going on out there.” Although shipments of the two products affected by the quality failures—the pediatric liquid pentavalent combination vaccine Quinvaxem (diphtheria (Corynebacterium diphtheriae), toxoid/tetanus (Clostridium tetani), toxoid/whole cell Bordetella pertussis (DTwP); Haemophilus influenzae type B (Hib) conjugate vaccine; and recombinant hepatitis B virus surface antigen (HBsAg) vaccine; trade name, Hepavax-Gene)—were, as Nature Biotechnology went to press, expected to resume by late November, full production will not be restored until February. Quinvaxem, which protects against five pediatric infections, is the company’s most important product, having secured tenders worth $910 million since the end of 2006, including a $110 million order from the New York–based United Nations Children’s Fund in May. Crucell has made provision for a €22.8 million ($31.8 million) charge on the inventory lost through the disruption. 
Even before the problems emerged, the company had been in the process of transferring production of the two vaccines to a new €50 million ($69.8 million) facility in Incheon, South Korea, which is scheduled to become fully operational next year. Crucell’s problems echo those of Shantha Biotechnics, of Hyderabad, India. Not long after Lyon-based Sanofi Pasteur
Crucell
© 2010 Nature America, Inc. All rights reserved.
Will J&J turbocharge Crucell’s vaccine portfolio?
The Dutch vaccine manufacturer Crucell (headquarters in Leiden pictured above) will run as an autonomous unit following J&J’s expected acquisition.
gained an 80% stake in Shantha, the latter firm’s Shan5 (DTwP-HBsAg-Hib) vaccine, a direct competitor to Quinvaxem, lost its accreditation on the World Health Organization’s (Geneva) list of recommended vaccines because of the appearance of white sediment on vaccine vials. “They’ve had massive manufacturing setbacks,” says Peter Welford, analyst at Jefferies International, in London. “Crucell [has] had the least problems.” Others competing in this product segment include Panacea Biotech, of New Delhi, India, and the GSK Biologicals arm of London-based GlaxoSmithKline. J&J’s interest in the Dutch company came to the fore in September last year, when it paid around €302 ($407) million, or €20.6 ($27.8) per share, to amass an 18% stake in Crucell. The healthcare giant’s current $2.3 billion bid for the rest of the company it does not already own is well in excess of a reported $1.35 billion bid that Wyeth of Madison, New Jersey, was preparing early last year. That deal was derailed by New York–based Pfizer’s $68 billion acquisition of Wyeth. Recent contamination problems are unlikely to affect J&J’s offer, although Welford says, this will eliminate the possibility that dissenting shareholders may hold out for a higher price than the €24.75 ($35.18) per share that J&J has tabled.
nature biotechnology volume 28 number 12 december 2010
Crucell’s history can be traced back to Leidenbased IntroGene, which was formed in 1993 to commercialize the PER.C6 human cell culture technology, which can produce as much as 27 g/l of IgGs in bioreactor culture. The Crucell name came into being in 2000, after a merger with Utrecht Biotechnology Systems in The Netherlands, an antibody discovery firm that had developed a ‘subtractive’ phage display technology called MAbstract. In that platform, a phage antibody library is admixed with a heterogeneous cell mixture, where the particular cell of interest is labeled with a fluorescent tag and sorted by flow cytometry to identify binding phage antibodies, dispensing with the need for repeatedly constructing new libraries. These technologies, as well as an AdVac adenoviral vector platform, underpin the company’s development pipeline. However, its commercial product portfolio is based on recent acquisitions of Stockholm-based SBL Vaccin, and, especially, of Bern, Switzerland–based Berna Biotech. Berna had previously acquired Rhein Biotech, of Maastricht, The Netherlands, but never managed to achieve its stated growth ambitions (Table 1). Crucell, in contrast, has proven adept at winning sales, particularly in emerging markets. “They’ve done a marvelous job of turning around Berna Biotech,” Van den Bossche says. J&J plans to run Crucell as an autonomous 1223
NEWS
in brief
© 2010 Nature America, Inc. All rights reserved.
iStockphoto/Andrius Gruzdaitis
Table 1 Vaccine mergers & acquisitions
Amylin’s diabetes shock
In a major setback for Amylin, the US Food and Drug Fears over long QT. Administration called for further tests on Amylin’s Bydureon, a onceweekly drug for type 2 diabetes. On October 19, the San Diego-based biotech Amylin, and partner Eli Lilly of Indianapolis, announced that the agency had issued a Complete Response Letter calling for a study known as thorough QT to further investigate the potential effect of the drug’s cardiac effects. Bydureon is a longacting formulation of Amylin’s approved drug Byetta (exenatide), an analog of the insulinboosting glucagon-like peptide 1 (GLP-1) (Nat. Biotechnol. 28, 109, 2010). Both contain exenatide, but Bydureon contains a higher concentration of the active agent, delivered into the blood by controlled release. The first-in-class synthetic gut hormone drug Byetta generated $677 million in 2009 in the US alone. But Bydureon’s formulation—once weekly injections compared with twice a day—is predicted to gain competitive edge over its predecessor if approved. Simos Simeonidis, managing director and senior biotech analyst at Rodman & Renshaw in New York, says that Bydureon had “blockbuster” potential and could have brought in at least $1 billion per year for Amylin after the first few years. The FDA’s concerns over supratherapeutic concentrations of exenatide may have been triggered by observations in a Byetta study, in which a few patients with blood levels of 500 pg/ml had lengthened QT intervals. In patients with kidney problems, once-weekly Bydureon can reach five times the normal 200 pg/ml seen with Byetta. Given the delay in approving Bydureon, alternate treatments for type 2 diabetes like Victoza (liraglutide) from Novo Nordisk, of Princeton, New Jersey, and Amylin’s own Byetta, are now expected to flourish. London-based GlaxoSmithKline’s investigational Syncria(R) (albiglutide), also a GLP-1 analog, is currently undergoing phase 3 clinical trials. 
“They were considerably behind,” says Yaron Werber, senior biotech analyst at Citigroup in New York. “But now I think they have a chance to close a big part of the gap.” Nidhi Subbaraman
in their words “At this juncture, I don’t know that individuals will have all that much to gain from publishing their data, but I think that Genomes Unzipped will help to prove that there’s not all that much to lose, either.” Linda Avey, cofounder of 23andMe, comments on a move by 11 British-based scientists and a US lawyer to make their own genetic test results publicly available in the hope of encouraging others to share their genome information. (The Times, 11 October 2010)
Table 1 Vaccine mergers & acquisitions
Acquirer           Target (location)                    Value            Year
Berna Biotech      Rhein Biotech                        $257 million     2002
Crucell            Berna Biotech                        $449 million     2005
Crucell            SBL Vaccin                           $52 million      2006
GlaxoSmithKline    ID Biomedical (Vancouver, Canada)    $1.4 billion     2005
GlaxoSmithKline    Corixa (Seattle)                     $300 million     2005
Intercell          Iomai (Gaithersburg, Maryland)       $189 million     2008
Novartis           Chiron (Emeryville, California)      $5.4 billion(a)  2005
Sanofi Pasteur     Acambis                              $549 million     2008
Sanofi Pasteur     Shantha Biotechnics                  $781 million(b)  2009
Sanofi Pasteur     VaxDesign (Orlando, Florida)         $55 million(c)   2010
(a) The transaction involved the remaining 58% of Chiron’s stock not already held by Novartis. (b) Total valuation of the company implied by the terms of the deal. Sanofi Pasteur acquired an 80% stake. (c) The deal includes $5 million in potential milestone payments.
unit, just as it has with several earlier acquisitions, such as antiviral drug developer Tibotec of Antwerp, Belgium, and antibody developer Centocor of Horsham, Pennsylvania. J&J’s imminent acquisition of Crucell is further evidence that the vaccines area, although small in the context of overall pharma sales, is firmly back on the industry’s agenda after a couple of decades during which it had fallen out of favor. Other acquisitions in which vaccines have featured prominently (though not exclusively) are Pfizer’s takeover of Wyeth and London-based AstraZeneca’s $15.2 billion acquisition of Gaithersburg, Maryland–based MedImmune. The vaccines market’s steady growth is punctuated by irregular spurts caused by significant new product introductions. For instance, Wyeth’s Prevnar (pneumococcal 7-valent conjugate) and Gardasil (human papillomavirus quadrivalent (types 6, 11, 16 and 18) recombinant vaccine) from Merck of Whitehouse Station, New Jersey, quickly added substantial chunks of revenue to the industry total. The complexities of manufacturing, combined with the economies of large-scale production, have made the sector strongly oligopolistic, dominated by a small number of firms. The top five players, Sanofi Pasteur, GSK Biologicals, Merck, Pfizer and Novartis, control around 85% of the market, with around $18 billion of sales in 2009, according to market analysts VacZine Analytics, of Bishop’s Stortford, UK. “The industry last year grew by about 9, 10% over the previous year,” says VacZine Analytics director John Savopoulos. By his reckoning, Crucell’s 2009 vaccine sales of €304 million ($424.3 million) would rank the company in ninth place, behind CSL, of Parkville, Australia, and London-based AstraZeneca’s MedImmune unit. The H1N1 influenza pandemic had the biggest impact on sales in 2009, bringing in around $3.5 billion in additional revenue.
In the first three quarters of 2010, the top five companies posted combined sales of $16.9 billion, excluding the contribution of Sanofi Pasteur MSD, a European joint venture between Sanofi Pasteur and Merck. Nevertheless, the industry’s pipeline remains widely distributed. “Seventy companies are now targeting 40-plus pathogens,” Savopoulos says. Some 160 vaccines are in clinical development, around 100 of which are in phase 1 trials. Crucell has the industry’s fifth-largest pipeline, he notes. It has clinical-stage programs in yellow fever, tuberculosis (TB), malaria, HIV, Ebola virus and Marburg virus, as well as preclinical efforts in seasonal influenza, respiratory syncytial virus and the development of a universal flu vaccine. Only GSK Biologicals, Sanofi Pasteur, Novartis and Merck have more vaccine programs in the clinic. “They look reasonably well positioned. The problem is when you probability-adjust their pipeline—malaria, TB—they’re tough areas to be in,” Savopoulos says. A preclinical antibody development program, which targets both seasonal and pandemic flu strains and has attracted $40.7 million in US National Institutes of Health funding (with potentially $28.4 million more to come), was the main focus of J&J’s 2009 alliance with Crucell. “That technology, if it can be harnessed for an active vaccine, that’s a quantum leap above the existing flu market,” Savopoulos points out. Jan Van den Bossche identifies a combination monoclonal antibody program for treating individuals exposed to rabies infection as particularly promising. “It’s a clear, understandable program,” he says. The program, in phase 2 trials, aims to replace an existing equine immunoglobulin therapy. Although J&J will seek further growth from Crucell’s existing product portfolio, its real measure of success will be the extent to which it can convert pipeline promise into commercial reality. Sanofi Pasteur ‘turbocharged’ Acambis’s ChimeriVax platform when it acquired the Cambridge, UK–based firm, says Savopoulos.
“The question is, can J&J do the same thing?” Cormac Sheridan, Dublin
volume 28 number 12 december 2010 nature biotechnology
Synthetic DNA firms embrace hazardous agents guidance but remain wary of automated ‘best-match’

The select agents list omits many known human pathogens, such as the SARS coronavirus.

After more than four years of public and private discussion and review, the US government has officially issued its Screening Framework Guidance for Providers of Synthetic Double-Stranded DNA (http://www.phe.gov/Preparedness/legal/guidance/syndna/Documents/syndna-guidance.pdf), a set of voluntary guidelines intended to help gene synthesis companies intercept unauthorized purchases of genetic components from human or agricultural pathogens. Many are relieved that standardized guidelines have finally been established, enabling the industry to harmonize practices and provide reassurance to the large corporate clients that represent its bread and butter. “Overall it’s not a bad framework, and I think it’s been designed with a lot of expertise,” says Markus Fischer, director and cofounder of Entelechon in Regensburg, Germany, a gene synthesis provider and member of the International Association Synthetic Biology (IASB), one of two major industry groups representing gene synthesis companies. In the absence of clear government guidance, the IASB and its counterpart, the International Gene Synthesis Consortium (IGSC), had each developed their own ‘best practices’ for screening both DNA orders and the customers that place them. To draw up its official guideline, the US Department of Health and Human Services (HHS), in Washington, DC, received comments from 22 organizations and individuals after publishing its draft Guidance in November 2009 (Table 1). The final version—released in October—includes only a few notable changes, such as the elimination of a size cut-off for screening decisions on double-stranded segments. Although the Guidance structurally resembles preexisting protocols, critics such as Stephen Maurer of the University of California at Berkeley are concerned that its recommendations are weaker than what is needed and may encourage companies to cut corners in the future.
“Industry had embraced a higher standard, and now the government is going to lead us to a lower standard,” he says. Chief among his concerns is the proposed mechanism for screening sequences. The government proposes a ‘best match’ strategy, in which orders are compared against GenBank in 200-bp segments, based on both the nucleotide sequence and all six possible peptide translations. If the top ‘hit’ is from a pathogen on the government’s list of select agents and toxins (http://www.selectagents.gov/Select%20Agents%20and%20Toxins%20List.html) or, for international orders, the ‘Commerce Control List’ (http://www.gpo.gov/bis/ear/pdf/ccl1.pdf), the order should be considered a ‘sequence of concern’ for further expert analysis, in conjunction with a careful assessment of the ordering customer’s credentials. Maurer suggests that by not expressly calling for human review of database matches—regardless of whether or not they are on the select agents list—this strategy is inherently less effective than the ‘top homology’ method already in use at several companies, including Entelechon, in which all GenBank results are manually assessed. “We mandate that one of our employees reviews the complete list of hits, and not just the ones that have been automatically flagged,” says Fischer. “A fully automated screening system leaves significant biosecurity questions unanswered.” According to Theresa Lawrence, a senior science advisor with HHS, top homology was rejected in the interest of applying a consistent standard for distinguishing potential threats based on analysis of an established data source. “There was concern with the top homology approach that we would have to designate an arbitrary threshold,” she says, “and this approach needs human screeners, which can represent an inconsistent mechanism from provider to provider.” Although GenBank is a rich resource for genetic data and therefore a powerful foundation for such screens, it is nevertheless a product of community curation, and potential ‘sequences of concern’ may be inconsistently designated. “GenBank is just a repository,” says Sean Eddy, a computational biologist at the Howard Hughes Medical Institute’s Janelia Farm Research Campus in Loudoun County, Virginia.
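The ‘best match’ logic described above can be sketched in a few lines of code. This is an illustrative toy, not the government’s or any vendor’s actual screening tool: the `top_hit` search function is a caller-supplied stand-in for a BLAST-style database lookup, and the window handling is a simplifying assumption.

```python
# Sketch of the guidance's 'best match' screen: each 200-bp window of an
# order is queried as DNA and in all six translated reading frames; if the
# top database hit for any query comes from a select agent, the window is
# flagged for expert review. `top_hit` stands in for a real sequence search.

BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = dict(zip((a + b + c for a in BASES for b in BASES for c in BASES), AMINO))

def revcomp(seq):
    """Reverse complement of a DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def translate(seq):
    """Translate DNA to protein using the standard genetic code."""
    return "".join(CODON_TABLE.get(seq[i:i + 3], "X")
                   for i in range(0, len(seq) - 2, 3))

def six_frame(seq):
    """All six peptide translations: three forward, three reverse frames."""
    rc = revcomp(seq)
    return [translate(seq[f:]) for f in range(3)] + \
           [translate(rc[f:]) for f in range(3)]

def screen_order(order, top_hit, select_agents, window=200):
    """Return start positions of windows whose best hit is a select agent.

    top_hit(query) -> source organism of the highest-scoring match.
    """
    flagged = []
    for start in range(0, max(len(order) - window, 0) + 1, window):
        chunk = order[start:start + window]
        queries = [chunk] + six_frame(chunk)   # nucleotide + 6 peptide frames
        if any(top_hit(q) in select_agents for q in queries):
            flagged.append(start)              # escalate for expert analysis
    return flagged
```

The ‘top homology’ alternative favored by Entelechon would differ at the last step: instead of automatically comparing only the single best hit against a watch list, the full ranked hit list for every query would be handed to a human reviewer.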
DNA not patentable In a spine-chilling announcement, the US Department of Justice in October said unmodified human DNA should not be eligible for patent protection. The department’s stance conflicts with the body of case law on the matter and with a longstanding position held by the US Patent and Trademark Office, which has issued more than 10,000 such patents. The Justice Department announced its position in response to a lawsuit involving patents on the breast cancer genes BRCA1 and BRCA2. A US district court in March declared the patents invalid, saying that the genes are products of nature rather than human-made inventions. Patent holders the University of Utah and Myriad Genetics, based in Salt Lake City, appealed in June. In an amicus brief filed with the Federal Circuit Court of Appeals, the Justice Department agreed that identifying and isolating DNA without further manipulation is not an invention and is therefore not patent eligible. How the agency’s declaration will influence the judges and the patent office worries biotech companies. But the patent office isn’t easily swayed, says Thomas Kowalski, an attorney with Vedder Price in New York. “The patent office is not going to change what it’s doing in view of what the Department of Justice says,” he says. Besides, international agreements between the American, European and Japanese patent offices, known as the Trilateral Co-operation, have concluded that unmodified DNA is patentable. Emily Waltz
USPTO’s do-good vouchers Patent owners may soon be able to cut the time needed to have a patent reexamined by up to two-thirds if they can demonstrate a humanitarian use. The US Patent & Trademark Office (USPTO) is proposing a system that would offer fast-track reexamination vouchers as an incentive to stimulate “creation or licensing that addresses humanitarian needs.” Because patents under reexamination are often the most valuable commercially, a fast-track procedure would let patent owners “more readily and less expensively affirm the validity of their patents,” according to the Federal Register notice. The USPTO currently takes 19 to 20 months for such reexaminations, whereas the expedited review promises a six-month turnaround. The system is modeled on the US Food & Drug Administration’s priority review vouchers given to entities that develop drugs to treat neglected tropical diseases. In this case, patent holders who receive the fast-track reexamination voucher could use it on any other patent they own or transfer it to the open market. Although the intent is worthy, there are too many unanswered questions, worries Thomas Kowalski of Vedder Price. “What will the USPTO do to ensure that those in the developing world as well as the poor in the developed world can gain access to the technology? Also, the voucher should be tied specifically to the technology with the humanitarian use instead of being independent and transferable.” Michael Francisco
Table 1 Timeline of events leading up to the synthetic DNA guidance
December 2006: The US government’s National Science Advisory Board for Biosecurity issues a report recommending the establishment of “uniform and standardized screening practices among providers of synthetic DNA.”
June 2007: The US government convenes an interagency working group on synthetic nucleic acid screening.
November 3, 2009: Companies of the International Association Synthetic Biology (Entelechon, ATG:biosynthetics, Biomax, febit and Sloning Biotechnology) present their “Code of Conduct” for the screening of gene orders and customers.
November 19, 2009: Companies of the International Gene Synthesis Consortium (Blue Heron Biotechnology, DNA2.0, GENEART, GenScript and Integrated DNA Technologies) release a similar set of ‘best practices’ guidelines, the “Harmonized Screening Protocol.”
November 29, 2009: The US Department of Health and Human Services releases a draft version of its Screening Framework Guidance for Providers of Synthetic Double-Stranded DNA for public comment.
January 11, 2010: The American Association for the Advancement of Science (AAAS) hosts a meeting for policy specialists and scientists from academia and industry on “Minimizing the Risks of Synthetic DNA: Scientists’ Views on the U.S. Government’s Guidance on Synthetic Genomics.”
January 26, 2010: Public comment period ends on the draft guidance.
October 13, 2010: HHS publishes its final version of the Screening Framework Guidance.
He adds, “The annotation is as provided by the person that deposited the sequence.” Screening effectiveness could also be constrained by biases in the database contents, according to James Diggans, a researcher at MITRE, a not-for-profit national technology resource that focuses on security issues, located in Bedford, Massachusetts, and McLean, Virginia. “There are far more harmless sequences in these databases than there are sequences that could be used to harm human health.” For other scientists, the reliance on the select agents list is also problematic. The 82 items currently listed represent known risks to human, plant or animal health and are unambiguously regulated by federal law. But many known human pathogens, such as the severe acute respiratory syndrome (SARS) virus, are omitted, and others worry about the problems that could be posed by yet-unknown sequence variants. “If you synthesize a genome without creating the actual organism it encodes—and where now you aren’t even limited to the variability found in nature—how do you taxonomically classify that genome sequence?” says Eddy. Eddy and other scientists recently partnered with the US National Research Council in an effort to bring some clarity to the characterization of high-risk genes. The resulting report, Sequence-Based Classification of Select Agents: A Brighter Line (http://www.nap.edu/catalog/12970.html), concludes that although it is presently impossible to reliably predict gene function from sequence alone, it should nevertheless be within reach to develop mechanisms that help categorize sequences as belonging to predefined ‘hazardous’ or ‘safe’ classes of genes, an effort that could greatly improve the future efficiency of synthetic gene order screening. Several parallel efforts are also underway to develop more sophisticated and comprehensive pathogen databases. Fischer and Maurer are
collaborating on the Virulence Factor Information Repository (VIREP), a repository for annotated information about known virulence genes, based at UC Berkeley. The IGSC has also stated its intention to develop an extensive regulated-pathogens database, which could offer a broadly useful community resource. However, both groups are waiting on government support to help move these projects forward. For now, the member companies of the IGSC, which are predominantly based in the US, are moving to adapt their standards to comply with the HHS recommendations. However, the guidance also invites companies to apply their own “equivalent or superior” screening standards, and several companies indicate that they will continue to err on the side of caution in their screening procedures. “If we get a gene in, we screen it,” says Robert Dawson, director of bioinformatics at Coralville, Iowa–based Integrated DNA Technologies. “There’s never a case where we would have a gene go right into production without a human being having looked at both the sequence and the prospective customer.” HHS has also made it clear that these are minimum screening recommendations and not the final word, and discussions are ongoing. Given the early stage of the field, when the risk from synthetic biology is still seen as relatively low—to date, no IGSC member company reports having received an order for a ‘sequence of concern’ that also came from a dubious customer—some hope that there will be sufficient opportunity for these guidelines to grow into a more effective monitoring strategy. “It’s a line in the sand drawn by the US government that now serves as something to be improved over time,” says Diggans. “All of these things make a direct contribution to maintaining near-term biosecurity, but it will need to evolve quickly—the technology is moving ever faster.” Michael Eisenstein, Philadelphia
volume 28 number 12 december 2010 nature biotechnology
NEWS
in brief
© 2010 Nature America, Inc. All rights reserved.
DNA not patentable In a spine-chilling announcement, the US Department of Justice in October said unmodified human DNA should not be eligible for patent. The department’s stance conflicts with the body of case law on the matter and a longstanding position held by the US Patent and Trademark Office, which has issued more than 10,000 of these patents. The Justice Department announced its position in response to a lawsuit involving patents on breast cancer genes BRCA1 and BRCA2. A US district court in March declared the patents invalid, saying that the genes are products of nature rather than human-made inventions. Patent holders University of Utah and Myriad Genetics, based in Salt Lake City, appealed in June. In an amicus brief filed with the Federal Circuit Court of Appeals the Justice Department agreed that identifying and isolating DNA without further manipulation is not an invention, or patent eligible. How the agency’s declaration will influence justices and the patent office worries biotech companies. But the patent office isn’t easily swayed, says Thomas Kowalski, an attorney with Vedder Price in New York. “The patent office is not going to change what it’s doing in view of what the Department of Justice says,” he says. Besides, international agreements between the American, European and Japanese patent offices, known as Trilateral Co-operation, have concluded that unmodified DNA is patentable. Emily Waltz
USPTO’s do-good vouchers Patent owners may soon be able to cut the time needed to have a patent reexamined by up to two-thirds if they can demonstrate a humanitarian use. The US Patent & Trademark Office (USPTO) is proposing a system that would offer fast-track reexamination vouchers as an incentive to stimulate “creation or licensing that addresses humanitarian needs.” Because patents under reexamination are often the most valuable commercially, a fast-track procedure would let patent owners “more readily and less expensively affirm the validity of their patents,” according to the Federal Register notice. The USPTO currently takes 19 to 20 months for such reexaminations, whereas the expedited review promises a six-month turnaround. The system is modeled on the US Food & Drug Administration’s priority review vouchers given to entities that develop drugs to treat neglected tropical diseases. In this case, patent holders who receive the fast-track reexamination voucher could use it on any other patent they own or transfer it to the open market. Although the intent is worthy, there are too many unanswered questions, worries Thomas Kowalski of Vedder Price. “What will the USPTO do to ensure that those in the developing world as well as the poor in the developed world can gain access to the technology? Also, the voucher should be tied specifically to the technology with the humanitarian use instead of being independent and transferable.” Michael Francisco
1226
Table 1 Timeline of events leading up to the synthetic DNA guidance December 2006
The US government’s National Science Advisory Board for Biosecurity issues a report recommending the establishment of “uniform and standardized screening practices among providers of synthetic DNA.”
June 2007
The US Government convenes an interagency working group on synthetic nucleic acid screening.
November 3, 2009
Companies of the International Association Synthetic Biology (Entelechon, ATG:biosynthetics, Biomax, febit and Sloning Biotechnology) present their “Code of Conduct” for the screening of gene orders and customers.
November 19, 2009 Companies of the International Gene Synthesis Consortium (Blue Heron Biotechnology, DNA2.0, GENEART, GenScript and Integrated DNA Technologies) release a similar set of ‘best practices’ guidelines, the “Harmonized Screening Protocol.” November 29, 2009 The US Department of Health and Human Services releases a draft version of its Screening Framework Guidance for Providers of Synthetic Double-Stranded DNA for public comment. January 11, 2010
The American Association for the Advancement of Science (AAAS) hosts a meeting for policy specialists and scientists from academia and industry on “Minimizing the Risks of Synthetic DNA: Scientists’ Views on the U.S. Government’s Guidance on Synthetic Genomics.”
January 26, 2010
Public comment period ends on the draft guidance.
October 13, 2010
HHS publishes its final version of the Screening Framework Guidance.
He adds, “The annotation is as provided by the person that deposited the sequence.” Screening effectiveness could also be constrained by biases in the database contents, according to James Diggans, a researcher at MITRE, a notfor-profit national technology resource that focuses on security issues, located in Bedford, Massachusetts and McLean, Virginia. “There are far more harmless sequences in these databases than there are sequences that could be used to harm human health.” For other scientists, the reliance on the select agents list is also problematic. Eighty-two items currently listed represent known risks to human, plant or animal health, and are unambiguously regulated by federal law. But many known human pathogens, such as severe acute respiratory syndrome virus, are omitted, and others worry about the problems that could be posed by the yet-unknown sequence variants. “If you synthesize a genome without creating the actual organism it encodes—and where now you aren’t even limited to the variability found in nature— how do you taxonomically classify that genome sequence?” says Eddy. Eddy and other scientists recently partnered with the US National Research Council in an effort to bring some clarity to the characterization of high-risk genes. The resulting report, Sequence-Based Classification of Select Agents: A Brighter Line (http://www.nap.edu/catalog/ 12970.html), concludes that although it is presently impossible to reliably predict gene function based on sequence, it should nevertheless be within reach to develop mechanisms that can help categorize sequences as belonging to predefined ‘hazardous’ or ‘safe’ classes of genes, an effort that could greatly improve the future efficiency of synthetic gene order screening. Several parallel efforts are also underway to develop more sophisticated and comprehensive pathogen databases. Fischer and Maurer are
collaborating on the Virulence Factor Information Repository (VIREP), a repository for annotated information about known virulence genes, based at UC Berkeley. The IGSC has also stated its intention to develop an extensive regulated-pathogens database, which could offer a broadly useful community resource. However, both groups are awaiting government support to move these projects forward.

For now, the member companies of the IGSC, which are predominantly based in the US, are moving to adapt their standards to comply with the HHS recommendations. However, the guidance also invites companies to apply their own “equivalent or superior” screening standards, and several companies indicate that they will continue to err on the side of caution in their screening procedures. “If we get a gene in, we screen it,” says Robert Dawson, director of bioinformatics at Coralville, Iowa–based Integrated DNA Technologies. “There’s never a case where we would have a gene go right into production without a human being having looked at both the sequence and the prospective customer.”

HHS has also made it clear that these are minimum screening recommendations and not the final word, and discussions are ongoing. Given the early stage of the field, when the risk from synthetic biology is still seen as relatively low—to date, no IGSC member company reports having received an order for a ‘sequence of concern’ that also came from a dubious customer—some hope that there will be sufficient opportunity for these guidelines to grow into a more effective monitoring strategy. “It’s a line in the sand drawn by the US government that now serves as something to be improved over time,” says Diggans. “All of these things make a direct contribution to maintaining near-term biosecurity, but it will need to evolve quickly—the technology is moving ever faster.” Michael Eisenstein, Philadelphia
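The order-screening approach discussed above, in which incoming synthesis orders are compared against databases of hazardous sequences, can be illustrated with a toy sketch. This is illustrative only: real screens use similarity searches (such as BLAST) against curated databases rather than the exact substring match below, and the database entry, window length and function name here are invented for this example.

```python
# Toy sketch of synthetic-gene order screening (illustrative only; the
# database entry, 20-nt window and exact-match rule are assumptions, not
# the IGSC or HHS protocol, which rely on similarity searches).

def screen_order(order_seq, concern_db, window=20):
    """Flag an order if any window-length stretch of it appears verbatim
    in a database entry. Returns a list of (position, entry_name) hits."""
    hits = []
    for name, concern_seq in concern_db.items():
        for i in range(len(order_seq) - window + 1):
            if order_seq[i:i + window] in concern_seq:
                hits.append((i, name))
                break  # one hit per database entry is enough to flag
    return hits

# Hypothetical database entry, and an order carrying a 20-nt match at position 4.
concern_db = {"toxin_gene_fragment": "ATGGCCAAATTTGGGCCCAAATTTGGGCCCTAA"}
order = "CCCC" + "ATGGCCAAATTTGGGCCCAA" + "GGGG"
print(screen_order(order, concern_db))  # [(4, 'toxin_gene_fragment')]
```

As the article notes, a real screen also pairs the sequence check with customer vetting, and exact matching would miss the near-variants that worry Eddy, which is why similarity-based classification is the report’s stated goal.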
volume 28 number 12 december 2010 nature biotechnology
news
The US government has handed out contracts worth potentially more than $100 million to boost innovative technologies that can deliver vaccines in large quantities—and fast—as part of a $1.9 billion initiative to protect Americans from biological threats of the future. The Biomedical Advanced Research and Development Authority (BARDA) has awarded money to eight firms pursuing creative ways to make vaccines against naturally occurring diseases, such as H1N1 influenza, which caused a pandemic last year, and others it views as potential bioterrorist weapons. The goal, says Kathleen Sebelius, secretary of the Department of Health and Human Services (HHS), is to create “nimble, flexible capacity to produce medical countermeasures rapidly in the face of any attack or threat.”

Contract winners announced in September (Table 1) include VaxDesign, Novartis partnered with Synthetic Genomics Vaccines, the nonprofits PATH (Program for Appropriate Technology in Health) and the Infectious Disease Research Institute, Pfenex, Rapid Micro Biosystems, Emergent Biosolutions, 3M and Northrop Grumman Security Systems.

BARDA’s push to move vaccine development and production beyond half-century-old systems that rely on fertilized eggs to culture most vaccines has been several months in gestation. In August, two reports looked at the US’s ability to produce vaccines quickly and efficiently and came away holding their noses. “The closer we looked at the countermeasure pipeline, the more leaks, choke points and dead ends we saw. In this age of new threats, we aren’t generating enough new products,” said Sebelius
AP Photo/Greg Baker
© 2010 Nature America, Inc. All rights reserved.
BARDA funds vaccine makers aiming to phase out eggs
A half-century-old technology. Fertilized eggs were used to produce a vaccine for the H1N1 flu virus at the Sinovac plant in Beijing during the 2009 outbreak.
at a press conference on August 19 during which both reports were presented. The first report, The Public Health Emergency Medical Countermeasures Enterprise Review (https://www.medicalcountermeasures.gov/documents/MCMReviewFinalcover-508.pdf), commissioned by HHS, called for modernization and acceleration of regulatory procedures, the development of more flexible vaccine manufacturing processes to allow more than one product to be produced at the same facility, and an upgrade and modernization of everything related to rapid influenza vaccine production. Nearly $2 billion was put aside to reach the report’s goals.
More quantitative was the President’s Council of Advisors on Science and Technology’s (PCAST) report, Reengineering the Influenza Vaccine Production Enterprise to Meet the Challenges of Pandemic Influenza (http://www.whitehouse.gov/sites/default/files/microsites/ostp/Influenza%20Vaccinology.pdf). It outlines every stumbling block to speeding up vaccine development but, more significantly, lists how much time could be saved by adopting improvements already in the pipeline. The Influenza Vaccinology Working Group, chaired by Eric Lander, director of the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, and Harold
Table 1 Vaccine manufacturers awarded BARDA contracts

Company/location | Project aims | Contract value
Novartis Vaccines and Diagnostics/Cambridge, Massachusetts | Novartis will investigate techniques for the rapid development of optimized influenza seed virus. | ~$24 million over three years
Pfenex/San Diego | Pfenex will apply its Pfenex Expression Technology platform to the development of optimized bioprocesses for high-yield production of a stable candidate anthrax vaccine. | ~$18.8 million over three years
VaxDesign/Orlando, Florida | VaxDesign will further develop its MIMIC platform, an in vitro human immune system mimetic designed to accelerate evaluation of candidate and stockpiled vaccine safety and effectiveness by supplementing animal testing. | ~$17.1 million over three years
Northrop Grumman Security Systems/Baltimore | Northrop will develop integrated diagnostic capabilities for rapid, high-throughput surveillance and molecular diagnostics. | ~$9.8 million over one year
PATH/Seattle | PATH will test multiple innovative formulation chemistries and strategies to increase the shelf life of influenza vaccines, which has implications for the national vaccine stockpile as well as cold-chain requirements domestically and in developing countries. | ~$9.4 million over three years
Infectious Disease Research Institute (IDRI)/Seattle | IDRI will develop and evaluate innovative adjuvant formulations to enhance influenza vaccine immunogenicity and cross-protection, to make vaccines more effective against novel viral strains that may cause the next pandemic. | ~$8.5 million over three years
Rapid Micro Biosystems/Bedford, Massachusetts | Rapid Micro Biosystems will develop methods for accelerated sterility testing; together, these improvements could shave weeks off the influenza vaccine manufacturing and product release schedule. | ~$6.8 million over three years
3M/St. Paul, Minnesota | 3M will develop integrated diagnostic capabilities for rapid, high-throughput surveillance and molecular diagnostics. | ~$6 million over two years
Vaccine makers’ immunity questioned in court

The US Supreme Court has begun considering how much liability vaccine makers bear if the side effects of their products are believed to have injured or killed someone. The case was brought against Wyeth (now merged with Pfizer of New York) by the parents of Hannah Bruesewitz, who in 1992 began suffering seizures and developmental problems after being given the combined Corynebacterium diphtheriae toxoid/Clostridium tetani toxoid/Bordetella pertussis (DTP) vaccine against diphtheria, tetanus and pertussis (whooping cough). A few years later, DTP was removed from the market and replaced by a vaccine with fewer side effects. The Bruesewitzes believed their daughter’s injuries were avoidable because Wyeth should have put a product with fewer side effects on the market earlier.

What is most notable about the Bruesewitz v. Wyeth case, which was argued on October 12 in Washington, DC, is that many in the US drug industry had believed the issue was completely resolved with the adoption in 1986 of the National Childhood Vaccine Injury Act. The act set up a Vaccine Court to adjudicate claims of injury on a no-fault basis and pay successful claimants with money generated from a tax on vaccines. The Vaccine Act was put into effect because of a fear at the time that lawsuits claiming ‘design defects’ would force companies to stop making vaccines. Accordingly, the act says suits cannot be filed against manufacturers “if the injury or death resulted from side effects that were unavoidable, even though the vaccine was properly prepared and was accompanied by proper directions and warnings.” There is a back door in the law that allows families to go to a federal court if they lose in Vaccine Court or are unhappy with the amount of its judgment. However, those suits are governed by the Vaccine Act, too.
But neither the Vaccine Court nor a lower US federal court accepted the Bruesewitzes’ argument that their daughter’s injuries could have been avoided by the manufacturer. However, the justices found the wording of the Act, and especially its use of the word “unavoidable,” quite confusing. Justice Stephen Breyer remarked, “it’s pretty hard to say the word unavoidable means avoidable.” A final judgment is expected in early spring of 2011. Stephen Strauss
Varmus, then at the Memorial Sloan-Kettering Cancer Center, also estimates the time it would take before these changes could be instituted. From rapid sterility testing to accelerated virus seed production and improved adjuvants, each advance could slice several weeks off the time for the first dose to reach the market. The PCAST committee members believed these changes individually could be put in place within 1 to 3 years. To exchange egg-based vaccine production systems for alternative cell or recombinant DNA platforms would take longer—up to a decade—to reach market penetration.

BARDA’s R&D money will help push a broad swath of potentially game-changing new technologies, but BARDA’s deputy assistant secretary, Robin Robinson, admits that it doesn’t cover the gamut of vaccine innovations in development. In particular, Robinson points to efforts to grow vaccines in plants and insect cells. Some of these projects are being funded by other US government agencies, most notably the Defense Advanced Research Projects Agency, which is supporting four tobacco-based vaccine production platforms. In plants, the process is quicker than in eggs. Andy Sheldon, president and CEO of Medicago of Quebec City, Canada, says “it
takes five weeks to grow the tobacco, the plants start expressing the protein in five days, and then it takes two days to purify the VLPs [virus-like particles].” He compares this with the six months that egg-based vaccine production takes. Medicago is entering phase 2 clinical trials with its plant-derived flu vaccine. Plant-based production is cheaper, too. The manufacturing facility Medicago plans for Raleigh, North Carolina, will cost $25 million to build, a far smaller investment than the $250 million required for an egg-based production plant or the $1 billion that Novartis recently spent on a new Holly Springs, North Carolina, facility. If approval is granted, the Novartis plant will become the first facility in the United States licensed to use mammalian cells to produce flu vaccines; it is expected to be operational in 2013.

A seasonal influenza vaccine, FluBlok, produced in insect cell culture, could be on the market next year. Protein Sciences of Meriden, Connecticut, received a BARDA contract in 2009 to use cells from the fall armyworm (Spodoptera frugiperda) with a baculovirus system to generate influenza VLPs. Protein Sciences’ president and CEO, Manon Cox, says it takes about two months from virus discovery to vaccine production using insect
cells. Insect cell production is also cheaper than egg-based vaccine manufacturing. “Licensed vaccines cost approximately $1 a dose to make the active ingredients. Our estimates are that we can make three times more product for that price,” claims Cox. Protein Sciences is awaiting US Food and Drug Administration approval of FluBlok.

Pfenex, too, says its technology is nearing the market. Last year, the Defense Threat Reduction Agency provided the company with the DNA sequence of an unknown antigen and challenged it to develop both a production strain and a high-speed, high-quality, low-cost antigen-production process. In conjunction with partner organizations, Pfenex used its screening technology to do this within 42 days. And there were cost savings. “If you scaled up to production levels, the antigen can be produced for ~50 cents per dose,” says Patrick Lucy, Pfenex’s vice president of business development.

But the issue that looms largest in the push to modernize and speed up vaccine development relates to the business end of things. How are these innovative technologies going to fit into an existing vaccine marketplace that—influenza pandemics and potential terrorist bio-attacks aside—generally satisfies the world’s vaccine needs? For example, the PCAST report pointed out that although the Novartis cell culture facility was likely to generate annual profits of $30 million, it “would take over 30 years to recover the [$1 billion] investment in nominal dollars (leaving aside the need for a return on investment).” Protein Sciences’ Cox argues that this naturally leads vaccine manufacturers using egg-based technologies to resist any change. “They are not going to easily let that [advantage] be taken away by a new technology in which their learning curve is going to be as steep as anybody else’s,” she says. Rafick-Pierre Sékaly, co-director and scientific director of the Vaccine and Gene Therapy Institute of Port St. Lucie, Florida, concurs.
“The president and the committee can make all the recommendations they want but if the big vaccine makers say it is too costly or there is too much R&D, then changes are going to be treated not as a solution but as an added burden.” On this point, BARDA’s Robinson says, “we understand, and that is why we are pushing things that will definitely benefit all vaccines, including eggs.” Indeed, in October BARDA awarded Sanofi Pasteur of Lyon, France, a 3-year, $57 million contract to make more fertilized eggs available for vaccine production on a year-round basis. Stephen Strauss, Toronto
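The payback arithmetic behind the PCAST report's warning is easy to check. The figures below are the ones quoted above; the calculation itself is a back-of-the-envelope sketch in nominal dollars, ignoring any required return on investment, just as the report does.

```python
# Back-of-the-envelope check of the PCAST payback figures quoted above,
# in nominal dollars (ignoring any required return on investment).
investment = 1_000_000_000   # Novartis Holly Springs facility, $
annual_profit = 30_000_000   # projected annual profit, $
payback_years = investment / annual_profit
print(f"~{payback_years:.1f} years to recover the investment")  # ~33.3 years
```

At roughly 33 years, the result matches the report's "over 30 years", and it explains why incumbents with fully depreciated egg-based plants see little commercial incentive to switch.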
Ethanol blend hike to jump-start cellulosic investment
in brief
A move announced on October 13 by the US Environmental Protection Agency (EPA) to raise the permitted ethanol blend in gasoline for motor vehicles has been widely welcomed by ethanol manufacturers and biofuel advocacy groups. The increase of ethanol in gasoline, from 10% to 15%, revises a mandate that has stood for 35 years and is seen as a critical step toward bringing cellulosic ethanol to consumers. The EPA’s decision applies only to light motor vehicles made since 2007—a second decision, on vehicles made from 2001 to 2006, is expected by year end—but it’s still enough to give hope to biotechs working on cellulosic ethanol (and other non-corn-feedstock ‘second-generation’ biofuels).

The problem for cellulosic ethanol producers was that the so-called E10 blend limit—10% ethanol—left the market saturated with corn ethanol, says Jim Sturdevant, director of Project Liberty, the cellulosic operation of Poet, of Sioux Falls, South Dakota. Raising that limit should open the door to companies like his—critical, Sturdevant says, because “without a demand, it’s not worth investing in cellulosic ethanol.”

The E15 decision could indeed revive investor interest in non-grain-based ethanol companies. Investor interest—in both biofuels and bio-based materials—grew during the first few years of this decade, reaching a peak in 2007 (Fig. 1). That meant companies were able to quickly acquire seed money and take their technologies to the pilot-plant level, but as financing became harder to acquire and interest petered out, progress stalled. That was all part of “a natural drop-off” in the cycle of early technology development, says David Berry, a principal at Flagship Ventures in Cambridge, Massachusetts, but the blend-wall decision could start money flowing again. Berry says the biggest impact would be on later-stage companies in need of financing to “cross the chasm” from pilot-scale facilities to large manufacturing plants.
An example is Mascoma, of Lebanon, New Hampshire, which is in the process of securing funding for its first industrial-scale cellulosic plant, in Kinross, Michigan. After the EPA’s decision, it will be “considerably easier to raise equity for this sort of facility,” says Justin van Rooyen, Mascoma’s director of business development. But corn ethanol—for all the roads it has paved—can still be an impediment to
The Singaporean government will spend S$16.1 billion ($12.5 billion) on research and innovation over the next 4 years—a 20% increase over the previous budget. A quarter of the funding—S$3.7 billion ($2.9 billion)—is allotted to biomedical science, according to the September announcement. The funding boost “reflects our steady commitment to transforming our economy,” says Beh Kian Teik, director of biomedical sciences at Singapore’s Economic Development Board.

Since 2000, when the Singaporean government outlined its long-term plan to transform this tiny nation-state into a knowledge-based economy, the government has ploughed roughly S$25 billion ($19.3 billion) into shoring up research infrastructure and creating a talent pool to attract private investment. The figures suggest Singapore has succeeded. The country’s biomedical manufacturing output alone more than tripled, from S$6.3 billion ($4.9 billion) in 2000 to S$21 billion ($16.2 billion) in 2009. The biomedical sector now represents 10% of the country’s manufacturing output, up from 4% in 2000. Most major pharma companies have a presence in Singapore, although in October, Eli Lilly of Indianapolis announced the closure of its Singapore Centre for Drug Discovery.

The new budget will make “more money available for everyone,” says Beh, with an emphasis on initiatives that favor “economic outcomes.” In fact, some are questioning whether the shift in emphasis toward translational outcomes and away from blue-sky research is too myopic. Some of the funding has been earmarked to support the Biomedical Science Industry Partnership Office (IPO), a fledgling agency that supports public-private research projects and helps companies pool expertise across Singaporean research institutes and medical centers. The IPO has already set up collaborations for Basel-based Roche and London-based GlaxoSmithKline with Singaporean researchers.
Roche plans to spend S$130 million ($100.6 million) to support a center for translational medicine to develop drugs for the Asian market, whereas GlaxoSmithKline will participate in four public-private research collaborations focused on early-stage research in ophthalmology, regenerative medicine and neurodegeneration. “This should be good news for the biotech sector,” says Jan-Anders Karlsson, CEO of S*BIO, a drug discovery company set up as a joint venture between Singapore Economic Development Board Investments and Chiron Corporation in 2000. The government estimates that the nation’s economy will have grown by 13–15% by year’s end—higher than the estimated growth for China and India. In October, Basel-based agribusiness Syngenta opened an R&D facility in Singapore to support technology development in the Asia-Pacific region. Gunjan Sinha
Poet
Singapore injects $12.5 billion
Car drivers in the US can now fill their tanks with a higher blend of ethanol. But cellulosic biofuel producers want to see the limit rise further.
second-generation biofuels. It’s part of the reason 54 ethanol manufacturers and Growth Energy, an ethanol supporters’ group based in Washington, DC, submitted the E15 Green Jobs Waiver to the EPA in the spring of 2009, petitioning the agency to raise the ethanol blend limit to 15%. The lobby group realized the 10% ethanol opportunity in gasoline was becoming saturated by existing corn ethanol manufacturers. Cynthia Bryant, marketing manager for global fuels at Novozymes North America, part of Novozymes, headquartered in Bagsvaerd, Denmark, calls the EPA’s decision “a validation of what we’ve been saying all along—that higher blends of ethanol can be used in our cars today.”

A 15% blend puts ethanol into 43 million cars; if the EPA allows the blend into cars built in and after 2001, that would add another 83 million cars. Together, that could reach 54% of the cars in the US. That’s a huge step, though the government is also assisting the growth of biofuels in other ways. In October, the US Department of Agriculture said its Biomass Crop Assistance Program will supply $525 million in subsidies over a 15-year period to farmers growing second-generation biomass crops.

But the future of cellulosic ethanol in the US will eventually come down to consumers, who are already being discussed. The EPA’s E15 decision came with a suggested label to identify pumps dispensing the new blend: bright orange, with the word ‘caution’ printed in bold at the top. It is not final, but it has raised eyebrows anyway, as those in the biofuels industry say the label could scare away customers. “It’s a little ominous looking,” says Dreyer, particularly because “all of our testing has shown that E15 is perfectly safe.” Growth Energy is seeking federal legislation that will require country-of-origin
Good ideas across borders The European Commission (EC) has made a bold move to improve Europe’s competitiveness and create an environment conducive to innovation with the launch of Innovation Union. Part of the Europe 2020 initiative for recovery from the recession and economic growth over the next decade, the program, announced October 6, aims to “boost the whole innovation chain from ‘research to retail’,” says Mark English, spokesperson for the Commissioner for Research, Innovation and Science in Brussels. The EC is calling on member states to spend 3% of the gross domestic product (GDP) on R&D by 2020, a hike from the current 1.9% of GDP. Europe is the largest market in the world, but its policies and support for R&D remain fragmented. The EC is proposing to boost cooperation across borders, among companies, and between private and public sectors. Plans to adopt a single EU patent system, and provide tax incentives and legislation that will allow venture capital funds to invest freely anywhere in the EU are also part of the Innovation Union’s commitments. Monika Wcislo, press officer for Research, Innovation and Science, notes that funding is not earmarked for particular sectors so they expect the impact to be widespread. “[Innovation Union] will generate more R&D, more startups and more major EU companies with the potential to be international players,” says English. Nidhi Subbaraman
Airlines ahead on algae Plans to grow algal biomass in airport grounds for aircraft fuel are moving forward supported by Airbus, British Airways and Cranfield University. On September 16, European aircraft manufacturer Airbus of Blagnac, France, London-based British Airways, and Gatwick airport in Surrey, announced a collaboration with Cranfield University, UK, through the Sustainable Use of Renewable Fuels (SURF) consortium. For the past three years, Cranfield University scientists have been researching algal-derived biofuel for use in aviation. Now, with their partners at SURF, they are working to scale up from their pilot plant (a thousand gallons per batch) for commercial output. Unlike the automobile industry, the aviation sector lacks the option to use electric alternatives for energy, says Feargal Brennan, head of Cranfield University’s Department of Offshore, Process and Energy Engineering. “For the foreseeable future the aviation industry will depend on biofuels, so they have to take a lead in commercial biofuel replacements,” Brennan says. Paul Nash, Airbus’ head of New Energies and Environmental Affairs notes that the firm is collaborating on biofuel projects with research groups from Brazil and Qatar. Later this year Brazilian airline TAM will test a fuel mixture of bio-kerosene derived from the native jatropha plant in an Airbus aircraft. “What we’re finding today is that the industry is moving faster [through partnerships] than R&D or governments would do,” says Nash. “As an industry, we can say: ‘our aircrafts are ready for this’.” Nidhi Subbaraman
Figure 1 Investment into industrial biotech. After cresting in 2007, total investment into biofuels and bio-based materials has been in decline. [Bar chart of total transaction value (millions) by year: 2004, $88.0; 2005, $112.8; 2006, $876.7; 2007, $1,378.3; 2008, $1,190.9; 2009, $788.7.]
labeling on all fuel and fuel sources to entice consumers by letting them know where their fuel is coming from. “You have labels on individual pieces of fruit to tell you where they came from,” says Glenn Nedwin of Genencor, in Rochester, New York, a division of Danish Danisco and one of the biggest developers of industrial enzymes in the world. “But you don’t know where fuel is coming from. If [the pump] said it was coming from Venezuela, Saudi Arabia or Iowa, let the consumer make the choice.”

In fact, much of the current biofuels market is being propped up by government decrees and subsidies. The EPA’s Renewable Fuel Standard (RFS) mandate of 2007 requires that 36 billion gallons of biofuel be blended with gasoline by the year 2022. Cellulosic ethanol itself, the mandate says, will furnish 16% of that, with the rest supplied by corn ethanol and other second-generation biofuels such as biodiesel. That RFS mandate has helped drive demand, and experts like Nedwin believe an aggressive infrastructural overhaul is necessary to eventually cement biofuels’ place in everyday life. Growth Energy has a ‘Fueling Freedom’ plan designed to do just that, calling for additional tax credits for retailers to build 200,000 blender pumps (which allow consumers to regulate the percentage of ethanol they put in their car using a mixture of gasoline and E85), a loan guarantee for an ethanol pipeline and for flex-fuel vehicles to be put on the roads.

Those moves would require more involvement by the government, but there is still another concern: price. Without subsidies, gasoline remains the cheapest fuel drivers can put into their cars. “We’re seeing technologies that are projecting the ability to be competitive [with gasoline],” says Flagship Ventures’ Berry. “But…people aren’t ready to be cost competitive without subsidy yet.”

The unknowns around consumers and price explain why the move to a 15% blend falls short of making cellulosic producers feel the golden years are just ahead. “I think this is a positive first step and investors will look at this, see it as a good sign. [But i]t doesn’t add enough gallons to where we ultimately need to get to,” says Wes Bolsen, chief marketing officer and vice president, government affairs at Coskata, of Warrenville, Illinois. As for the next steps, Sturdevant would like to see the limit moved to 20%, 30% and higher. “I believe that the scientific data supports moving more quickly now to higher blends. I believe that the EPA and [Department of Energy] will see that, and they’ll see that there’s no harm to engines at higher blends, and I can only hope that they will move faster in the future.” Nidhi Subbaraman, New York
in their words

“There’s no scientist out there that has been wary of me, and guys in the venture-capital world I dealt with before—it’s as if I wasn’t gone a day!” Ex-convict Sam Waksal, former ImClone Systems CEO, is out of jail and back in business, raising $50 million to acquire hepatitis C treatment maker Three Rivers Pharmaceuticals. (Pharmalot, 26 October 2010)

“The pendulum has swung too far at the FDA if they influence Advisory Committees to vote no for a drug that poses very little risk to the public at large…and where there is a demonstrable clinical benefit in terms of additional weight loss and reduction in cardiovascular risk factors.” A petition from ‘Citizens for Lorcaserin’ to FDA Commissioner Margaret Hamburg protesting the 16 September advisory committee decision to turn down approval of Arena Pharmaceuticals’ diet drug.

“There are things I know something about, and things I know little about. The stock market is one of the latter.” Sangamo CEO Ed Lanphier admits, following a 52-week slide in the company’s stock value. (Xconomy, 18 October 2010)

“We are left relying on 20th century approaches for the cures of the 21st century.” FDA Commissioner Margaret Hamburg argues for a move away from randomized controlled trials to testing drugs in combination and using biomarkers. (Financial Times, 15 October 2010)
volume 28 number 12 december 2010 nature biotechnology
NEWS FEATURE
Crossing the line
© 2010 Nature America, Inc. All rights reserved.
The list of drug companies forced into settlements of several hundred million dollars for making fraudulent product claims continues to lengthen. And the signs are that the US government will continue to ramp up its efforts, using new theories of liability and handing out even stiffer penalties. Mark Ratner investigates.
It has been more than a year since the US Department of Justice (DoJ) announced its $2.3 billion settlement with New York–based Pfizer for Pfizer’s illegal, off-label marketing of four drugs—the largest fraud suit to hit the pharmaceutical industry (Nat. Biotechnol. 27, 961–962, 2009). And the beat goes on, with several other high-profile fraud settlements coming to light (Table 1).

Over the past five years, there has been no letup in government investigations of the marketing and pricing practices of pharma, biotech and medical device companies. These activities are being heavily scrutinized on many fronts—by the US Food and Drug Administration, DoJ, US Congress and even investors. But whereas industry watchers consider these ‘plain vanilla’ off-label suits as ‘legacy conduct’—part of a bygone era—it is unclear whether the government’s and industry’s perspectives on this are truly aligned. Indeed, the government is now focusing on new theories of liability.

“There is a very careful mining of operations at pharmaceutical companies to find cases,” says T. Reed Stephens, a former DoJ lawyer now with the Washington, DC, law firm McDermott Will & Emery. “The government now has the resources available to it to evaluate a higher number of new cases,” and to find cases that are consistent with the dual policy goals of deterring inappropriate conduct and maintaining patient safety, he says. By most accounts, there may be 150 or more cases still in the prosecutorial pipeline.

Although Pfizer’s multi-billion dollar 2009 settlement remains the poster child for healthcare fraud, significant settlements continued to pile up this year, including a half-billion dollars or more each by drug makers Novartis, AstraZeneca and Allergan. (© Novartis AG)

Filling government coffers
For every dollar the government puts into healthcare fraud enforcement, it gets back more than fifteen times that amount, notes Shelley Slade, a former DoJ attorney now representing corporate whistle-blowers through the Washington, DC, law firm of Vogel, Slade & Goldstein. Government press releases tout the amounts: according to DoJ, it secured more than $2.5 billion in settlements and judgments in False Claims Act (FCA) matters alleging healthcare fraud in the fiscal year ending September 30, 2010. This was more than ever before obtained in a single year and represents a 66% increase over fiscal year 2009, in which $1.68 billion was obtained. Just after the Pfizer settlement in September 2009, recoveries over the last 15 years in the federal judicial district of Massachusetts alone, where that suit was prosecuted, had reached $6.8 billion. “It’s a no-brainer in terms of financial investment by the government,” says Slade.

Whistle-blowers also have a vested interest in finding new theories they can file on, Stephens points out, as do the attorneys who specialize in whistle-blower cases. The private incentive alone—whistle-blowers may receive upwards of 15% of any amounts recovered in claims by the federal government—is enough to assure a steady flow of informants and creative new theories of potential liability.

On the other hand, if whistle-blowing is effective at rooting out fraud scenarios, it is a relatively inefficient way to protect patient interests. “I think it’s specious for someone to go to the government because they think something is dangerous,” says Retta Riordan, an independent compliance consultant. “We know how long it takes for the government to develop a case—four to six years. If you really want the activity to stop you need to do that immediately and the best way to do that is in-house.”

To that end, almost every fraud settlement is accompanied by a corporate integrity agreement (CIA) between the company and the government—specifically the US Department of Health & Human Services (HHS) Office of the Inspector General (OIG). CIAs mandate a set of procedures and reporting responsibilities to monitor compliance and prevent similar conduct from recurring. CIAs also help ensure that internal processes exist to bring future bad conduct to light (Box 1).

Medical affairs or promotion?
Recent cases and the resulting terms of the CIAs show that government officials have been focusing on fraud arising from medical affairs operations, both internally and externally. In many situations, the misconduct has centered on some form of kickback. “It’s a somewhat consistent theme in our investigations,” says Mary Riordan, senior counsel in the HHS OIG. But the form of those kickbacks has changed over time. “I think we see less blatant cash-to-physicians type[s] of kickbacks,” she says. “Now the type of activity we are looking at is more sophisticated—contractual arrangements between doctors and the drug companies who engage them.”

The recent AstraZeneca settlement, for one, involved clinical trials activities, publications and grants in addition to inappropriate product promotion. The Pfizer case, which focused largely on the promotion of the pain medication Bextra (valdecoxib), also had a medical affairs–publications component. These areas had not traditionally been part of the government’s focus. Companies are also now being investigated more often for whether they may have retained clinicians to conduct ‘sham’ clinical trials—essentially another form of kickback. Such a claim was made against Boston Scientific’s Guidant subsidiary, in Indianapolis, as part of a $22 million December 2009 settlement in a case that included the implantation of pacemakers and defibrillators in clinical trials.
Table 1 Recent settlements of healthcare fraud cases

Company (date announced) | Amount ($ millions) | Issues settled
Novartis (Basel) (September 2010) | 422.5 | Criminal and civil charges relating to the off-label marketing of Trileptal (oxcarbazepine) and five other drugs in 2000 and 2001. Whistle-blowers, all former employees of Novartis, receive over $25 million from the settlement.
Allergan (Irvine, California) (September 2010) | 600.0 | Criminal and civil charges relating to the off-label marketing of Botox (onabotulinumtoxinA) in 2003. Whistle-blowers receive $37.8 million.
Forest Laboratories (New York) (September 2010) | 313.0 | Criminal charges (including obstruction of justice) around the distribution of an unapproved drug, Levothroid (levothyroxine), civil liabilities relating to the off-label marketing of Celexa (citalopram) and fraud in the pricing of those drugs as well as Lexapro (escitalopram). Conduct occurred between 2001 and 2003. Whistle-blowers receive ~$14 million.
Novartis (May 2010) | 72.5 | Civil liabilities relating to the off-label marketing of inhaled tobramycin between 2001 and 2006. Whistle-blowers, all former employees of Chiron (now part of Novartis), receive $7.8 million.
Ortho-McNeil-Janssen (Titusville, New Jersey) (April 2010) | 81.0 | Criminal and civil charges relating to the off-label marketing of Topamax (topiramate). Whistle-blowers receive more than $9 million.
Schwarz Pharma (UCB, Brussels) (April 2010) | 22.0 | Civil charges relating to the billing of two unapproved drugs to the Centers for Medicare and Medicaid Services. Whistle-blowers receive more than $1.8 million.
AstraZeneca (London) (April 2010) | 520.0 | Civil charges relating to the off-label marketing of Seroquel (quetiapine) from 2001 to 2006. The whistle-blower receives more than $45 million.
Alpharma (King Pharmaceuticals, Bristol, Tennessee) (March 2010) | 42.5 | Civil charges relating to bribes paid to healthcare providers to induce them to promote or prescribe Kadian (morphine sulfate), and misrepresentations about the drug’s safety and efficacy, from 2000 to 2006. The whistle-blower receives $5.33 million.
Boston Scientific (Guidant division, Indianapolis) (December 2009) | 22.0 | Civil charges that its subsidiary, Guidant, used post-market studies as vehicles to funnel kickbacks to physicians to induce them to use the company’s pacemakers and defibrillators.
Biovail (Valeant, Mississauga, Canada) (September 2009) | 24.6 | Guilty plea on criminal conspiracy and kickback charges relating to its drug Cardizem (diltiazem) beginning in 2003. Amount includes a $2.4 million civil fine.
Elan (Dublin) (settlement pending) | 203.5 | Company has announced an agreement with the US Attorney’s Office in Massachusetts to settle claims around the marketing of Zonegran (zonisamide).
Teva (Jerusalem) (settlement pending) | 169.0 | Texas Attorney General’s Office announced an agreement with the company to settle claims relating to improper price reporting for its drugs. Whistle-blower Ven-A-Care triggered the investigation.

Source: Press releases from the US Department of Justice; US Attorney’s Office District of Massachusetts; and Attorney General of Texas.
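The award figures in Table 1 invite a quick arithmetic check. The sketch below (illustrative only) computes each whistle-blower award as a share of the headline settlement; the short company labels and the rounding of the "over $25 million" and "more than $45 million" figures to point values are assumptions for illustration. The resulting shares come in well under the 15% figure quoted in the article, plausibly because the statutory whistle-blower share applies to the federal civil recovery rather than to the total settlement, which can also include criminal fines.

```python
# Illustrative arithmetic only: whistle-blower awards as a fraction of the
# headline settlement amounts reported in Table 1 (all figures in $ millions;
# award amounts described as "over" or "more than" are rounded down here).
settlements = [
    ("Novartis (Trileptal)", 422.5, 25.0),
    ("Allergan (Botox)", 600.0, 37.8),
    ("AstraZeneca (Seroquel)", 520.0, 45.0),
]

for company, total, award in settlements:
    share = 100.0 * award / total
    print(f"{company}: ${award}M of ${total}M, about {share:.1f}%")
    # -> shares of roughly 5.9%, 6.3% and 8.7% respectively
```

Run as-is; no third-party libraries are needed.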
There’s also been more focus on the intersection of medical affairs and commercial functions within companies—the grant evaluation process, how publications are developed and whether they are being used as disguised promotion. An abundance of ten-patient studies in off-label areas could be seen as geared merely to generate interest in providers. The government also is concerned that physicians are brought on to company boards so that these key opinion leaders can be influenced favorably toward in-house products and promote the message in the community, rather than using the physicians in their advisory capacity. Or if a speakers’ bureau brings 500 speakers to a nice hotel for training but only uses 100 of them, it would in effect be a promotional activity. “I think our analysis of the situations is becoming more complex, as are the situations we are seeing,” says Mary Riordan. Within a company, even the seemingly simple act of responding to an information request can blur the line between medical and commercial activities and put a company in a difficult situation. The Frazer, Pennsylvania– based biotech Cephalon gets thousands of such requests every quarter, says chief compliance officer Valli Baldassano. “We get all kinds of requests, not just for off-label uses—about the
science of the drugs, their pharmacokinetics, their uses and what data are available for both on- and off-label uses. They can come in through a phone call, e-mail or because a doctor has asked a sales rep, which is how many of the sensational cases arise.”

Follow the money
By dint of its settlement over off-label promotion of its narcolepsy drug Provigil (modafinil), which included a CIA, Cephalon has helped set the industry standard for the processes for tracking spending on speakers, consultants and the like—a focus of DoJ’s inquiry into the company and also an element of the Physician Payments Sunshine Act of 2009, now incorporated into the recent federal healthcare reform legislation, which takes effect for all companies in 2013. ProPublica now posts these payments by Cephalon on its website, along with those from the six large drug companies that have also made them public (http://projects.propublica.org/docdollars/). Disclosure means that consumers, providers, competitors and the public at large can see if there are particular trends in payments. “With ‘Sunshine,’ presumably comes a desire to reduce potential conflicts of interest,” says Howard Young, an
attorney with Morgan Lewis in Washington, DC. (Academic institutions, for their part, are also cracking down on this potential conflict of interest; Harvard University, for example, is implementing a policy starting January 1, 2011, prohibiting faculty from giving promotional talks for drug and device makers or receiving gifts, travel stipends and meals.)

Pharmaceutical companies pay doctors to perform services for them under fee-for-service arrangements. These can include speaker programs, promotional talks or participation in advisory boards, including commercially oriented boards that brainstorm marketing messages. There’s also a bucket of spending for fees around research participation, including conducting clinical trials. What’s more, there are incidental expenses for travel and meals. “The popular thought is that pharma companies can push a button and miraculously every dollar you paid to Dr. X shows up in a database,” says Baldassano. “That’s just not true. It’s a huge undertaking and different for every company.” When the government wanted Cephalon to calculate aggregate expenditure as part of its CIA, the company balked. “We said we’ve been trying to build it, have some components of it for state reporting, but we don’t have it the way you are envisioning it,” she says. “So
you need to give us time to build it.” As a result, the aggregate expenditure system in the CIA is staged, with various deadlines. For example, it’s easy to capture a consulting fee paid for services; there’s usually a contract and a direct payment. Cephalon is already posting those payments publicly, and will continue to every quarter going forward. But other spending—for a pizza or sandwich, and for travel fees ancillary to research services—may come through an expense report system or a logistics vendor. “It doesn’t sound like it may be complicated,” says Baldassano. “But it is, to make sure you capture it all accurately.”

Trend spotting
A close examination of the provisions in CIAs can help other companies determine whether their own processes represent best practices within the industry, and whether they should be adopting other measures. “From an educational perspective, they’ve had that kind of broad-based effect on the industry,” says Young.
“Every CIA is an opportunity to look at new provisions and see what the government is focusing on,” adds Retta Riordan. “It’s an opportunity to see what went wrong. Was it a lack of knowledge? A lack of processes in place—policies and procedures? A lack of training? And use that as a tool to reassess an existing program.” When Indianapolis-based Eli Lilly, for example, settled a case with DoJ in 2005 over allegations it had been promoting its osteoporosis drug Evista (raloxifene) for three indications where it was only approved for one, Riordan pondered how it could have happened, “especially in a company the size of Lilly that has a very good compliance program,” she says. There was evidence that Lilly had developed an internal best practices video emphasizing the off-label message. Yet although all companies, including Lilly, review material going out to the public, as it turned out, Lilly’s promotional materials review committee was not reviewing internal training materials. Experienced whistle-blower attorneys will focus on these kinds of situations: products
that have been on the market for a long time and have a track record of sales and profits that will potentially constitute the measure of damages to gauge the expected payoff from the case, says McDermott Will & Emery’s Stephens. “You have to be looking at cases where there’s been a systematic approach to a claim backed by the organization and it is a strategy they have undertaken. The cases with the greatest exposure will be those where the company took a sustained approach and where the plaintiff’s counsel can at least theorize a substantial amount—tens of millions or hundreds of millions of dollars of revenue and on top of that a proportionate amount of profit that involves federal dollars.”

Warning letters to companies from FDA, many of them directed at communications to medical professionals about their products that do not adequately discuss the risks of therapies, may also tip off a whistle-blower attorney to possible fertile ground. They could even foreshadow a future DoJ investigation—or a lawsuit by competitors—as DoJ
Box 1 Getting the house in order

Many pharma and biotech companies have responded to increased government scrutiny of their sales and marketing practices by adding substantially more training of staff with respect to off-label promotion. “We are seeing a real effect,” says Howard Young, an attorney with Morgan Lewis in Washington, DC, and former senior attorney and deputy branch chief with the Office of the Inspector General (OIG) of the Department of Health & Human Services. “The prosecutors themselves have stated that the industry appears to be moving in the right direction,” he notes.

“I think the amount of money being spent and the number of FTEs [full-time employees] being devoted toward regulatory compliance over the past five to ten years reflects a real understanding of most of the industry that these are obligations that are here to stay and they are being institutionalized in a meaningful way,” adds T. Reed Stephens, a former DoJ lawyer now with the Washington, DC, law firm McDermott Will & Emery.

OIG came out with a guidance for pharmaceutical companies on compliance practices in 2003, to prevent and reduce fraud and abuse in federal healthcare programs (http://oig.hhs.gov/authorities/docs/03/050503FRCPGPharmac.pdf). Since then, most large pharma companies have established formal programs, as have most mid-sized biotech companies and even many smaller companies. For companies that have run afoul of the law, OIG requires—almost without exception—that those companies sign a CIA mandating the establishment of compliance procedures aimed at preventing the prior contentious conduct from recurring. This is not unique to the healthcare industry: any company involved with civil or even criminal claims is going to be entering into a CIA, whether in the insurance business, the defense business or pharma.
“At that point, DoJ has taken care of your sins of the past and OIG is there to make sure you fly right in the future,” says Thomas Gunning, general counsel of EMD Serono in Rockland, Massachusetts, which settled a case involving off-label promotion of its human growth hormone, Serostim, with DoJ in 2005.
In virtually every case over the past six years, CIAs have included provisions forcing a separation of the compliance function from the legal department, in effect instituting another set of checks and balances. CIAs also call for the retention of an outside independent review organization, which performs annual audits of compliance processes and training, and reports to the company as a financial auditor would. In cases involving off-label promotion, a CIA would likely have provisions that require the company to get ‘verbatims’ from doctors—an independent review of what was said in face-to-face sessions with sales reps. If a case is focused on kickbacks, the CIA might focus on how consultants and speakers are hired and the exact use of those monies. CIAs and investigations that focus on best pricing issues, rebates and whether Medicare and Medicaid bill the right price would get deep into the internal policies and practices for price reporting.

Compliance may be “a further opportunity to embed, from a business standpoint, the entire area of process improvement,” notes Paul Silver, managing director of The Huron Group, a consultancy in Atlanta that often acts as an independent review organization. For example, after Cephalon’s settlement of claims that the company promoted its narcolepsy drug Provigil (modafinil) off-label to treat fatigue associated with multiple sclerosis, it essentially “redesigned how we did business,” says chief compliance officer Valli Baldassano.

Compliance systems also help to identify areas of abuse proactively. “We’ve made it into a much broader risk management function that is much more engrained into the business,” Baldassano says. “If you can design a good way to slice and dice the data, you may be able to find the trend you are looking for to see if you’ve got a problem out there. Then you can either do more investigation, stronger training, some kind of intervention that can affect that behavior.
You have to have the system to smoke out the behavior, but the system itself cannot be the panacea.”
closely examines marketing claims that are regulated by FDA to determine whether they also constitute off-label violations. Expanding on claims beyond what is in FDA-approved statements is a little bit different from off-label promotion, Stephens says. It’s typically an FDA-focused issue but also raises the specter of making comparative claims about a product that run afoul of the federal civil FCA.

The FCA is the principal mechanism for pursuing claims of off-label promotion and, in cases where the government is the customer, kickbacks and illicit pricing schemes, including rebates to Medicare. The so-called qui tam provisions of the FCA also keep in place a structure for whistle-blowers to come forward and alert the government to some of those schemes. And once it becomes known that a company is under investigation, many states then start class actions.

Because of the huge advertising potential of social media—Twitter, Facebook, Wikipedia and the like—their use is bound to be fertile ground for pharmaceutical companies to get out their message, and for investigation. “It’s going to create some real challenges for pharmaceutical companies to track where their content goes,” says Stephens. FDA is already working on guidelines for how the pharmaceutical industry can use social media (Nat. Biotechnol. 28, 641–643, 2010). But the mere fact that FDA is going to regulate those types of interactions means there are going to be opportunities for mistakes to be made and for companies to be overly aggressive, before guidelines come into place, according to Stephens.

In addition, companies operating under CIAs are required to submit marketing and promotional plans to the government ahead of time. “That’s one of the most exacting aspects of these CIAs, giving the government the ability to preview what the company is planning on doing,” Stephens says.
“DoJ could make the allegation, it’s not just an FDA violation but that you are promoting off-label by saying something lacking a scientific basis,” he says. “[In addition], they could solicit whistle-blowers from within that company—they would explain what the theory is and hope they could bring to them factual allegations they could attach to their theory and quickly file a case,” Stephens says.

Criminal prosecutions
According to Shelley Slade, government officials are coming to the conclusion that although the FCA “is somewhat of a deterrent, it’s not enough.” Increasingly, DoJ is turning
to the Foreign Corrupt Practices Act (FCPA), which gives it some jurisdiction over conduct outside the US. For example, this August, Merck disclosed that it is being investigated for violations under the FCPA, which DoJ had used to penalize Novo Nordisk in 2009, for kickbacks paid to the former Iraqi government as an incentive to obtain insulin supply contracts. FCPA-related investigations are also ongoing for several other pharmas. But the real big stick will come in the form of criminal prosecutions of individuals spearheading these schemes, Slade says. Indeed, a trend over the past several years has been for DoJ and OIG to assess individual liability and responsibility for fraudulent schemes, along with requiring boards and senior management to be more active in compliance programs.

As with any industry, compliance units within pharma and biotech companies come up against the fact that people are rewarded for improving a firm’s financial performance—right up the chain to the CEO, Slade says. “Their concerns are pleasing Wall Street and shareholders, and bonuses are driven by the financial performance. There’s a real conflict between what the compliance folks want them to do and what’s in their immediate financial self-interest. Sometimes the compliance people don’t get the full story from the people in operations.” Thus, despite the best of intentions, prosecutors know that when they speak at compliance conferences they are basically preaching to the choir.

“Holding individuals accountable, and criminal settlements—the whole combination of steps—will lead to better compliance,” says OIG’s Mary Riordan. “We are increasingly looking at ways of holding individuals accountable,” she says, both through CIAs and also through so-called exclusion actions against individuals, which would bar them from participating in the industry.
OIG currently has an initiative underway to target responsible corporate executives through exclusion actions against individuals—individuals they believe were in positions to prevent alleged fraudulent conduct or should have been aware of it and stopped it. The most dramatic case in point has been the criminal conviction (now under appeal) of three individuals from Purdue Pharma in Stamford, Connecticut, who were found responsible for not preventing deceptive marketing practices for the narcotic OxyContin (oxycodone). And on November 8, DoJ indicted a former attorney for London-based GlaxoSmithKline (GSK, Brentford, UK), charging her with obstruction of justice and making
false statements to FDA in connection with an investigation into GSK’s promotion of the antidepressant Wellbutrin (bupropion) early in the decade. That case is still pending, but in the fourth quarter of 2008, GSK set aside $400 million, anticipating a settlement. To date, the Purdue case is the only situation where OIG has exercised its authority in this way. But there’s whispering among attorneys that more such cases are coming down the pike. Given an appropriate case with clear criminal violations and egregious circumstances where patient health is being compromised by the off-label use, and executives well aware of what the sales force has been doing and of the risk of harm to patients, “the only way the executives’ behavior is going to change is if the government treats these as criminal violations and goes after the executives,” says Slade. “If they see their peers going to jail, it will offset the financial calculus for them.” OIG could also exclude an entire company from the industry, but that’s a ‘nuclear bomb’ scenario that would raise serious issues of patients’ rights and access to drugs, as well as antitrust.

Too early to know
In the meantime, the number of fines and prosecutions continues to increase. And industry is devoting more and more resources to ensure that it does not fall foul of government officials. “We know that companies are absolutely continuing to ramp up their compliance efforts and staffing up their compliance groups internally,” says New York attorney Arnold Friede. “They are hiring more people and propagating their compliance policies more forcefully internally. But whether that’s accompanied by a cultural shift, particularly in an era of organizational uncertainty, is an open question.
Process and culture are not necessarily coincident, and it probably takes an equal measure of both to make sure that you are working towards effective compliance.” Overall, “it’s too early to tell” whether the recent ramp-up of enforcement activities will have the desired deterrent effect, says Jerry Avorn, professor of medicine at Harvard Medical School and the author of Powerful Medicines, one of the first books to examine pharmaceutical detailing practices—the one-on-one conversations between sales reps and physicians—in detail. “We don’t have a good objective way to measure changes in corporate behavior, but we certainly hope the CIAs are an effective tool,” says HHS’s Mary Riordan. Mark Ratner, Cambridge, Massachusetts
1235
NEWS feature
New twists on proteasome inhibitors
© 2010 Nature America, Inc. All rights reserved.
Only one drug targeting the proteasome protein degradation pathway has been approved, but several second-generation inhibitors are making progress in trials. Jim Kling reports.

In July, Onyx Pharmaceuticals of Emeryville, California, announced positive phase 2 results for its proteasome inhibitor carfilzomib, which is currently being tested in multiple myeloma. The drug achieved a 24% response rate in individuals with advanced cancer who had failed all prior therapies, more than double the expected response rate (11%). This was not only good news for the company—whose stock jumped 21% and which clinched a marketing deal worth >$300 million with Japanese firm Ono Pharmaceuticals of Osaka—but also a boost for the field of proteasome inhibition, which thus far has only a single success story: Velcade (bortezomib), first approved in multiple myeloma in 2003. Carfilzomib, however, is one of several proteasome inhibitors currently progressing through the clinic. Many of the compounds causing the most buzz are those that target another part of the cell’s proteolytic machinery, the ubiquitin ligases, which number in the hundreds (Fig. 1) and could potentially provide greater specificity in the clinic.

The trailblazer
First approved as a third-line therapy for refractory multiple myeloma, Velcade was extended to mantle cell lymphoma in 2006 and became a frontline treatment for multiple myeloma in the US in 2008. An injectable tripeptide (modified with a boronic acid at one end, Pyz-Phe-boroLeu), it binds the catalytic site of the 26S proteasome (Box 1). The mechanism of action, with respect to multiple myeloma, is straightforward: myeloma cells are essentially protein factories, having descended from antibody-producing B lymphocytes. “They fail to turn off their protein production machinery, causing proteotoxic stress. Cells deal with it by upregulating the proteasome. 
So the cell is gummed up with proteins anyway, and then we shut off the primary mechanism by which they’re getting rid of the stress. That’s why Velcade is effective,” says Joe Bolen, chief scientific officer of Millennium (now part of Takeda in Osaka), the company that brought the drug to market. Sensitivity of tumor cells to proteasome inhibitors has been documented1, and such observations have prompted companies to
consider them for treating other cancers, particularly solid tumors. But Velcade turns out to be not all that precise in its targeting. In disrupting the proteasome, it affects the breakdown of a range of proteins and interferes with some proteases, which is thought to be associated with the observed neurotoxicity.

The next generation
Velcade is an unqualified success in the marketplace—the drug generates about $1.5 billion in revenues worldwide annually, according to David Moskowitz, who follows Onyx in his role as managing director with Madison Williams investment bank in New York. But its limitations, which include the development of resistance in addition to neurotoxicity, have inspired a second generation of proteasome inhibitors, including compounds like carfilzomib. Carfilzomib is an irreversible epoxomicin-related peptide that binds specifically to the chymotrypsin-like catalytic site of the proteasome, hitting fewer nontarget proteases than Velcade, which may account for its reduced toxicity. Nereus Pharmaceuticals in San Diego is developing an entirely different class of compounds represented by NPI-0052 (salinosporamide A), a β-lactone isolated from the marine bacterium Salinispora tropica. NPI-0052 blocks all three catalytic activities—chymotrypsin-like hydrolysis, trypsin-like hydrolysis and peptidylglutamyl peptide hydrolysis—whereas Velcade inhibits just the chymotrypsin-like activity. NPI-0052 has
shown activity against multiple myeloma cells resistant to Velcade, and in animal models it inhibits colon, pancreatic and lung cancers. The drug is currently in phase 1 clinical trials for multiple myeloma and lymphoma, as a single agent and in combinations (Table 1).

In addition to carfilzomib, which entered the Onyx pipeline with its 2009 acquisition of the biotech company Proteolix, Onyx is developing two other Proteolix compounds. ONX 0912, an oral proteasome inhibitor, is currently in phase 1 clinical trials for advanced refractory or recurring solid tumors. ONX 0914 inhibits the related immunoproteasome, a specialized type of proteasome found mainly in monocytes and lymphocytes. In preclinical models, it was shown to be 10- to 50-fold more selective for the immunoproteasome than for the general proteasome. Possible indications include rheumatoid arthritis, inflammatory bowel disease and lupus.

Michael Kauffman, Onyx’s chief medical officer, thinks carfilzomib has the potential to be a frontline treatment for multiple myeloma because it shows low levels of neuropathy, neutropenia and other toxicities associated with other multiple myeloma treatments. “We’re seeing very minimal long-term side effects in patients, and none so far that have precluded indefinite dosing,” he adds. That’s important because the goal is to make multiple myeloma a chronic condition. Kauffman also believes that the ability to keep patients on the drug should help prevent tumors from developing resistance.

Millennium is also working on a second-generation proteasome inhibitor, an orally active peptide boronic acid analog, MLN 9708. Unlike Velcade, MLN 9708 has activity against a range of solid tumors in preclinical studies and is currently in phase 1 trials for hematologic malignancies and solid tumors. “We’re really striving to bring this next generation of proteasome inhibitors beyond the hematologic malignancies,” says Bolen.
Box 1 A protein degrading machine
The proteasome is a multicatalytic enzyme comprising a hollow 20S core particle complexed with two 19S regulatory particles to form a 26S structure where protein degradation takes place. The two regulatory caps are bound to the core particle through ubiquitin-binding sites. The 20S particle has three catalytic activities: chymotrypsin-like activity, trypsin-like activity and peptidylglutamyl peptide hydrolyzing activity.
Proteasome activity occurs through a multistep process. Proteins are marked for destruction by ubiquitin tags. This process begins with activation of ubiquitin (E1 enzyme), followed by conjugation (E2 enzyme) and finally specific ligases (E3) that mediate attachment of ubiquitin chains to target proteins. The pathway is hierarchical, with a single (or small number of) E1 enzyme(s) interacting with a small number of E2 enzymes, and several hundred E3 ubiquitin ligases serving specific subsets of proteins (Fig. 1). Specific ubiquitin ligases tag target proteins with a string of ubiquitins, which acts as a signal to the proteasome to destroy the protein.
volume 28 number 12 DECEMBER 2010 nature biotechnology
Even so, achieving success in solid tumors rather than hematological cancers will be a challenge, as blood cells have high concentrations of proteasomes; indeed, Millennium researchers suspect that they act as a sink for tight-binding proteasome inhibitors, such as Velcade, thereby conferring efficacy in blood cancers. “We reasoned that a proteasome inhibitor with a shorter half-life [on the proteasome] might have better tissue distribution. This is of particular interest for the potential utility of proteasome inhibitors for treating nonhematological malignancies,” says Bolen. From a drug discovery textbook point of view, however, it was completely counterintuitive. “The mantra of drug development is to be specific and to stick to the target with high affinity or irreversibly,” he adds.

Two hits are better than one
Preclinical data support using proteasome inhibitors in combination with other drugs. “They should combine favorably with pretty much any type of chemotherapy,” says Kauffman. Some combination strategies have a theoretical basis as well, according to Robert Orlowski, a professor of cancer medicine at the University of Texas M.D. Anderson Cancer Center (Houston)2. The website ClinicalTrials.gov lists almost 300 clinical trials of Velcade in combination with other drugs, including other targeted agents, such as Bristol-Myers Squibb’s (Princeton, NJ, USA) elotuzumab (an antibody targeting the myeloma cell-surface protein CS-1), Centocor Ortho Biotech’s (Horsham, PA, USA) siltuximab (an anti-interleukin-6 antibody), Pfizer’s (New York) Akt inhibitor Viracept (nelfinavir mesylate) and the heat shock protein 90 inhibitor tanespimycin (Bristol-Myers Squibb, New York). Velcade has also been combined with DNA-damaging therapies such as radiation and anthracyclines, because proteasome inhibitors suppress DNA
repair proteins, says Ken Anderson, a professor of medicine at the Dana-Farber Cancer Institute and Harvard Medical School and a collaborator with Millennium and Nereus. “We’re nowhere near yet to learning the best combination, nor are we near to maximizing the benefit of this drug for patients,” says Orlowski. Palladino and others are hopeful that these combinations will allow proteasome inhibitors to expand beyond multiple myeloma, but not everyone is convinced. “It feels like they’re just throwing stuff up against the wall, looking for a signal. The risk-reward ratio [for solid tumors] is viewed favorably [by investors], but expectations are very low,” says Moskowitz.

Figure 1 The pyramidal ubiquitin cascade. A single E1-activating enzyme transfers ubiquitin to roughly three dozen E2s, which function together with several hundred different E3 ubiquitin ligases to ubiquitinate thousands of substrates. (Reprinted with permission from Nalepa, G. et al. Nat. Rev. Drug Discov. 5, 596–613, 2006)

Beyond the proteasome
Farther out on the horizon, some companies are looking to attack the other half of the ubiquitin-proteasome pathway. Ubiquitin ligases, of which there are hundreds in mammalian cells, tag proteins with ubiquitin, marking them
for destruction by the proteasome (Box 1). If researchers can find ways to modulate specific ligases, or their counterparts, the deubiquitinases, they may obtain greater specificity. “Proteasome inhibitors block the whole protein degradation machinery. I think at the end of the day, people will realize that this is not where selective therapy is going,” says Tauseef Butt, president and CEO of Progenra (Malvern, PA, USA), which is dedicated to drug discovery in the ubiquitin area.

Millennium has such an inhibitor in early trials: MLN 4924, currently in phase 1 trials for hematological and solid tumors, inhibits NEDD8-activating enzyme. NEDD8 is a ubiquitin-like protein that, when activated, controls the activity of cullin-RING ligases, the largest class of E3 ligases. The company also has early-stage programs in ubiquitin and ubiquitin-like protein pathways. To date, ubiquitin ligases have shown a high degree of substrate specificity, making
Table 1 Selected agents targeting protein degradation in trials

Company | Drug/drug target | Phase of development
Takeda/Millennium | Velcade/proteasome; MLN 9708/proteasome; MLN 4924/NEDD8-activating enzyme; MLN 519 | Approved for multiple myeloma (5/03); approved for NHL (12/06); phase 2 for breast cancer, ovarian cancer, myelodysplastic syndrome or Waldenström macroglobulinemia; phase 1/2 for amyloidosis or multiple myeloma (relapsed, refractory); phase 1 for small cell lung cancer or solid tumors; phase 1/2 for multiple myeloma
Onyx Pharmaceuticals | Carfilzomib/proteasome | Phase 3 for multiple myeloma; phase 1/2 for solid tumors
Onyx Pharmaceuticals | ONX 0912/proteasome | Phase 1 for solid tumors
Cephalon (Frazer, PA, USA) | CEP-18770/proteasome | Phase 1/2 for multiple myeloma
Nereus Pharmaceuticals | NPI-0052/proteasome | Phase 1 for solid tumor or multiple myeloma
Johnson & Johnson | JNJ-26854165/MDM2 | Phase 1 for solid tumors
Roche | RG7112/MDM2 | Phase 1 for hematological cancers or solid tumors

NHL, non-Hodgkin lymphoma.
Box 2 The other protein degrading system
The proteasome and ubiquitin systems are not the sole systems for degrading protein in cells. Autophagy—whereby proteins are first packaged in vesicles that then fuse with lysosomes—is another process that has attracted attention. Tony Wyss-Coray, at the Stanford University School of Medicine, along with Beth Levine, at the University of Texas Southwestern Medical Center (Dallas), found that knocking out a key autophagy protein, Beclin-1, in mice could lead to greater neurodegeneration and β-amyloid accumulation in the brain. Furthermore, there is evidence that Beclin-1 levels are low in the brains of Alzheimer’s disease patients, whereas some other proteins involved in autophagy are unchanged. In mice, restoration of Beclin-1 levels can reduce β-amyloid accumulation and reverse some of the degenerative changes. “I think there is a possibility that this may work in humans as well,” says Wyss-Coray. His team is now screening for compounds that can restore autophagy in cells that are deficient in Beclin-1.
Autophagy also has a strong connection to cancer. The Beclin-1–deficient mice are more prone to developing cancer at an advanced age than normal mice, and Beclin-1 is deleted in humans in about 40% of prostate cancers, 50% of breast cancers and 75% of ovarian cancers, according to Shengkan Jin, at Rutgers University in Piscataway, New Jersey. Autophagy also provides a means for cells to survive periods of stress, such as starvation, and some tumor cells may use it to survive the onslaught of chemotherapy. Like the proteasome, autophagy appears able to show some specificity in the proteins it degrades, but that process isn’t well understood. The two processes are somewhat complementary. “There are suggestions that if you inhibit one pathway, the other can take over. But I’m not aware of key proteins that play roles in both processes,” says Wyss-Coray.

them attractive drug targets. “These [ubiquitin ligases] are the regulatory proteins of the ubiquitin system. It is the most selective intervention point,” says Avishai Levy, CEO of Proteologics (Orangeburg, NY, USA). His company is collaborating with Teva (Tel Aviv, Israel) to discover inhibitors of ring finger E3 ligases, including inhibitors of the POSH polypeptide, currently in preclinical testing for the potential treatment of HIV infection and cancer. Several groups are looking at key points in signal transduction pathways where ubiquitin ligases play a role. “There is [evidence] that G protein–coupled receptors are being managed by ubiquitination. Channels are being ubiquitinated and recycled,” says Butt. For example, growth hormone receptors are internalized after binding growth hormone, followed by phosphorylation, ubiquitination and deposition into the lysosome. There, the ligand drops off the receptor, the terminal ubiquitin is cleaved by a deubiquitinase, and the receptor is returned to the cell surface, which can contribute to tumorigenesis. “If you inhibit the deubiquitinases involved, the ubiquitin remains in place and the receptor becomes a target for the proteasome. Now the cell doesn’t make a commitment to proliferation,” says Butt.

The key is determining the ubiquitin status of disease-causing proteins. “It’s like the phosphorylation status of a kinase substrate,” says Donald Payan, chief scientific officer at Rigel Pharmaceuticals, which has a ubiquitin ligase program in cancer that focuses on the SCF/p27 complex. The p27 protein is a cyclin-dependent kinase inhibitor involved in the transition of a cell from G1 to S phase. SCF ubiquitinates p27. Inhibiting SCF should leave more p27 in place to act as a brake on cell cycle progression.
Just as with kinases, which took 20 years to become established as drug targets, it could easily take as long for the ubiquitin ligase field to mature. One problem with designing E3 ligase inhibitors is the difficulty of interfering with protein-protein interactions. “You have to look for sites or regions that are druggable,” says Payan. Even so, Orlicky et al. described a small-molecule inhibitor of the yeast F-box protein Cdc4, which is a central component of the cullin-RING ligases3. The inhibitor targets the WD40 propeller domain, and the structural changes induced by the inhibitor affect the substrate-binding pocket. If similar interactions occur in mammalian ligases, the strategy could open up a number of novel drug targets. Indeed, Payan claims an ongoing collaboration between Rigel and Daiichi Sankyo (Tokyo), initiated in 2002, is about to start human testing of a ubiquitin ligase inhibitor in an undisclosed cancer. Elsewhere, Apeiron Biologics (Vienna) is investigating short interfering RNAs that silence the E3 ubiquitin ligase Casitas B-cell lymphoma-b (Cbl-b) as a means of modulating T-cell activity for cancer treatment. “It’s a field where people have been wringing their hands about how the targets are not druggable, but there are four or five companies [nearing] the clinic,” he says.

Rigel’s own focus is not on cancer, but muscle atrophy. The ligases atrogin-1 and MuRF-1 are upregulated in patients who experience muscle atrophy while on mechanical ventilation4. “If you could block MuRF-1 and/or atrogin, you might have a big impact on muscle atrophy, which is a big opportunity,” says Payan. In many ways, the ubiquitin field is even more diverse than the kinase field, and significantly more difficult. “Key proteins all have their ligase hovering above them, ready
to pounce. What has held the field back is that these are intracellular targets, so you’re stuck with small molecules. Companies must determine what’s the best way to go after these [protein interactions] chemically,” says Payan.

The yin and the yang
Not everyone is convinced of the potential of ligases. “There have been failures with ligases,” says Butt. He thinks the future may lie in deubiquitinases, which remove ubiquitin from proteins; about 100 have been identified in humans. “If the yin is the ligase, the yang is the deubiquitinase, and the yang is really taking off at the moment,” says Butt. Progenra has an inhibitor of the deubiquitinase USP7, which acts on the p53 tumor suppressor protein. The drug candidate has shown good antitumor activity in animal models and will be developed for multiple myeloma, Butt says. Ubiquitin ligases and deubiquitinases are just the latest targets in protein homeostasis, a field that will continue to evolve (Box 2).

The ubiquitin field has significant challenges ahead. Researchers must achieve selectivity for the ligase of interest, and then build the infrastructure to study it. It’s a story Payan has heard before. “The guys who were involved in developing Velcade kept pushing and pushing, when people said it would never work. Kudos to the proteasome guys [for getting to the market]. That’s probably the least specific site to intervene.”

Jim Kling, Bellingham, Washington

1. Adams, J. Cancer Cell 5, 417–421 (2004).
2. Rumpold, H. Biochem. Biophys. Res. Commun. 361, 549–554 (2007).
3. Orlicky, S. et al. Nat. Biotechnol. 28, 733–737 (2010).
4. Levine, S. et al. N. Engl. J. Med. 358, 1327–1335 (2008).
building a business
Around the block

Y. Philip Zhang

A blocking patent can kill your business before it’s off the ground.
One of the most unpleasant surprises for any entrepreneur at a startup company is learning that a blocking patent threatens the commercial viability of their lead product. This could seriously jeopardize the company’s ability to raise financing, its attractiveness to strategic partners and, ultimately, its overall viability as a business. In this article, I provide a few pointers on how to avoid blocking patents in the first place and then go on to explain how to take evasive action if you are in the unfortunate position of sweating one out.

Y. Philip Zhang is cofounder and co–managing principal at Milstein Zhang & Wu, Newton, Massachusetts, USA. e-mail: [email protected]

First, avoid
Officially, a blocking patent is one that is valid and enforceable and could prevent your potential product from coming to, or perhaps staying on, the market. In the United States and most industrialized countries, an actual or imminent patent infringement is grounds for government-sanctioned injunctions and monetary damages. Faced with hefty capital demands and long, perilous clinical trials, drug developers tend to avoid product candidates that present patent uncertainties and litigation risks, even if the drug satisfies unmet needs of patients. Concerns over patent infringement routinely interrupt and sometimes derail commercialization of drugs. Thus, freedom to operate (FTO)—the ability to develop and market a drug or device without infringing the valid and enforceable patent rights of others—should be high on your agenda as CEO.

The starting point for securing FTO is to clearly identify the business space for your company. A blocking patent is relevant only if it affects your business. A first-level inquiry asks: What are the current and future (including
potential) businesses that you are conducting and plan to conduct? A company in the drug discovery space faces different challenges than a company in the in vitro diagnostics space, for instance. A developer of new drugs typically must have a good understanding of the discrete patent spaces around its drug targets and candidates in the pipeline. A company in the diagnostic or genetic testing business, on the other hand, may have to navigate a minefield of increasingly patented biomarkers and immunoassays. A small-molecule drug developer generally is not too concerned with drug manufacturing. If you are developing recombinant proteins, however, manufacturing patents are plentiful and should be watched for.

Next, you should identify the technologies that are vital to your company, which may be tougher than determining your business space. It is important for you to have a mental list of the critical and enabling technologies for successful commercialization of your products. You need to know what technologies are necessary to build your products and clarify which are your own innovations and which are other people’s technology that you need access to. A developer of a single-molecule DNA-sequencing technology, for example, must evaluate FTO for its core sequencing platform but may also need to review reagents, optics, electronics and other enabling technologies. A bioethanol producer developing a new fermentation process may have to review its biomass pretreatment protocols and microorganism production steps as well as overall process integration.

Once you understand your business and technology space, you’ve paved the way for the next step: understanding the patent environment in which your business operates. The patent landscape affects your patent needs for protecting your business and your FTO. A common misconception is that if you have a patent on something then you are free to practice the technology disclosed or claimed in your patent. 
This is not true.
A patent grants its owner the right to exclude others from practicing the claimed invention without the owner’s authorization, but it does not grant the owner a right to practice the claimed invention or technology disclosed in the patent. So even if you patented a technology, it doesn’t mean that you necessarily can use it yourself, because your patented technology could be dominated by another patent. Thus, it is important early on to gain a solid grasp of the patent environment surrounding key product candidates through rigorous patent search and review.

The patent search and review process is preferably conducted by experienced legal counsel working closely with your project team. Scientists should resist the temptation to play patent lawyer. Understanding the science in a patent is important, but it’s only the first step in understanding the patent’s impact, which is better done by, or with the guidance of, experienced counsel. A good patent lawyer is one who understands your business and the industry you operate in and can communicate effectively with your management and scientists. He or she understands the business ramifications of the patent issue at hand and helps you make the right judgment calls. You and your counsel should map out the existing patent landscape along the entire commercialization path, even if your business plan is to exit before market entry.

If you find a patent that presents FTO risks, do not panic. The good thing is that not everything is patented (even though it might seem like it), and more often than not you will be able to find a path forward. Ideally, patent searches should cover both granted patents and published patent applications and additionally encompass all intended international markets for the product. Knowing where a patent is granted or pending helps with assessing its overall impact and affords you an early appreciation of the scope of your need for licensing, design-around options or patent challenge. 
Box 1 Blocking patent? No
Determining patent infringement involves a two-step analysis: first, construction of the claims to determine the coverage of a patent, and second, comparison of the allegedly infringing composition, device or method with the construed claims. Thus, overcoming a patent means understanding its claim scope and designing the product outside that scope.
Consider the patent fight between Novartis Pharmaceuticals and Eon Labs Manufacturing. Novartis has a patent on a drug formulation that claims a hydrosol, which includes solid particles of the drug compound and a stabilizer that maintains the size distribution of the particles. The claim further specifies the water solubility of the drug compound as well as the ratios of drug compound to water and drug compound to stabilizer in the hydrosol. Along comes Eon. It sells a drug product that includes the same drug compound but not in hydrosol form. Eon instead makes capsules that contain the drug compound dissolved in a small amount of ethanol. There is no water in these capsules, and the drug compound inside the capsule is completely dissolved in ethanol (not in particle form). Novartis sued Eon for infringement, asserting that when Eon’s capsule is ingested by a patient, an infringing hydrosol is formed when the capsule mixes with the aqueous environment of the patient’s stomach. This unusual infringement theory did not survive summary judgment at the trial court (and on appeal), as Novartis’ hydrosol claim was construed to be limited to a medicinal preparation consisting of a dispersion of solid particles in an aqueous colloidal solution prepared outside the body, which Eon’s capsules clearly are not1.
Another factor that may require some soul searching on your part is the risk profile of your company. Clearly articulating your risk tolerance can be difficult, but in many ways it shapes the appropriate approach to a patent problem. Your risk profile may change over time and is determined by a number of factors, including the business sector you operate in, the stage of your company, your financial situation, the liability exposure and the personalities of the management. The life science community in general, and venture capital investors in particular, are quite risk averse when it comes to patents and FTO. Because of the large capital need and the lengthy, risky development process, investors do not like to deal with patent uncertainties and litigation threats. Technology and regulatory risks are already plentiful in biotech—you don’t need anything more. Heightened patent risks could drive your board and the investors nuts.

A review of your FTO should not be a one-time occurrence, because a thorough search and review doesn’t necessarily uncover all relevant patents. For one thing, patent applications are published 18 months after filing, so there is always a black box in the patent space. For another, your product design could change and your need for enabling technologies may evolve over time. A good grasp of the patent landscape affords you the ability to make adjustments along the product development path. In the life science sector, the point of each significant go or no-go decision is often a good time to revisit the patent situation. Competent
counsel can guide you through the process. Ignorance is rarely a viable approach—sophisticated investors and collaborators will conduct thorough due diligence on your patent position. It is better that you know your shortcomings before potential investors or partners point them out.

Next, overcome
If the worst happens and you do identify a potential blocking patent or patent application, it is critical to fully understand how it affects your business. For example, does it affect the technology platform at the core level, a particular product composition, a manufacturing step or a specific use of a product? You and your patent counsel should understand the patent’s legal status and claim scope. Information on the patent (whether the claims are allowed, issued, on appeal, in interference, under reexamination or challenged in litigation) can be obtained by your counsel. Often a review of the patent file history is needed to fully grasp the scope of a patent and its real or potential weaknesses. Claims in different countries quite often are not identical; therefore, you should consider obtaining legal advice from a lawyer in each of the countries of interest.

In addition, you should gather background information about the patent owner as a prelude to a licensing effort or in preparation for a challenge. Where the patent is licensed to a third party, information on the licensee could be valuable as well. They could become your competitor, collaborator, licensor, licensee
or all of the above. Knowing their business situation, their financial clout, their technology and their patent needs can help you gain leverage and affect how you deal with the blocking patent and its owner. Patent ownership and licensing information may be obtained from appropriate databases, such as those of the US Patent and Trademark Office, the World Intellectual Property Organization and the US Securities and Exchange Commission.

You’ll then need to explore design-around options. Quite often, a blocking patent is relevant only because of one or two features of a candidate compound or device. When such features can be removed or modified without sacrificing the needed functionality, you might find a successful design-around solution. Working closely with your scientific team, an experienced lawyer can make significant contributions in identifying and fine-tuning successful design-around options.

In a hypothetical drug development scenario, a drug is approved by the US Food and Drug Administration for treating disease A. The owner of the drug has obtained patents on the drug compound, formulation and use of the compound to treat disease A. The drug, however, is now suspected to have potency against disease B. The suspicion is further borne out by your preclinical studies. In this scenario, FTO issues are clearly present due to the patent estate around the approved drug. These patents, however, might be limited to the approved drug compound itself and certain close derivatives. Some chemical compound patents suffer from insufficient enablement regarding distant derivatives and analogs. Analysis of the patents might reveal that all valid patent claims are tied to a certain core chemical scaffold or, more commonly, to certain specific functionalities on the core chemical scaffold. Working together, your scientists and patent counsel can design exploratory compound libraries that investigate the surrounding chemical space not covered by the patent. 
Knowledge of the structure-activity relationship and the patent landscape is critical in successfully navigating the hit-to-lead evolution process. A good practice from a patent counsel’s perspective is to fully grasp both technical and legal parameters while communicating effectively and working intimately with your project team. The objective should be to obtain a design-around solution that minimizes infringement risks and allows solid, fresh patent protection for the redesigned compound (Box 1). When faced with a blocking patent, you should also explore the availability and cost of a
volume 28 number 12 DECEMBER 2010 nature biotechnology
© 2010 Nature America, Inc. All rights reserved.
patent license. Too often the project team drops a compound after discovering that it is covered by a third-party patent. Licensing opportunities often are not carefully investigated and followed up, resulting in the premature loss of good drug candidates. From efficiency and productivity perspectives, it is desirable that biotech startups become more open to licensing, at both the receiving and giving ends. You’ll need your counsel to closely guide the licensing process, as many pitfalls exist that could entrap the unwary. A misstep could expose you to various risks, ranging from prematurely revealing your licensing needs to the patent owner to losing your right to later challenge the patent. It is generally a good idea to enter a confidentiality agreement before exchanging sensitive information and starting negotiations. You can also sometimes enter a joint privilege agreement if sensitive opinions or legal documents are shared with the other side. Keep in mind, though, that the patent licensing process can be lengthy and exhausting, easily taking months to complete. This means it’s important to set a timetable and move forward accordingly. Your knowledge of the weaknesses of the desired patent estate and its holder could help steer the process favorably for you. In this regard, due diligence on the desired patent estate and its owner is must-do homework that should be completed before approaching the owner. Understanding the value of the license to you is important because the deal inevitably involves setting licensing fees and royalty rates. Experienced counsel
might know the prevailing licensing rates and industry terms. Your ultimate objective in a licensing discussion is to fully explore the potential of a license in view of all other options available to you so that you can make a cogent business decision. Though rarely enjoyable, sometimes the best approach is simply to challenge a blocking patent head on. It is preferable to mount proactive patent challenges at favorable times and venues rather than mere reactive defenses. Well-planned and well-executed patent challenge strategies often help realize the full commercial potential of your product. Legal procedures vary from country to country, but some form of patent validity challenge is available in all major jurisdictions. For example, Europe, Japan, China, Australia and Canada all have post-grant opposition or cancellation procedures. In the United States, the primary procedures for validity challenge at the US Patent and Trademark Office are ex parte and inter partes reexamination. Currently, post-grant opposition is not available in the United States, but it is part of most patent reform bills and could become available soon, as some iteration of patent reform is widely believed likely to pass into law. At any time after US patent issuance and before expiration, anyone may request reexamination on the grounds of a substantial new question of patentability based on
printed prior art or patent references. A notable benefit is that the outcome of an ex parte reexamination does not have a binding effect on the requester, which means that you may raise the same basis of invalidity again in a pending or future patent litigation, essentially allowing you two bites at the apple. A potential downside, however, is that you do not have much direct participation in the proceedings, and the patent holder could steer the reexamination process to strengthen his or her patent. Inter partes reexamination, in contrast, allows more direct involvement, but the outcome is binding on the requester. Heed your counsel’s advice when formulating and implementing effective patent challenge strategies.

Conclusions
FTO is critical to life science startups. A venture without it from the beginning is like a building without a foundation. A biotech startup is wise to craft and implement, early on, an appropriate patent strategy that steers it clear of hostile patents and minimizes their impact. But should one appear, there are approaches for maneuvering around it. Still, nothing can substitute for the commitment of the company and the teamwork of management, scientists and counsel.

1. Novartis Pharms. Corp. v. Eon Labs Mfg., Inc., 363 F.3d 1306 (Fed. Cir. 2004).
To discuss the contents of this article, join the Bioentrepreneur forum on Nature Network:
http://network.nature.com/groups/bioentrepreneur/forum/topics
correspondence
Wrong fixes for gene patents To the Editor: In your August issue, the Commentary entitled “DNA patents and diagnostics: not a pretty picture” by Robert Cook-Deegan and colleagues1 draws conclusions and recommendations that are unwarranted by the full scope of available data and unfairly criticizes the activities of the Biotechnology Industry Organization (BIO; Washington, DC, USA) with respect to this issue. As general counsel and senior vice president for legal & intellectual property for BIO, I would like to respond to the criticisms in this article. The authors admit that there is “limited” evidence that patents are negatively impacting research on, or the accessibility of, genetic diagnostic tests, but then go on to assert that, despite this lack of evidence, “concerns about DNA patents persist.” They reference many studies for this proposition, yet the cumulative evidence contained in those studies does not support any systemic basis for such concerns. Anecdotal examples of access problems, such as those contained in the specific case studies heavily relied upon by the authors, may be helpful to consider but, as the authors admit, the evidence of problems from these particular examples is “not uniform, consistent or pervasive.” To draw from such anecdotal evidence the general conclusion that patents, or the exclusive licensing thereof, are the cause of such problems, or are not necessary to incentivize the development and marketing of diagnostic tests except in rare cases, is simply not justified. Yet the authors assert that these case studies—which, although well done, are truly case studies and are not purported to be representative of the entire field of genetic diagnostic testing—show that “Exclusive licensing practices consistently reduce availability” of genetic diagnostic tests. 
The weakness in this argument, beyond the unsupported generalization noted above, is found in the phrase immediately following this conclusion—“at least as measured by the number of available laboratories offering a test” (emphasis added). Of course, it should come as no surprise—and certainly we did not need a series of case studies to teach us—
that exclusive licensing limits the number of laboratories offering a test and, as the authors further complain, “reduce[s] competition.” In fact, this is true of every product that is patented or exclusively licensed, regardless of the field. The key fact that should concern us, however, is not how many entities provide the test, but whether individuals can reasonably gain access to it. Just because several academic research laboratories may offer a particular test absent an exclusive license to another party, it does not automatically follow that people would have greater access to that test. Indeed, it is just as reasonable to project that, without a single entity having a financial incentive to develop not just the test but also the market for the test, many patients who should be screened (and their doctors) may never learn about the availability and usefulness of the test at all. Exclusive licensees also would seemingly have a much greater incentive to validate and improve the test, and to help obtain coverage for such tests from payors and insurers. As the studies do reveal, lack of insurance coverage can be a major obstacle to patient access to diagnostic tests, particularly for poor women. In addition, concern about reduced competition in and of itself is misplaced; the concern should be whether, in light of the reduced competition, there is public harm in the form of higher prices or inferior products. Again, the case studies do not reveal any significant evidence of either. The Commentary then criticizes BIO for having “chosen not to acknowledge the real problems” that exist with respect to the genetic diagnostics market, and for its “quick and vociferous” opposition to the recommendations of the US Secretary’s Advisory Committee on Genetics, Health & Society (SACGHS). To the contrary, BIO publicly testified before SACGHS on its original, more balanced draft report and praised the work of the committee. 
BIO cautioned the committee that some of the proposals under consideration could undermine the goal of enhancing patient access to genetic tests and urged the committee to partner with BIO in crafting
sensible recommendations that would actually enhance patient access to such tests (e.g., expanded insurance coverage). Instead, the committee rewrote its original draft report in an apparent attempt to support a series of divisive, controversial and highly impractical recommendations to alter patent law and impose one-size-fits-all licensing strategies enforced by the federal government. Not only were these recommendations not supported by the committee’s own review and case studies, but they also chased red herrings such as patenting and exclusive licensing practices, while avoiding more practical approaches that would not only have garnered more consensus among external stakeholders and the committee’s own members (three of whom dissented in a highly unusual maneuver) but also would have been more effective in achieving our shared goals. The authors also chastise BIO for not proposing constructive solutions, such as the development of a ‘gene patent supermarket’ or similar licensing pools that could help facilitate broader licensing of such patents, while maintaining appropriate incentives. In actuality, BIO has been discussing such approaches with our members and with the US National Institutes of Health for several months now—not because there is evidence that DNA patents are impeding the development of more complex or ‘whole’ genomic testing products, but to help avoid any chilling effect that might arise in the future. BIO also has been working with other organizations in developing ideas to address the ‘second opinion’ quandary that may arise when there is a sole provider of an important medical test that drives a particular treatment regimen. The key point, however, is that proposed solutions must be targeted to deal with specific problems, while avoiding unintended and counterproductive consequences for innovation and patient care. 
Finally, the authors criticize BIO for “failing to enforce the established norms” of nonexclusive licensing and for not “criticiz[ing] deviations from them” by our members. Leaving aside the obvious antitrust and other legal and practical concerns about
a trade association seeking to require or ‘enforce’ specific licensing practices by its membership, the authors’ suggestion that nonparties to a specific licensing transaction would have the information necessary to reasonably judge (presumably, after the fact) whether an exclusive agreement is or is not appropriate under the particular circumstances involved is simply not realistic, as much of the relevant information would likely be proprietary. In addition, BIO’s membership is largely made up of therapeutic research and development companies, rather than the type of companies whose business models are the focus of the authors’ critique. In short, BIO never has denied that certain problems exist with respect to access to genetic diagnostic testing; we just disagree as to the causes of such problems and how best to fix them. BIO will continue its efforts to work with other organizations to help improve patient access to genetic testing, but will also continue to oppose—vigorously when necessary—any ill-considered and misguided proposals that would undermine the development of new diagnostics and therapies and do more harm than good for the patients of today and tomorrow. COMPETING FINANCIAL INTERESTS The author declares no competing financial interests.
Tom DiLenge Biotechnology Industry Organization, Washington, DC, USA. e-mail:
[email protected] 1. Carbone, J. et al. Nat. Biotechnol. 28, 784–791 (2010).
Robert Cook-Deegan, Subhashini Chandrasekharan, Misha Angrist, Bhaven Sampat, E Richard Gold, Julia Carbone & Lori Knowles reply: We thank Tom DiLenge of BIO for his thoughtful comments. We agree with many points, but focus here on remaining points of disagreement. First, although we agree there is no evidence of systematic and pervasive harm from patenting and licensing in DNA diagnostics, we reiterate that there is unequivocal evidence of problems in some cases. We agree there may well be a role for patent incentives in DNA testing; we do not believe, however, that this means carte blanche for patent holders. We are particularly wary of exclusive licensing to sole providers of genetic tests unless nonexclusive licensing will fail to bring a product to market. This is decidedly not the case in empirical studies to date. We say this
for three main reasons. First, in instances where no test is available and yet patents are being enforced, as was the case with long-QT testing from 2002 to 2004, there are clearly access problems by any definition. These situations may be rare, and we hope they are, but denying a problem that has historically occurred is not a winning argument. The BIO letter is silent on such problems. Second, it is simply not true that exclusive licensing needs to lead to monopolies. If a particular laboratory does not offer a particular form of service (e.g., prenatal testing), does not have a payment agreement with an insurer or health plan, or has already been paid to do a test, and the patient (or doctor) wants verification, then prudent business practice would suggest sublicensing, a permissive testing policy or some other way to ensure that testing can be done by others. Policies on sublicensing or testing by others are under the control of the patent holder and could be remedied without breaking patents. It is thus puzzling that patent holders have not adopted such policies. Third, although we agree that reducing the number of laboratories offering a test does not necessarily reduce patient access, there is a very consistent pattern revealed in our case studies and in the survey of laboratory directors by Cho et al.1 that we cited: the holder of exclusive patent rights is consistently not first to market with a genetic test. The effect of patents has been solely to reduce competition, not to create new products that would not otherwise exist. Suppression of competitors who have beaten the holder of exclusive rights to market is not what is usually observed with patents. Pharmaceutical firms and instrument companies generally do not enforce patents against universities and research institutions, for example, and yet this is what we find in DNA diagnostics in several cases. In this
respect, diagnostics are unusual compared with other domains where patent exclusivity has a role. We agree that the evidence of harms from exclusive licensing is not systematic, but the evidence of benefit from patents in genetic diagnostics historically is even weaker. Finally, we appreciate that there are indeed limits to BIO’s actions when questions of antitrust would arise in enforcing the existing norms on patenting and licensing genomic inventions. The licensing norms developed by the Organisation for Economic Co-operation and Development2 (Paris), the US National Institutes of Health3 and the ‘Nine Points’ document on university technology licensing4, however, are all pro-competitive, not anti-competitive. If a company is deviating from those norms, therefore, antitrust concerns would not arise; quite the reverse. We don’t suggest that BIO act when antitrust would loom as an issue, but commenting on policies—such as enforcing patents when no test is available to patients—would rarely confront antitrust policy. The main underlying point is that problems with patents and exclusive licensing distinctive to diagnostics can be identified and dealt with, but only if the problems are acknowledged and acted upon. If BIO is turning its attention to these issues, then we will all benefit.

COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.

1. Cho, M.K., Illangasekare, S., Weaver, M.A., Leonard, D.G.B. & Merz, J.F. J. Mol. Diagn. 5, 3–8 (2003).
2. Organisation for Economic Co-operation and Development. Guidelines for the Licensing of Genetic Inventions (OECD, Paris, 2006).
3. National Institutes of Health. Best Practices for the Licensing of Genomic Inventions. Fed. Regist. 70 (no. 68), 18412–18415 (2005).
4. Association of University Technology Managers. In the Public Interest: Nine Points to Consider in Licensing University Technology (AUTM, Deerfield, Illinois, USA, 2007).
Stem cell clinics in the news To the Editor: As highlighted in a News Feature “Trading on hope” published in this journal1, stem cell tourism is a growing and increasingly contentious phenomenon. By ‘stem cell tourism’, we refer to the emerging practice that sees patients travel abroad to receive (largely) unproven stem cell treatments that are generally not approved or available in their home country2. Although precise numbers are unknown, current information suggests that
potentially thousands of patients each year from various countries are travelling around the world to receive stem cell therapies for a wide range of conditions3–5. The stem cell tourism phenomenon is highly controversial. The therapeutic possibilities promised by the clinics involved engage the hopes of often desperate patients and their families, including those who feel there are no other options. These individuals are understandably anxious to have a
Table 1  Demographics of patients undergoing stem cell treatment as reported in news media articles from October 2006 to September 2009

Gender: Male (134; 59.8%); Female (87; 38.8%); Unspecified (3; 1.3%)
Age (adult versus minor): Adult (126; 56.3%); Minors (98; 43.8%)
Country of origin (in descending order): US (84; 37.5%); UK (80; 35.7%); Australia (32; 14.3%); Canada (12; 5.4%); New Zealand (12; 5.4%); Israel (1; 0.5%); Brazil (1; 0.5%)
Five most common conditions: Multiple sclerosis (22; 9.8%); Cerebral palsy (18; 8.0%); Septo-optic dysplasia (15; 6.7%); Optic nerve hypoplasia (13; 5.8%); Unspecified blindness (13; 5.8%)
Three most common treatment destinations: China (100; 44.6%); Germany (17; 7.6%); Mexico (17; 7.6%)
reason to feel optimistic about their medical prospects, and often frustrated with their native medical systems. Patients pay for these treatments personally, and research reveals costs ranging from $5,000 to $39,500 (ref. 4), or an average cost of $21,500 (excluding travel and accommodation)3. These treatments generally have not proceeded through the clinical trial process, and published research reveals little to no evidence of efficacy for the services advertised by these clinics3. There also appears to be an overall lack of transparency regarding the specifics of treatment protocols and an absence of comprehensive posttreatment follow-up. Accordingly, many experts are highly skeptical of the claims made by stem cell clinics and, particularly in light of emerging evidence regarding adverse effects from such treatments6, concerned about potential risks7,8. Other emerging issues include a lack of informed consent because of inadequate information provided to prospective patients4,9, the potential for vulnerable individuals (including children) to be taken advantage of or put at risk10, and the obligation to provide continuing care upon a patient’s return. Not surprisingly, treatment providers are often highly motivated to defend their work and to advocate for the use of novel treatments. This range of conflicting and competing perspectives regarding stem cell tourism serves to create a confusing picture of the field in the news media, especially for patients and their families. Previous research points to two key mechanisms by which news media portrayals are likely to influence audiences. First, through a process of agenda setting, the news media calls attention to specific issues, events or developments11, such as the claims of stem cell clinics or developments in the field. Given high levels of motivation, patients and their families are likely to pay closer attention to stem cell-related news coverage and to actively 1244
seek information about therapies online (Supplementary Discussion 1). Not only do the news media—and newspapers especially—call the public’s attention to medical claims and (unproven) therapies, but news coverage also selectively frames the nature of these claims. ‘Frames’ is the conceptual term for interpretative storylines that emphasize specific dimensions of a complex topic over others, often reducing complexity and uncertainty, and leading audiences to weigh certain considerations over others in reaching judgments and making decisions (Supplementary Discussion 1)12. Given the likely importance of the news media—and newspapers in particular—in setting the public agenda and in shaping public interpretations, we thought it essential to investigate the level of attention to stem cell tourism in the print media and how the emerging industry is characterized. Our analysis also provides information about the individuals accessing these services, where the clinics are located and what services are provided. We searched the Factiva database for newspaper articles about stem cell tourism from Canada, the United States, the United Kingdom, Australia and New Zealand published between October 1, 2006, and September 30, 2009. Our search used ‘stem cell treatment’ or ‘stem cell therapy’ and one or more terms related to overseas travel (e.g., overseas, abroad, China or India; Supplementary Methods). Our data set contained a combined 445 articles across countries and outlets (Supplementary Table 1). Our coding frame was informed by previous work that identified communication issues associated with the stem cell tourism phenomenon, particularly issues relevant to representations of efficacy and risk3,4. The coding frame included patient demographic information, clinic location, treatment details, cost, donation information, discussions of policy, relevant risks, state of the science,
presence of patient testimonials and the tone of the article. Given the subjectivity of news content analysis, we tested 10% of the articles for inter-coder reliability using Cohen’s kappa, which produced a mean score of κ = 0.857 (Supplementary Methods). We found coverage of stem cell tourism in each country. The majority of articles (234, or 52.6%) came from the United Kingdom, followed by the United States with 99 articles (22.2%), Australia with 74 articles (16.6%), New Zealand with 21 articles (4.7%) and Canada with 17 articles (3.8%). These 445 articles discussed 224 different patients and, as a result, we were able to obtain some interesting patient demographic information (Table 1). Of course, these data have various limitations; newspapers represent only one form of media, and our data reflect only individuals whose stories were covered by the outlets captured in our Factiva sample. In total, over 19 different countries were discussed as treatment destinations, supporting the findings of earlier research indicating the global nature of this phenomenon4,5. Destinations in addition to those outlined included India, the Dominican Republic, the Netherlands, Thailand, Russia, Costa Rica, the United States, Portugal, the Ukraine, Israel, the United Kingdom, Argentina, Ecuador, Chile and Brazil. Our results also verified the high financial costs associated with these treatments. The average cost noted was ~$47,315. The lowest was ~$3,500 and the highest (paid for multiple treatments over time) was $399,687. In many cases, articles discussed the intense fundraising efforts made by patients, their families, friends and supporters. In fact, 134 (30%) of the articles provided readers with information on how they could donate funds.
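Cohen’s kappa compares the coders’ observed agreement with the agreement expected if each coder assigned categories independently at their observed rates. As a minimal sketch of the calculation (the tone codes below are invented for illustration and are not data from the study):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical labels on the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of items on which the coders agree
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if each coder labeled independently
    # at his or her observed category rates
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    categories = set(counts_a) | set(counts_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    # Kappa: agreement beyond chance, scaled by the maximum possible
    return (p_o - p_e) / (1 - p_e)

# Hypothetical tone codes for ten doubly coded articles
a = ["pos", "pos", "neu", "neg", "pos", "neu", "neg", "pos", "neu", "pos"]
b = ["pos", "pos", "neu", "neg", "pos", "pos", "neg", "pos", "neu", "pos"]
print(round(cohens_kappa(a, b), 3))  # → 0.833
```

A mean kappa of 0.857, as reported, indicates agreement well beyond chance; on conventional benchmarks such as Landis and Koch’s, values above 0.8 are read as almost perfect agreement.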
Given these treatments’ unproven status and the inconsistent information about them that is generally available to the public, the involvement of the media in helping raise funds for these efforts adds another potentially concerning element to the dynamic of stem cell tourism. In addition, the high prevalence of minor patients in our data set seems particularly worrisome, especially when one considers the concerns outlined above, including the potential for adverse events and the lack of evidence regarding safety or efficacy. Examining the articles’ content for discussions of policy (e.g., legality of the treatment or regulatory environment), risk (e.g., tumors or rejection), evidence of treatment efficacy (e.g., cautions from stem cell scientists or anecdotal reports from
volume 28 number 12 DECEMBER 2010 nature biotechnology
of the stem cell treatment being addressed (118 articles, ~25%). In addition, the sources of this information varied widely (e.g., stem cell scientists and experts, clinics providing the therapy, patient groups and family members). The evidence or information regarding efficacy ranged from being supportive of the treatment to cautioning against it (Supplementary Discussion 2). It is important to note that in many cases, the discussions of efficacy did not constitute a particularly important aspect of the article as a whole, nor were they necessarily predictive of its overall tone, discussed below. Interestingly, 227 articles (51%) included patient testimonials from the patient, friends or relatives, other patients with similar conditions, the clinic’s previous patients or anecdotal reports (e.g., from media sources). Considering the prominent role patient testimonials play on many clinic websites, their significant presence in media reports further highlights the influence this method of information dissemination and promotion may have on the growth of this market. Finally, and perhaps most important, the tone of the discussions is intriguing (Fig. 2). Our coders determined whether, overall, each article was positive, neutral or negative towards stem cell tourism. The results show that over time, the tone of articles has become increasingly positive (Supplementary Table 2). One exception is the period between October and December 2008 (Fig. 2), which may be linked with the release of the International Society for Stem Cell Research (ISSCR)’s Guidelines for the Clinical Translation of Stem Cells in December 2008 (ref. 7). Articles coded as positive presented stem cell therapy as a hopeful and/or successful alternative for patients, while noting few or no limitations or possible risks of the treatment.
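The quarterly tone breakdown behind Figure 2 amounts to a grouped tally of coded articles. A minimal sketch, using invented example records rather than the study’s coded data:

```python
from collections import Counter

# Hypothetical (quarter, tone) records for a handful of articles.
articles = [
    ("2008-Q4", "negative"), ("2008-Q4", "neutral"), ("2008-Q4", "negative"),
    ("2009-Q3", "positive"), ("2009-Q3", "positive"), ("2009-Q3", "neutral"),
]

# Tally tones within each quarter, then report the positive share.
by_quarter = {}
for quarter, tone in articles:
    by_quarter.setdefault(quarter, Counter())[tone] += 1

for quarter, tones in sorted(by_quarter.items()):
    share = tones["positive"] / sum(tones.values())
    print(quarter, f"{share:.0%} positive")
```

Plotting these per-quarter shares over the October 2006 to September 2009 window reproduces the kind of trend line shown in Figure 2.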
Given that the news media—and newspapers in particular—are the dominant source of information about stem cell tourism for the public, this apparent
Figure 1 Percentage of news media articles originating from each country that focus on policy, risk, evidence and the procedures associated with stem cell treatments.
trend to frame stem cell tourism in terms of promise and hope with limited emphasis on uncertainty may have important implications for public evaluations of clinic claims, especially among patients and their families. On the whole, these data suggest that print media portrayals of stem cell tourism are largely and increasingly positive in nature. Furthermore, their focus is primarily on the individual patient, their hopes and specific treatment plans or approach, rather than on the potential risks associated with these unproven treatments, current scientific and clinical limitations, or the various policy issues implicated by this emerging phenomenon. In many respects, these results are unsurprising; the circumstances surrounding an individual’s pursuit of stem cell tourism often make for a powerful personal interest story. It is impossible not to empathize with the plights of those desperately seeking treatment for themselves or their loved ones, and important to be wary of overly broad generalizations of a field that appears to encompass an incredibly broad range of potential treatments and treatment providers. Nonetheless, there are serious concerns associated with the rise of stem cell tourism, including physical and financial risks, lack of transparency and appropriate review procedures, exploitation of vulnerable patients (including minors) and various policy issues, which must not be minimized. Although some of these issues are making their way into the print media, it is clear that continued efforts are necessary to improve the balance in media reporting of this topic. When medical experts and organizations do actively seek to engage journalists and the public on the uncertainty of stem cell tourism and the need for regulation, as was the case in 2008 with the ISSCR’s guidelines, our findings do point to an influence on coverage, with the effort balancing the otherwise overwhelmingly positive portrayal in the press. 
This balance is an important element in promoting a knowledgeable public, necessary both for facilitating
the clinics) and procedural information (that is, types of stem cells and method of application) revealed interesting jurisdictional differences in the relative prominence of these topics (Fig. 1). Policy issues arose in just 21% of the articles, the majority of which raised them only in passing (e.g., “it’s so experimental it can’t legally be done in the United States because it isn’t approved by the Food and Drug Administration,” which appeared in an article entitled “Mailia’s Miracle” in the 24 August 2009 issue of the Tri-City Herald, Kennewick, WA, USA) as opposed to including nuanced discussion of policy issues or debates. An even smaller proportion of articles (13%) mentioned the concept of risk. Of those that did, the majority (70.5%) did not discuss any specific risks; some of the risks that were identified included infection, tumors, inflammation, immune rejection and inadequate screening (Supplementary Discussion 2). This relative lack of focus in the print media on policy issues or risks associated with stem cell tourism as compared with emerging commentary in the academic literature on the topic9,13 reveals an apparent disconnect between these two realms. It also raises the question of whether the general public (which includes prospective stem cell tourists) is receiving sufficient information to reach effective decisions and judgments on the issue. In contrast to the comparably lower prevalence of discussions regarding policy and risk, 225 articles (50.3%) contained information about the stem cells being used in the treatment, the application procedures or both. The following types of stem cells were referenced: umbilical cord (107), embryonic (27), bone marrow (27), adult (24), stem cells from pelvis (2), stem cells from hip (2), fetal (2), stem cells from fat tissue (1) and a combination of bone marrow and umbilical cord blood (5). In total, 20 different application procedures were discussed. 
The five most common included the following: injection (location not disclosed) (77), injection into spinal cord and/or bloodstream (including intravenous injection) (69), injection into brain (8), intravenous injection through hairline towards optic nerve (8) and catheter/injection into heart (8). The average length of treatment noted was 4–5 weeks, with a range of only a few days up to 3 months. It is important to note that these simply reflect media references. It remains unclear what, if any, kind of stem cells were injected in the procedures used at clinics. It is also surprising that few articles contained information regarding the efficacy
© 2010 Nature America, Inc. All rights reserved.
correspondence
Figure 2 Overall ‘slant’ of different news media reports that covered stem cell treatments and appeared from October 2006 to September 2009.
informed debate regarding stem cell tourism and for protecting potentially vulnerable individuals.

Note: Supplementary information is available on the Nature Biotechnology website.
Acknowledgments
We would like to thank Canada’s Stem Cell Network for funding and the University of Alberta’s Health Law Institute for administrative support. We also gratefully acknowledge C. Scott, J. McCormick, T. Bubela and C. Murdoch for their helpful suggestions regarding data collection. Finally, we thank C. Toole for her research support, and G. Barr and T. Adido for their assistance in coding the data.

AUTHOR CONTRIBUTIONS
All authors contributed to this work. T.C., C.R. and A.Z. conceived the research concept. C.R. collected the data. C.R. and A.Z. coordinated the coding of the data. A.Z. and T.C. prepared the draft manuscript and M.N. helped with interpretation and with theoretical analysis. All authors discussed the results and implications and commented on the manuscript at all stages.

COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.
Amy Zarzeczny1, Christen Rachul1, Matthew Nisbet2 & Timothy Caulfield1,3
1Health Law Institute, Law Centre, University of
Alberta, Edmonton, Alberta, Canada. 2School of Communication, American University, Washington, DC, USA. 3Faculty of Law and School of Public Health, Law Centre, University of Alberta, Edmonton, Alberta, Canada. e-mail:
[email protected]
1. Qiu, J. Nat. Biotechnol. 27, 790–792 (2009).
2. Lindvall, O. & Hyun, I. Science 324, 1664–1665 (2009).
3. Lau, D. et al. Cell Stem Cell 3, 591–594 (2008).
4. Regenberg, A., Hutchinson, L., Schanker, B. & Matthews, D. Stem Cells 27, 2312–2319 (2009).
5. Ryan, K., Sanders, A., Wang, D. & Levine, A. Regen. Med. 5, 27–33 (2010).
6. Amariglio, N. et al. PLoS Med. 6, e1000029 (2009).
7. http://www.isscr.org/clinical_trans/pdfs/ISSCRGLClinicalTrans.pdf
8. MacReady, N. Lancet Oncol. 10, 317–318 (2009).
9. Kiatpongsan, S. & Sipp, D. Nat. Rep. Stem Cells, published online 3 December 2008, doi:10.1038/stemcells.2008.151.
10. Zarzeczny, A. & Caulfield, T. Am. J. Bioeth. 10, 1–13 (2010).
11. Iyengar, S. & Kinder, D. News that Matters: Television and American Public Opinion (University of Chicago Press, Chicago, 1987).
12. Nisbet, M.C. & Mooney, C. Science 316, 56 (2007).
13. Creasy, G. & Scott, C. Nat. Biotechnol. 27, 21–22 (2009).
Tracking and assessing the rise of state-funded stem cell research To the Editor: The editorial in your October issue1 highlights the legal challenges to new guidelines issued by the US National Institutes of Health (NIH) in July 2009 for the federal funding of human embryonic stem cell (hESC) research. In the eight-year period preceding these most recent NIH guidelines, only a small number of cell lines could be studied with federal funds. During this time, six states—California, Connecticut, Illinois, Maryland, New Jersey and New York—took on a role typically played by the NIH and created funding programs specifically designed to support stem cell research, including hESC research. These are not the first state programs to fund scientific research but their commitment to basic research is atypical, as most state science and technology programs have focused on science closer to commercialization2. Although the state stem cell programs differ, they each share at least two goals: advancing promising science, including research not eligible for federal funding during the Bush Administration, and returning economic benefits to their state. In this article, we report an initial attempt to track and assess the impact of
these state funding programs. Existing work on state stem cell policy has focused on identifying policy differences between various jurisdictions3,4, assessing the impact of state decisions to support or restrict hESC science5–7 and examining the role of states in governing controversial science8,9. The analysis reported here extends this literature through the use of a novel data set of the grants these states have awarded. These data provide insight into how states have prioritized their funding, including the extent to which they have supported hESC research generally and hESC research not eligible for federal funding during the Bush Administration more specifically, as well as the extent to which these states have drawn new scientists into the field. The underlying data have been publicly released on a new website (http://www.stemcellstates.net) designed to facilitate additional analysis of state-funded stem cell science and improve public awareness of these programs. Given ongoing legal uncertainties surrounding federal funding of hESC research and the likelihood that voters, at least in California, will be asked to approve additional state stem cell funding in the future, understanding and evaluating the
effects of state-funded stem cell research is both timely and useful. The database that forms the basis for the analysis described here contains the title, principal investigator, institution, abstract and amount for each grant awarded by the agency overseeing stem cell research funding in these six states (Supplementary Methods). In all, between December 2005, when New Jersey awarded the first state stem cell grants, and the end of 2009, the six stem cell states awarded nearly 750 grants totaling just over $1.25 billion. The scale of these programs varies substantially, ranging from the roughly $15 million awarded by Illinois and New Jersey to the $1.02 billion awarded by California. On a per capita basis, funding awarded through the end of 2009 ranges from just over $1 in Illinois to nearly $28 in California (Table 1). States funding stem cell research can choose to support several different activities, ranging from investigator-initiated research grants to new facilities to workforce development. To investigate how states prioritized these various types of funding, each grant was classified by its primary purpose (Supplementary Methods). Research grants and support for scientific infrastructure were the two largest categories, accounting for more than 90% of all state stem cell funding (Table 1). The infrastructure category was dominated by the $271 million California awarded for the construction of 12 major stem cell research facilities, although several other states also dedicated a substantial portion of their funding to infrastructure, such as shared equipment or core laboratories. In contrast to supporting basic investigator-initiated research, spending money on infrastructure is a classic state economic development approach, but, in these cases, spending was motivated, at least in part, by the need to create separate laboratories to facilitate research on unapproved hESC lines.
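The per-capita figures in Table 1 can be sanity-checked by dividing funding awarded by state population. The sketch below uses approximate 2009 Census population estimates as assumptions, not values from the article:

```python
# Total funding awarded through the end of 2009, per Table 1 ($ millions
# converted to dollars), for the two extremes noted in the text.
awards = {"California": 1_024_000_000, "Illinois": 15_000_000}

# Approximate 2009 Census Bureau population estimates (assumed values).
population = {"California": 36_961_664, "Illinois": 12_910_409}

for state in awards:
    per_capita = awards[state] / population[state]
    print(f"{state}: ${per_capita:.2f} per resident")
```

The results land near $28 for California and just over $1 for Illinois, matching the range described in the text.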
The restrictions on federal funding for hESC research instituted by former President George W. Bush were an important rationale behind the adoption of most state stem cell programs, yet it is not clear to what extent state programs focused on hESC research generally or hESC research not eligible for federal funding more specifically. To address these questions, each research grant awarded through the end of 2009 was analyzed to assess if funded research used hESCs and, if applicable, appeared ineligible for federal funding under the Bush Administration rules (Supplementary Methods). The percentage of grants that supported hESC
Table 1 The scale and prioritization of state stem cell funding programs^a
(columns: California | Connecticut | Illinois | Maryland | New Jersey | New York)

Grants and funding
Year first grants awarded: 2006 | 2006 | 2006 | 2007 | 2005 | 2008
Total funding pledged/time period: $3 billion/10 years | $100 million/10 years | N/A | N/A | N/A | $600 million/11 years
Number of grants awarded: 329 | 69 | 17 | 140 | 35 | 158
Funding awarded ($ millions): 1,024 | 40 | 15 | 54 | 15 | 121
Funding per capita^b ($): 28 | 11 | 1 | 10 | 2 | 6

Research prioritization^c
Percentage of funding for research: 58% | 76% | 100% | 93% | 64% | 61%
Percentage of funding for infrastructure: 31% | 24% | 0% | 0% | 36% | 34%
Percentage of grants for hESC research: 75% | 97% | 35% | 42% | 21% | 21%
Percentage of grants clearly not NIH eligible: 18% | 16% | 12% | 3% | 6% | 0%

NIH funding status of state grant recipients^d
Percentage of state PIs without NIH stem cell funding: 42% | 61% | 65% | 71% | 61% | 49%
Percentage of state hESC PIs without NIH hESC funding: 77% | 91% | 67% | 79% | 100% | 66%

^a Includes grants awarded through the end of 2009. ^b Per capita funding based on state population from US Census Bureau 2009 Population Estimates. ^c hESC prioritization analysis includes only grants with a primary purpose of research. ^d NIH funding was examined from FY2005 to present. PIs, principal investigators.
research varied substantially among these states (Table 1). Large majorities of the research grants awarded in Connecticut and California supported studies involving hESCs, whereas only a minority of grants supported hESC research in the other states. These disparities likely reflect differences in the types of stem cell scientists present in these states as well as priorities of the various state funding bodies. Only a subset of grants for hESC research supported science that was clearly ineligible for NIH funding during the Bush Administration. California and Connecticut focused the most on this sort of research—which typically involved the derivation of new hESC lines or the use of newer unapproved cell lines—but even in these states fewer than a fifth of grants went to clearly ineligible research. Many scientists indicated plans to use existing hESC lines but did not specify which lines they planned to use. Given evidence that a handful of approved cell lines account for a large proportion of the hESC lines actually distributed to scientists and an even larger share of published literature10, most of these projects probably used approved hESC lines. In some cases, however, scientists may have chosen to use ineligible cell lines but not clearly indicated these plans. Thus, the share of grants reported here as clearly ineligible for NIH funding should be viewed as a lower bound on the amount of research each state funded that was ineligible for federal funding. Several factors could explain the relatively small share of grants that went toward clearly ineligible research. Some scientists who wished to pursue this research may have been
unable to access the raw materials or acquire the intellectual property rights required to do so. Alternatively, these findings could simply reflect scientific interest. The discovery of induced pluripotent stem cells11 may, for instance, have reduced scientific interest in the derivation of new hESC lines. Finally, these findings may reflect a preference on the part of scientists to use well-established and well-studied hESC lines. This last explanation may be particularly relevant for new scientists entering the field of hESC research, as using recognized cell lines may give their initial research efforts greater credibility. In addition to supporting research not eligible for federal funding, focused state programs might serve to draw new scientists into the field of stem cell research. To evaluate this potential impact, the recent NIH funding portfolio of each scientist receiving a state stem cell grant with a primary purpose of research was examined (Supplementary Methods). Although most scientists had received NIH funding, a substantial number (ranging from 42% in California to 71% in Maryland) had not received NIH funding for stem cell research (Table 1). Similar, but more pronounced, results are observed when the NIH funding portfolio of scientists receiving state funding for hESC research is examined, as only a small minority of these scientists also had NIH grants supporting hESC research. Given the importance of NIH funding for biomedical research in the United States, these results suggest that the existence of state funding programs for stem cell research has drawn many new scientists into the field of stem cell research, or at least encouraged scientists to consider how stem
cell research could complement their existing research programs. These data also permit a more nuanced comparison between state stem cell funding and NIH stem cell funding than has previously been available (Fig. 1). Total state funding for all types of stem cell research has risen rapidly since grants were first awarded in 2005, but states still spend less than half of what the NIH spends each year on stem cell research. The situation is different for hESC research, as state funding for hESC research grants first exceeded comparable NIH funding in 2007 and equaled or exceeded it in 2008 and 2009. Considered together, these data and analyses indicate that state funding for stem cell research has grown into a substantial enterprise that has provided funding on a scale comparable to the NIH. Although states vary in the degree to which they have focused on hESC research, as a whole, state funding for hESC research has been substantial, exceeding, in cumulative terms, NIH funding for this research between 2005 and 2009. Most state hESC funding appears to have supported research also eligible for federal funding during the Bush Administration. This finding is surprising, given the explicit intent of several state programs to preferentially support science not eligible for federal funding, but likely reflects the nature of the grant proposals state agencies received, particularly given the number of grants states awarded to scientists relatively new to the field of hESC research. In the light of the recent change in federal stem cell policy and the ongoing economic downturn, the future of state
Figure 1 Comparing state and NIH stem cell funding. (a) Total amount of all NIH stem cell grants and all stem cell grants awarded by the six states. (b) Total amount of all NIH and state hESC research grants. Only grants with a primary purpose of research are included. State funding is by calendar year. NIH funding is by fiscal year.
stem cell programs, as well as similar state programs supporting other areas of science, is uncertain. The analysis here suggests that state stem cell funding programs are sufficiently large and established that simply ending the programs, at least in the absence of substantial investment in the field by other funding sources, could have deleterious effects. Such action would fail to capitalize on the initial efforts of scientists who have been drawn to the field of stem cell research by state programs and leave many stem cell scientists suddenly searching for funding to continue their research. Large-scale state funding for basic research is a relatively new phenomenon, and many questions remain about the impact of these programs on the development of scientific fields and the careers of scientists. The influence of state funding programs on the distribution of research publications, the acquisition of future external funding, the creation of new companies and the translation of basic research into medical practice, for instance, are important unanswered questions. Similarly, comparing state funding programs with federal funding programs as well as foundations could offer new insight into the relative priorities of different funding bodies and the extent to which their funding portfolios overlap or are distinct. We hope the analysis presented here and the public release of the underlying database will inspire additional analysis of state science funding programs generally and state-funded stem cell science in particular.

Note: Supplementary information is available on the Nature Biotechnology website.

Acknowledgements
The authors gratefully acknowledge financial support from the Roadmap for an Entrepreneurial Economy Program, funded by the Kauffman
Foundation and the Georgia Research Alliance, and Georgia Tech. They thank J. Walsh at Georgia Tech for helpful comments on an earlier version of this manuscript. They also appreciate the assistance they received with data collection from officials in various state stem cell agencies. A.D.L. would also like to thank A. Jakimo, whose comment at a meeting of the Interstate Alliance on Stem Cell Research inspired collection of these data.
COMPETING FINANCIAL INTERESTS The authors declare no competing financial interests.
Ruchir N Karmali1, Natalie M Jones1 & Aaron D Levine1,2 1School of Public Policy, Georgia Institute of Technology, Atlanta, Georgia, USA. 2Institute of Bioengineering & Bioscience, Georgia Institute of Technology, Atlanta, Georgia, USA. e-mail:
[email protected]
1. Anonymous. Nat. Biotechnol. 28, 987 (2010).
2. Plosila, W.H. Econ. Dev. Q. 18, 113–126 (2004).
3. Stayn, S. BNA Med. Law Pol. Rep. 5, 718–725 (2006).
4. Lomax, G. & Stayn, S. BNA Med. Law Pol. Rep. 7, 695–698 (2008).
5. Levine, A.D. Public Adm. Rev. 68, 681–694 (2008).
6. Levine, A.D. Nat. Biotechnol. 24, 865–866 (2006).
7. McCormick, J.B., Owen-Smith, J. & Scott, C.T. Cell Stem Cell 4, 107–110 (2009).
8. Fossett, J.W., Ouellette, A.R., Philpott, S., Magnus, D. & McGee, G. Hastings Cent. Rep. 37, 24–35 (2007).
9. Mintrom, M. Publius 39, 606–631 (2009).
10. Scott, C.T., McCormick, J.B. & Owen-Smith, J. Nat. Biotechnol. 27, 696–697 (2009).
11. Takahashi, K. & Yamanaka, S. Cell 126, 663–676 (2006).
Towards a knowledge-based Human Protein Atlas
To the Editor: We report on the launch of version 7 of the Human Protein Atlas with subcellular localization data and expression data for all major human tissues and organs. A milestone has been achieved with the inclusion of expression data for >50% of the human protein-coding genes. The main new feature of the release is a move towards a knowledge-based portal, including an annotated protein expression feature for protein targets analyzed with two or more antibodies, and the establishment of the main subcellular localization of protein targets. In 2005, the first version of the Human Protein Atlas (http://www.proteinatlas.org/) was released with protein profile data based on immunohistochemistry on tissue microarrays covering 48 different human tissues and organs, including kidney, liver, heart, brain and pancreas1. The first version included data from 718 antibodies corresponding to 650 human protein-coding genes. High-resolution images were published along with annotation of the presence or absence of a particular protein target in all represented tissues. The 2005 Human Protein Atlas also contained information regarding protein profiles from 20 different types of human cancer, including breast, colorectal, lung and prostate cancer. The data in the portal were made freely available to both academia and industry without restrictions or password protection. In 2007, the portal was extended to also include subcellular profiling data2 using immunofluorescence-based confocal microscopy in three human cancer cell lines of different (glial, mesenchymal and epithelial) origin. More data have been added to the portal every year since the first release3, and version 6, launched in March 2010, contained 11,274 antibodies corresponding to 8,489 protein-coding genes. This entire effort depends heavily on the availability of good-quality antibodies, and recently a community-based portal, Antibodypedia (http://www.antibodypedia.org/), was launched to allow antibodies from different providers to be listed and compared4,5, although the main source of information so far is the providers’ own validation data rather than independent third-party users. At present, Antibodypedia contains close to 100,000 antibodies, corresponding to >70% of the protein-coding genes in humans. An important objective has now been reached with the inclusion of 10,118 protein-coding genes corresponding to >50% of the 19,559 human entries as defined by UniProt, including only entries with evidence at protein
volume 28 number 12 DECEMBER 2010 nature biotechnology
correspondence
[Figure 1, panels a and b: bar charts of funding awarded ($ millions) per year, 2005–2009, comparing NIH grants with state grants.]
© 2010 Nature America, Inc. All rights reserved.
[Figure 1, panels a–c: numbers of genes with no antibody, a single antibody or multiple antibodies, shown per chromosome (1–22, X, Y) and for selected protein classes (kinases, CD markers, transporters, peptidases, GPCRs excluding olfactory receptors, transcription factors, SH2- and SH3-domain-containing proteins).]
or transcript level and proteins inferred from homology6 (Fig. 1). The chromosomal coverage of protein-coding genes is shown in Figure 1a and the status for a selection of important protein classes is reported in Figure 1b. Almost 80% of the human kinases and Src-homology 2 domain–containing proteins and >50% of the transcription factors have protein profiling data in the atlas. We introduce the concept of annotated protein expression for paired antibodies, in which two or more independent antibodies are used to validate each other’s staining patterns. The immunohistochemical staining in each tissue or organ by the independent antibodies is compared, and a new annotated protein expression is manually curated for each cell type in each tissue or organ. A reliability score is generated as an estimation of the degree of knowledge-based certainty of the reported expression profile. The reliability score is based on the similarity of the staining for the different antibodies, but also takes into account available information from the literature, bioinformatics predictions and additional experimental evidence, such as western blots, transcript profiling and/or small interfering RNA knockdowns (Supplementary Notes; http://www.proteinatlas.org/about/quality+scoring). Approximately 2,000 protein-coding genes have annotated protein expression patterns in this launch, and >75% of these have been scored with a high or medium reliability score (Fig. 1c). It is noteworthy that, at present, the majority of the proteins in the atlas have been analyzed with only a single antibody, and thus these genes are reported simply as antibody staining. The long-term objective of the Human Protein Atlas project is to generate at least two antibodies for all human protein-coding genes to allow knowledge-based annotated protein expression data for the complete human proteome. The new portal contains a summary page for every human gene (Fig. 2), providing an easy entry point to navigate the detailed protein expression data. The summary page also serves as a convenient entry page from other related databases with a similar gene-centric format. The results from the curation and annotation of the staining patterns from one or several antibodies to the same protein target are summarized, with links to detail pages, including the original images. A new feature is a section for subcellular localization, in which the annotated knowledge base of the subcellular distribution reports the main and any additional subcellular locations for each protein target in the analyzed cells. Because the location of human proteins is known only
[Figure 1, panel c pie chart: reliability scores high 42%, medium 36%, low 17%, very low 5%. Panel d: fraction of cell types (%) with weak, moderate or strong staining, plotted per gene.]
Figure 1 Overview of the data in the Human Protein Atlas version 7.0. (a,b) The total coverage in terms of number of genes analyzed with single or multiple antibodies for each of the human chromosomes (a) and a selection of protein classes3 (b). (c) Reliability scores for the ‘annotated protein expression’ are given on a four-grade scale, ranging from ‘very low’ to ‘high’. (d) The percentage of cell types with the annotated expression levels weak, moderate and strong for each of the 2,000 genes with ‘annotated protein expression’. The genes are arranged according to their total abundance in the analyzed cell types, with genes detected in only a single cell type at the far left and genes detected in all cell types at the far right. GPCRs, G protein–coupled receptors. SH2, Src-homology 2 domain.
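The panel d summary described in the caption above, ordering genes by how broadly and strongly they are detected across the analyzed cell types, can be sketched roughly as follows. The gene symbols, staining calls and the simple abundance weighting are invented for illustration; this is not the Atlas’s actual procedure.

```python
# Sketch: per-gene fractions of cell types at each staining level, then an
# ordering from least to most broadly detected (as in Figure 1d).
# All gene names and staining calls below are hypothetical.
from collections import Counter

LEVELS = ("weak", "moderate", "strong")

def level_fractions(calls):
    """Fraction of analyzed cell types at each detected level ('negative' excluded)."""
    n = len(calls)
    counts = Counter(c for c in calls if c != "negative")
    return {level: counts[level] / n for level in LEVELS}

# One list of per-cell-type calls per gene (toy data).
profiles = {
    "GENE_A": ["strong", "negative", "negative", "negative"],  # single cell type
    "GENE_B": ["weak", "moderate", "negative", "weak"],
    "GENE_C": ["moderate", "strong", "strong", "moderate"],    # near-ubiquitous
}

# Crude overall-abundance score: level fractions weighted 1/2/3.
WEIGHTS = {"weak": 1, "moderate": 2, "strong": 3}

def abundance(gene):
    return sum(WEIGHTS[lvl] * frac for lvl, frac in level_fractions(profiles[gene]).items())

ordered = sorted(profiles, key=abundance)
print(ordered)  # ['GENE_A', 'GENE_B', 'GENE_C']
```

Genes detected in a single cell type sort to the front and ubiquitously detected genes to the back, mirroring the left-to-right arrangement of panel d.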
for a small fraction of the human genes, these subcellular localization data are a valuable resource for defining the protein content of the various compartments of the human cell and can thus constitute a starting point for further in-depth functional studies. A new organ view has been designed to allow cell types of similar origin to be easily compared. The tissues and organs have been divided into 12 functional classes, including central nervous system (CNS), hematopoietic, cardiovascular and female and male reproductive tissues. In this release, a total of 66 normal cell types from 46 tissues and organs have been scored. As an example, in Figure 2, we show part of the protein profiles for the human estrogen receptor 1 (ESR1), including the results from antibody staining of three independent antibodies and
the resulting annotated protein expression. The annotated protein expression in the various cell types is obtained by a knowledge-based process taking into account staining from at least two antibodies, the literature and additional experimental data. For ESR1 (Fig. 2), one of the antibodies shows weak staining in tissues such as those of the prostate and urinary bladder, most likely due to nonspecific staining, but the combined data from the three antibodies suggest exclusive expression in breast and female reproductive tissues. It is important to point out that this annotated protein expression should be considered the best estimate of the true expression based on the experimental data available, but that additional data and input from the scientific community may lead to subsequent revisions
Figure 2 The Human Protein Atlas web portal. The Human Protein Atlas summary view for the human estrogen receptor 1 (ESR1), displaying expression data for the target protein in normal and diseased tissues. To the right, a detailed organ view of the expression pattern of the ESR1 in normal human tissues with the cell types grouped into functional classes, here focused on the female and male tissues and the urinary tract. Three different antibodies have been used to generate the staining patterns, from which an annotated protein expression has been retrieved. The blue color scale represents annotated protein expression, and the red color scale represents relative antibody staining. The intensity of the colors indicates the level of expression and/or staining.
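The curation step described in the caption, deriving one annotated protein expression profile from several independent antibodies, can be illustrated with a minimal sketch. The antibody identifiers, staining calls, median-consensus rule and agreement thresholds below are all assumptions made for illustration; the Atlas’s actual scoring algorithm also weighs literature and other experimental evidence.

```python
# Illustrative sketch (not the Atlas's algorithm): combine per-antibody staining
# annotations for one protein target into a consensus call per cell type, plus a
# crude reliability grade based on inter-antibody agreement.

LEVELS = ["negative", "weak", "moderate", "strong"]

def consensus_call(calls):
    """Median staining level across antibodies for one cell type."""
    ranks = sorted(LEVELS.index(c) for c in calls)
    return LEVELS[ranks[len(ranks) // 2]]

def reliability(per_antibody):
    """Bin the fraction of cell types on which all antibodies agree
    into a four-grade scale (thresholds are hypothetical)."""
    cell_types = per_antibody[next(iter(per_antibody))].keys()
    agree = sum(
        1 for ct in cell_types
        if len({calls[ct] for calls in per_antibody.values()}) == 1
    )
    frac = agree / len(cell_types)
    if frac >= 0.9:
        return "high"
    if frac >= 0.6:
        return "medium"
    if frac >= 0.3:
        return "low"
    return "very low"

# Toy data in the spirit of the ESR1 example: one antibody shows a likely
# nonspecific weak staining in prostate. Antibody IDs are invented.
staining = {
    "AB_1": {"breast": "strong", "ovary": "moderate", "prostate": "negative"},
    "AB_2": {"breast": "strong", "ovary": "moderate", "prostate": "weak"},
    "AB_3": {"breast": "strong", "ovary": "moderate", "prostate": "negative"},
}

annotated = {
    ct: consensus_call([calls[ct] for calls in staining.values()])
    for ct in staining["AB_1"]
}
print(annotated)              # {'breast': 'strong', 'ovary': 'moderate', 'prostate': 'negative'}
print(reliability(staining))  # medium (2 of 3 cell types agree fully)
```

The median consensus discounts the outlier antibody’s weak prostate staining, and the agreement-based grade drops when the antibodies disagree, in the spirit of the reliability score described in the text.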
of the interpretation of protein expression in individual cells, tissues or organs. A new cancer view has been designed, allowing an easy overview of the results for all antibodies towards a particular protein target in tumor tissues from 216 patients representing 20 different types of cancer (Fig. 2). This overview also contains the results for the corresponding normal tissues for each cancer form. An advanced search function has been created to allow complex queries involving both normal and cancer tissues, including subcellular localization, validation results for the antibody and/or target protein classes. An additional feature is the option to customize the display of the results page by adding or deleting columns based on interest. An important objective is to allow the integration of this database with other ’omics databases covering genomics7, gene variation8, transcript profiling9, proteomics10 or metabolomics11. All the expression data and the antibody staining data are therefore available for download to facilitate comparative expression studies and systems biology approaches. In the near future, we also plan to include additional experimental data from mass spectrometry–based proteomics12 and transcriptome profiling using next-generation RNA-Seq13. An issue with the new gene-centric design is the presence of protein isoforms, including proteolytic variants, posttranslational modifications and splice variants. Although the antigen sequence is reported for most antibodies on the
detailed antibody validation page, and thus makes it possible to determine whether a particular antibody should recognize all splice variants, the annotated protein expression is based on a consensus model in which a representative protein from the gene is reported. In the future, it might be relevant to move toward separate annotated protein expression profiles for each splice variant or other major alternative isoforms of a particular protein target. The annotated protein expression profiles allow a refinement of the global analysis of tissue specificity recently reported14, in which the staining of antibodies corresponding to one-third of the protein-coding genes suggested that a large portion of human proteins are present across tissues in a relatively ubiquitous manner. An analysis based on the 2,000 protein targets with annotated protein expression profiles can now be performed, and the result suggests a similar profile, with a large subset of proteins detected across various cell types (Fig. 1d). In conclusion, we here report on a new version of the Human Protein Atlas, in which we have moved towards a knowledge-based portal with gene-centric expression profiles based on the annotation of several antibodies towards the same protein target. The Human Protein Atlas effort described here fits well with recent suggestions to initiate a gene-centric Human Proteome Project to map and characterize a representative protein from every protein-coding gene of humans15,16. In
this manner, the antibody-based portal can be complemented with other efforts, such as mass spectrometry–based proteomics and gene fusion technologies, to provide a knowledge base for the human proteins, including isoforms, subcellular localization, tissue profiles and interaction networks. Because all the data on the Human Protein Atlas are publicly available, the information can be integrated into other databases. The availability of the annotated expression patterns opens up the possibility of a community-based dialog, in which researchers with specialized knowledge about particular protein targets provide input.
Note: Supplementary information is available on the Nature Biotechnology website.
Acknowledgments
The authors would like to acknowledge the entire staff of the Human Protein Atlas project. This work was supported by grants from the Knut and Alice Wallenberg Foundation and the EU 7th framework program PROSPECTS.
COMPETING FINANCIAL INTERESTS The authors declare no competing financial interests.
Mathias Uhlen1,2, Per Oksvold1, Linn Fagerberg1, Emma Lundberg2, Kalle Jonasson1, Mattias Forsberg1, Martin Zwahlen1, Caroline Kampf3, Kenneth Wester3, Sophia Hober1, Henrik Wernerus1, Lisa Björling1 & Fredrik Ponten3 1School of Biotechnology, AlbaNova University Center, Royal Institute of Technology (KTH), Stockholm, Sweden. 2Science for Life Laboratory, Royal Institute of Technology (KTH), Stockholm, Sweden. 3Department of Genetics and Pathology, Rudbeck Laboratory, Uppsala University, Uppsala, Sweden.
1. Uhlen, M. et al. Mol. Cell Proteomics 4, 1920–1932 (2005).
2. Barbe, L. et al. Mol. Cell Proteomics 7, 499–508 (2008).
3. Berglund, L. et al. Mol. Cell Proteomics 7, 2019–2027 (2008).
4. Bjorling, E. & Uhlen, M. Mol. Cell Proteomics 7, 2028–2037 (2008).
5. Kiermer, V. Nat. Methods 5, 860 (2008).
6. The UniProt Consortium. Nucleic Acids Res. 38, D142–D148 (2010).
7. Flicek, P. et al. Nucleic Acids Res. 38, D557–D562 (2010).
8. The International HapMap Project. Nature 426, 789–796 (2003).
9. Lukk, M. et al. Nat. Biotechnol. 28, 322–324 (2010).
10. Mathivanan, S. et al. Nat. Biotechnol. 26, 164–167 (2008).
11. Wishart, D.S. et al. Nucleic Acids Res. 35, D521–D526 (2007).
12. Ong, S.E. et al. Mol. Cell Proteomics 1, 376–386 (2002).
13. Sultan, M. et al. Science 321, 956–960 (2008).
14. Ponten, F. et al. Mol. Syst. Biol. 5, 337 (2009).
15. Anonymous. Nat. Methods 7, 661 (2010).
16. Anonymous. Mol. Cell Proteomics 9, 427–429 (2010).
commentary
Anchoring gene patent eligibility to its constitutional mooring
The antiquated legal standard that natural laws and products are not eligible for patent protection is ill-suited for gene and diagnostics patents. Here, I propose a new, technology-agnostic framework for determining patent eligibility that is tailored to meet the US constitutional objective of promoting innovation.
The escalating debate over the validity of gene and diagnostic patents in the United States threatens to strip the incentives necessary to translate scientific findings from the bench to the bedside. Although a strong aversion to patents in new technological areas is nothing new, what sets the current debate apart is the extent of the misinformation, the personal nature of the subject matter, the support of powerful organizations and the potential damage an ill-informed decision would have on the future of medicine and healthcare reform. The latest ruling in Association for Molecular Pathology v. US Patent & Trademark Office1 (the Myriad Genetics case), which invalidated patents covering the isolated breast cancer genes (BRCA1 and BRCA2) and methods for their use in assessing the risk of breast and ovarian cancer, provides an example of a decision based on faulty scientific reasoning, murky legal precedent and unsupported policy arguments. Any judge-made doctrine for assessing patent eligibility should be based on sound legal principles that are closely aligned with the constitutional purpose to “promote the Progress of Science and the Useful Arts”2. The challenge lies in finding the proper balance between encouraging innovation by awarding limited exclusivity to inventors and stymieing basic research through overprotection. Refocusing the legal reasoning for the natural products doctrine on this constitutional language offers a fresh perspective on the debate and opens the door to a new proposal for determining patent
Kenneth G. Chahine is professor of law and member of the BioLaw Project, S.J. Quinney College of Law at University of Utah, Salt Lake City, Utah, USA. e-mail:
[email protected]
eligibility, which is closely aligned to the goal of promoting innovation. In this article, I provide five criteria that would be useful for courts, legislatures and businesses when confronted with a question of subject matter eligibility. Taken together, these criteria incorporate many of the principles previously announced by the courts. They also allow courts to build a solid evidentiary foundation from which to decide the scope of an invention and its eligibility for patent protection. Importantly, the criteria may provide greater predictability to the legal, investment and research community by creating a legal framework for patent eligibility with one unequivocal and constitutionally consistent objective—promoting innovation.
The natural products doctrine
US courts have long held that “laws of nature, natural phenomena, and abstract ideas” are not eligible for patent protection3. Under this ‘natural laws’ or ‘natural products’ doctrine, the eligibility of two classes of patents is being questioned: first, patents covering isolated gene sequences; and second, patents covering methods of comparing genetic sequences, proteins and metabolites to diagnose disease or individualize treatment4,5. It has become increasingly clear, however, that advances in medicine and other information-based technologies are calling into question the applicability of the natural products doctrine. During the oral arguments in Bilski v. Kappos, US Supreme Court Justice Breyer voiced his own uncertainty about where to draw the line when he asked counsel, “All right, so what do I do?”6. Although the Bilski decision was hailed as a victory by the biotech community, it was based on the court’s
reluctance to make a sweeping decision that might cause collateral damage to other industries7. In rejecting the US Federal Circuit’s rigid test for determining patent eligibility, the Supreme Court once again affirmed the need for other criteria to establish patent eligibility consistent with the ‘purpose’ of our patent system8. In examining the legal precedent for the natural products doctrine, I found little consistency in the manner in which the doctrine is applied. Because at a basic level all innovations comprise products of nature, even proponents of the doctrine have difficulty
Judge Robert Sweet of the US District Court for the Southern District of New York, who in March invalidated Myriad Genetics’ BRCA1 and BRCA2 patents.
defining when an invention crosses the line and becomes ineligible for patent protection. In distinguishing the patent ineligibility of isolated genes from the patent eligibility of purified adrenaline, the Department of Justice argues that “patent eligibility may arise when a natural compound has been so refined and purified through human intervention as to become a substance different in kind from the natural product”9. The Department, however, offers no suggestion for determining when a product has been simply purified and remains patent ineligible (e.g., isolated genes) and when it has been purified to the extent that it has changed ‘in kind’ and is patent eligible (e.g., adrenaline). Early decisions boldly declared without much support that natural products are “free to all men and reserved exclusively to none”10. More recently, the courts have relied on the justification that §101 of the US Patent Act excludes products of nature. As the Supreme Court has admitted, however, “[t]he plain language of §101 does not answer the question”11, and Chief Judge Rader of the Federal Circuit has called it a blunt tool for defining patent eligibility (http://www.patentlyo.com/patent/2010/08/gene-patents-on-appeal-aclusrecusal-motion.html). Indeed, §101 broadly embraces “any new and useful” invention or discovery, causing courts to strain the plain language of the statute to justify their exclusion of laws and products of nature12. Notably, both the US Constitution and the Patent Act explicitly embrace not only inventions but also discoveries. The plain language of the statute and the US Constitution therefore discredits the often-used argument that natural products are ineligible because they have been discovered rather than invented, and casts doubt on the Department of Justice’s claim that patent law embraces only “human-made inventions”13. Thus, the natural products doctrine seems to be based on a legal house of cards.
Decisions in the US courts are best rationalized based on how judges perceive the invention rather than on any available legal theory. The discourse between Dan Ravicher, representing the Association for Molecular Pathology, and Chief Judge Rader at a conference at the Fordham Law School in April succinctly captures nearly 150 years of legal precedent on the issue. Ravicher, pointing to a bottle of water, asked “was that [purification] sufficient intervention between what God gave us...and what man created to merit a patent?” Chief Judge Rader responded, “how many people have died of water pollution over the course of human events? Probably billions.” Ravicher is narrowly focusing on the structural differences between water and purified water without regard to its new utility (that
is, potability); Judge Rader is less concerned with the magnitude of the structural differences and instead focuses on the impact of its new utility (that is, saving lives)14. Similarly, looking at the structural differences between a gene in one’s body and a patented isolated gene has led many to conclude that gene patents are not deserving of protection. This is the logic Judge Sweet employed in his reasoning in the Myriad Genetics (Salt Lake City, Utah, USA) case, which emphasized that isolated and complementary DNA is not “markedly different” from native DNA15. On the other hand, those focused on the novel medical uses of an isolated gene to diagnose disease come to a different conclusion. This is consistent with the Federal Circuit’s reasoning in Prometheus Laboratories, Inc. v. Mayo Collaborative Services, in which the court stated that “methods of treatment … are always” eligible for patent protection16. Regardless of the ideological view held, it seems clear that we need a more objective tool for making decisions on the eligibility of inventions. That tool should be closely aligned with an objective we can all embrace—encouraging scientific innovation and bringing new technology to market that improves living conditions around the world now and for future generations.
Do we even need a natural products doctrine?
The distinction between patent eligibility and patentability is often blurred, yet it remains an important one. An invention is patentable if it is useful, novel and not obvious to one in the field, provided that other sections of the Patent Act are met (e.g., it is adequately described and enabled). An invention that is declared ineligible cannot be patented, even if it would have met all of the patentability requirements of the Patent Act. As discussed in greater detail below, this distinction is critical to building a sound legal argument on which to rest a decision on the eligibility of gene and diagnostic patents.
The natural products doctrine for patent ineligibility is a judicially created doctrine without any direct support in the Patent Act or US Constitution. The doctrine was created to prevent patents being granted on fundamental discoveries that might stymie innovation. It is not entirely clear, however, whether this doctrine is a necessary legal tool to rein in patents whose breadth threatens innovation. The examples used by the courts to justify the natural products doctrine are not convincing, especially when one considers the less controversial patentability requirements of the Patent Act that can adequately address similar concerns. Take, for example, the Supreme
Court’s often-quoted statement that “Einstein could not patent his celebrated law that E = mc2; nor could Newton have patented the law of gravity”17. Rather than arguing that gravity is not eligible for patent protection because it is a “law of nature,” one can persuasively argue instead that gravity was “known or used by others…before the invention thereof by the applicant for patent” and, therefore, is not novel under section 102 of the Patent Act18. Put differently, Newton could not have patented gravity because it would have restricted public use of something the public was enjoying before his discovery—an explanation for how something already in the public domain works has never been per se patentable (for a review of the law of anticipation, see ref. 19). Discovering the mechanism of action of aspirin is not patentable if the public has, albeit unknowingly, been enjoying those benefits. Similarly, Samuel Morse’s attempt to broadly patent all forms of communication that used “electromagnetism” has been used to support the idea that laws of nature are not patent eligible. The court, however, also engaged in a lengthy discussion of why the claim was invalid: it did not teach all methods of communicating by means of electromagnetism, only that of the telegraph20. The court’s reasoning is now codified as §112 of the Patent Act and forms a sound basis for invalidity in instances when the patentee attempts to capture more intellectual estate than is taught in the patent. Even in the Myriad case, Christopher Holman and Robert Cook-Deegan have pointed to vulnerabilities in the patent claims and suggest that “other doctrines of patentability can limit gene-based patent claims to an appropriate scope, rather than using the patent eligibility doctrine that would unnecessarily invalidate all DNA-based patents indiscriminately and regardless of their merit”21.
By using other legal grounds, such as novelty and enablement, the court would obviate the need to clearly and predictably define what constitutes laws of nature, natural phenomena and abstract ideas—a task it has struggled with unsuccessfully for over a century. It would also allow courts to make a decision on a case-by-case basis rather than invalidate an entire class of patents and risk hobbling the diagnostic industry by inadvertently eliminating the incentive to invent and invest.

A new proposal for assessing patent eligibility

Judicial reliance on §101 of the Patent Act to justify patent ineligibility is misplaced and serves only to confound the issue. Section 8 of Article I of the Constitution gives Congress plenary power to define what is and is not
volume 28 number 12 december 2010 nature biotechnology
patentable subject matter under one fundamental condition—it must “promote the Progress of Science and the Useful Arts”22. Therefore, a sounder legal justification is that an invention is not eligible for patent protection if it attempts to patent subject matter so fundamental that granting even a limited monopoly would impede scientific progress23. If so, courts could then logically hold that the patenting of fundamental products of nature is unconstitutional for failing to “promote the Progress of Science and useful Arts” even if the Patent Act does not specifically proscribe this class of invention or discovery2. Focusing on the constitutional underpinning for patent eligibility is not merely an exercise in legal gymnastics. Pushing past the arguments stemming from the legal precedent focused on §101, toward agreement on why certain inventions should not be eligible for patent protection, may pave the way for a balanced and reasoned approach to determine when an invention or discovery is or is not promoting innovation. The current doctrine could then be viewed as one of several factors in considering patent eligibility. Just as a physician would never rely on one abnormal cholesterol score to determine whether to conduct open-heart surgery, courts should not rely solely on this doctrine to invalidate countless biotech patents. The court should incorporate into the analysis additional factors that can better define the breadth of an invention and its threat to our constitutional mandate to promote innovation. Using a balanced, multipronged approach to defining the scope of intellectual property rights is not new. In copyright law, a four-pronged approach is used to determine whether a reproduction is made for the purposes of “criticism, comment, news reporting, teaching … scholarship, or research,” and thus exempt from infringement under the fair use doctrine.
The statute specifically states: “In determining whether the use made of a work… is a fair use the factors to be considered shall include—(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work.” What is appealing about these factors is how they balance the copyright holders’ economic interests with First Amendment rights to free speech and basic research. Over time, the courts have applied these factors and brought clarity and predictability to the field. I propose that the US Congress and courts should take a similar approach to determining
patent eligibility using criteria that refocus the determination on factual considerations that bear directly on the objective of promoting innovation. In developing this framework, I have reviewed three areas: first, Supreme Court precedent, which clarified the factors and rationales that guided the Court when making patent eligibility determinations; second, factual circumstances, case studies and market forces relevant to determining the risks of overprotection, including the specific contribution of the patent to the overall concerns; and finally, the roles of existing laws that may limit abuse by patent holders. This review allowed the formulation of five criteria that I believe would be useful for courts, legislatures and businesses when confronted with a question of subject matter patentability.

1. Does the patented invention cover a product or law of nature unaltered by humans?
2. Do other limitations in the claims and prosecution history confer meaningful limitations on the scope of the patented claims?
3. Are there reasonable approaches to design around or circumvent the patented invention?
4. Is there evidence that basic research is being hindered, and if so, is the patent solely or substantially responsible?
5. Are there other laws in place that would mitigate the risk of overprotection?

Taken together, these criteria incorporate many of the principles previously adhered to by the courts. They also allow courts to build a solid evidentiary foundation from which to decide the scope of an invention and its eligibility for patent protection. Importantly, the criteria may provide greater predictability by creating a legal framework for patent eligibility with one unequivocal and constitutionally consistent objective—promoting innovation. I discuss each of the criteria in more detail below.

Does the patented invention cover a product or law of nature unaltered by humans?
This first criterion largely preserves the US courts’ previously announced laws and products of nature doctrine. The US Supreme Court has stated that “anything under the sun that is made by man” is patentable23. Therefore, as the US Department of Justice and others agree, all human-made inventions are per se patent eligible. In cases where there is a question whether the invention is human-made, as with isolated DNA, or where the discovery employs a correlation or algorithm, as with the diagnostic claims,
additional factors should be considered when determining patent eligibility of the invention as a whole. For more information on the natural products doctrine, readers are referred to relevant reviews in the literature17.

Do other limitations in the claims and prosecution history confer meaningful limitations on the scope of the patented claims?

This second criterion considers the impact of claim limitations on the scope of the exclusivity sought and the risk of overprotection. After all, an invention is nothing more than the use of natural laws and products in an application not found in nature. A mixture of naturally occurring elements that makes a new alloy with unique properties is patent eligible. Likewise, inventions do not become ineligible for patent protection because they employ a natural law, even if the law is central to their utility. A novel airplane wing design that improves fuel efficiency is patent eligible, even if central to its operation is the Bernoulli principle that creates lift and permits flight. In both instances, others are free to use the individual elements and the Bernoulli principle in applications outside those claimed. What are not eligible, therefore, are the individual elements and the Bernoulli principle per se, because exclusivity to those products and laws without limits would grant an unreasonably broad monopoly. Indeed, the Supreme Court has recognized that “an application of a law of nature…may well be deserving of patent protection”24, and at times has even used that reasoning to uphold patent eligibility. In Diamond v. Diehr, the Supreme Court held that although the Arrhenius equation alone was not patent eligible, the scope of a claim for curing rubber that used the equation was eligible because it did not foreclose the use of the equation in other processes25.
Likewise, in gene and diagnostic patents the court should look at limitations, including the following: the gene covered, the mutations recited, the diseases it claims to diagnose and the tissue source. The Myriad Genetics patents do not seek to exclude all genetic correlations, of all genes, to diagnose all diseases—they claim two genes for diagnosing breast and ovarian cancer. As such, these limitations should be considered in a patentability determination, especially if the justification for the doctrine is to rein in monopolies that unreasonably preempt future progress.

Are there reasonable approaches to design around or circumvent the patented invention?

This third criterion considers the ability to design around the invention. As Justice
Breyer notes in his dissent from the dismissal of certiorari in Laboratory Corp. of America Holdings v. Metabolite Labs., Inc., “[p]atent law [also]…seeks to avoid the diminished incentive to invent that underprotection can threaten”26. If patents do not place meaningful barriers to ward off imitators, the entire constitutional purpose of promoting the progress of science would be subverted. It is important, therefore, for the court to consider alternatives that exist, or could be developed, that would reasonably circumvent the patent, avoid underprotection and promote the progress of science. This factor is particularly useful when applied to other inventions the courts have previously held ineligible. It is not possible to design around a truly fundamental product or principle, such as a new element, Einstein’s law of relativity or Newton’s law of universal gravitation27. History instructs us that this inquiry should not be narrowly limited to the precise technology claimed. In the music industry, for example, the biggest threats have come not from improving vinyl records but from the evolution of disruptive technologies, such as the cassette player, CDs, DVDs and digital recordings, which were developed over a 20-year span. In gene and diagnostic patents, the inquiry is too narrowly focused on circumventing the precise invention. In a genetic test where, for example, mutations or polymorphisms are a proxy for deficient protein activity, it is entirely possible that another assay could be developed—a protein, metabolite or functional assay—that would better predict a predisposition to the disease. In the Myriad case, a functional assay for BRCA1 and BRCA2 proteins would obviate the need to examine the gene sequences and circumvent the patents. Evidence of a potential design-around, therefore, would weigh heavily against finding the claim overly broad and unconstitutional.
Is there evidence that basic research is being hindered, and if so, is the patent solely or substantially responsible?

This fourth criterion considers whether the patent in question is solely or substantially responsible for the alleged industry concerns. As a recent Nature Biotechnology article by Carbone et al.28 suggests, “the potential obstacles to innovation that patents cause in diagnostics may not be as high…as some had feared.” The authors argue that changes in licensing practices would solve many of the problems that patent ineligibility is meant to address. This approach is supported by a case study by Fore et al.29 on polymerase chain reaction (PCR) technology. At the time PCR was patented, a strong argument could have been made that it was a product of nature and
patenting its use would foreclose basic research and stymie innovation—after all, the enzyme in a test tube merely does what it naturally evolved to do. However, the authors found that despite “heavy patent protection” PCR was “disseminate[d] broadly in the molecular biology world, becoming an indispensable research tool employed in nearly every biological field”29. This was due to deliberate licensing and business practices on the part of the patent holders, including ‘rational forbearance’, whereby companies refrain from suing researchers for patent infringement30. The Carbone et al.28 thesis, that licensing practices will resolve most of the issues, should be tempered for several reasons. First, licensing practices may increase the number of labs that offer a diagnostic test, but they do not guarantee lower pricing, reimbursement and broader accessibility. As another paper31 by several of the authors of Carbone et al. reported, the per unit price of the exclusively licensed BRCA1/BRCA2 diagnostic is lower than that of a non-exclusively licensed colorectal cancer test. There are, therefore, other market forces outside patents that conspire to sustain higher prices and limit accessibility. Second, an issue not often discussed is the impact of regulatory hurdles. Carbone et al.28 suggest that, unlike in the diagnostic industry, exclusive licenses to therapeutic proteins, such as erythropoietin, growth hormone and interferon, are “commercially significant” and justified. Although true, the reason exclusive licensing of therapeutics is commercially significant (and why erythropoietin at the time was likely not challenged as a natural product) is the requirement for US Food and Drug Administration (FDA; Silver Spring, MD) approval. The lack of FDA oversight over many diagnostics not only lowers the barriers to entry but also substantially reduces development costs.
If the FDA began regulating all diagnostics as it currently does therapeutics, this debate would likely become moot, as most of the laboratories and associations opposing gene and diagnostic patents would be unable (or unwilling) to develop FDA-approved diagnostic tests, even in the absence of patents and exclusive license arrangements. Most hospital and university-based labs draw a bright line between developing tests that can be offered under the Clinical Laboratory Improvement Amendments (CLIA) and those requiring FDA 510(k) clearance or premarket approval (PMA). Given the FDA’s increased scrutiny of these so-called laboratory-developed tests, we should not ignore that a change in the regulatory winds may fundamentally alter market forces and take the wind out of the sails of the most vocal opponents of gene and diagnostic patents.
And finally, most analyses start with the assumption that there is a robust reimbursement policy in place for diagnostics. This gives the false impression of lower barriers to entry, especially for academic and hospital labs. A role of the exclusive licensee in a disruptive technology is to create a market and pioneer reimbursement. Establishing a commercially viable market for the product is a costly undertaking from which all subsequent providers benefit at a fraction of the investment. It is worth considering what the availability of the BRCA1/BRCA2 diagnostic would have been in the absence of Myriad Genetics having pioneered the market and established a reimbursement policy. To ensure that patents are not unjustly blamed for the ills of the diagnostic industry, this criterion considers other factors that may affect dissemination and access. Overreacting to the public outcry concerning gene patents without considering additional factors may fundamentally change patent law without resolving the present concerns of the industry. Moreover, even assuming patents currently do not play a significant role in the diagnostic industry, market and regulatory forces may shift in the future, necessitating once again strong incentives to invent and invest to bring diagnostics to market.

Are there other laws in place that would mitigate the risk of overprotection?

This final criterion takes into consideration other mechanisms that substantially mitigate the risk of overprotection and abuse by patent holders. For example, under the Bayh-Dole Act the government retains a “nontransferable, irrevocable, paid-up license…to practice the invention…throughout the world”32. The Act further specifies that the government may, under certain conditions, exercise “march-in” rights to make the invention available to the public. In federally funded research, therefore, the courts should consider the government’s rights to rein in any potential abuse by a patent holder.
Other laws could be revisited or created to address other concerns. There is, for example, a reasonable public policy argument that an individual should have a right to obtain a second opinion before undergoing a substantial medical procedure. If so, the courts could examine an extension of the doctrine of patent exhaustion. Under the doctrine (also referred to as the first sale doctrine), the first unrestricted sale of a patented item or method exhausts the patentee’s control over that particular item or method. This compromise would uphold the patentee’s rights and economic incentives by retaining exclusivity over the first sale, yet allow second opinions to be rendered without the threat of infringement. Similarly, concerns
that a physician’s fear of infringement may chill the practice of medicine could be addressed by the US Congress, as it has done by excluding medical practitioners from liability for infringing a patent through the performance of a medical or surgical procedure on a body33.

Conclusions

In the ongoing debate, both sides are talking past each other, in part because there is a lack of well-articulated criteria for what constitutes a law of nature, a natural phenomenon or an abstract idea, as well as the nexus between the doctrine and its putative objective of promoting innovation. Focusing the analysis on the constitutional objective that inventions and discoveries must promote the progress of science provides the clearest framework for determining which inventions should be eligible for patent protection. The proposal made here for a balanced, multipronged approach that emphasizes a factual determination bearing directly on this constitutional mandate is intended as the first step on the road to a workable compromise and a predictable legal framework. By mirroring the approach to determining copyright fair use, this proposal allows courts to thoughtfully assess patent eligibility on a case-by-case basis and to build, over time, a predictable legal framework for different classes of inventions and discoveries. Without clear evidence that DNA patents are the sole or significant factor in disrupting basic research and dissemination, US courts
should be loath to invalidate an entire class of patents. One misstep may have unforeseeable negative consequences for the future of personalized medicine and cripple the biotech industry. Moreover, a flexible approach to patent eligibility is more likely to withstand Supreme Court scrutiny; the Court has repeatedly emphasized its dislike for the rigid tests previously proposed by the Federal Circuit. Although there will no doubt be a healthy debate as to where to draw the line between overprotection and underprotection, both the scientific and commercial communities should recognize that neither can advance if a single party owns exclusive and absolute rights to an invention involving a truly fundamental law or product of nature.

ACKNOWLEDGMENTS
The author is grateful to J. Mixco for his contribution and assistance. The views expressed are solely those of the author.

COMPETING FINANCIAL INTERESTS
The author declares no competing financial interests.

1. No. 09 Civ. 4515 (RWS), 2010 U.S. Dist. LEXIS 35418, at *2 (S.D.N.Y. Mar. 29, 2010, revised Apr. 2, 2010).
2. U.S. Const. art. I, § 8, cl. 8.
3. Diamond v. Diehr, 450 U.S. 175 (1981).
4. See Association for Molecular Pathology, supra note 2.
5. See Prometheus Laboratories, Inc. v. Mayo Collaborative Services, 581 F.3d 1336 (Fed. Cir. 2009).
6. Transcript of Oral Argument at 20, Bilski v. Kappos, 130 S. Ct. 3218 (2010) (No. 08–964).
7. Bilski v. Kappos, 130 S. Ct. 3218 (2010).
8. Ibid.
9. Brief for the United States as Amicus Curiae in AMP v. USPTO (No. 2010–1406) at 20 (emphasis in original).
10. Funk Brothers Seed Co. v. Kalo Inoculant Co., 333 U.S. 127 (1948).
11. Parker v. Flook, 437 U.S. 584 (1978).
12. “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.” 35 U.S.C. §101.
13. Supra note 15 at 12.
14. Motion by Plaintiffs-Appellees for Recusal of Chief Judge Randall R. Rader, AMP v. USPTO, No. 09-CV-4515 (Fed. Cir. June 29, 2010).
15. See Association for Molecular Pathology, supra note 2 at 114.
16. Prometheus Labs., Inc. v. Mayo Collaborative Services, 581 F.3d 1336, 1346 (Fed. Cir. 2009).
17. Diamond v. Chakrabarty, 447 U.S. 303 (1980).
18. 35 U.S.C. §102(a) (2010).
19. Burk, D.L. & Lemley, M.A. Inherency. 47 Wm. & Mary L. Rev. 371 (2005).
20. O’Reilly v. Morse, 56 U.S. 62, 117–18 (1853).
21. Brief of Amici Curiae Christopher M. Holman and Robert Cook-Deegan in Support of Neither Party in AMP v. USPTO (No. 2010-1406) at 24.
22. “...the reason for exclusion [of laws and products of nature] is that sometimes too much patent protection can impede rather than ‘promote the Progress of Science and useful Arts,’ the constitutional objective of patent...protection.” Laboratory Corp. of America Holdings v. Metabolite Labs., Inc., cert. dismissed as improvidently granted, 548 U.S. 124, 126–27 (2006) (Breyer, J., dissenting).
23. Chakrabarty, supra note 17.
24. Diamond v. Diehr, supra note 3.
25. Ibid.
26. Laboratory Corp., supra note 22.
27. Chakrabarty, supra note 17 at 309.
28. Carbone, J. et al. Nat. Biotechnol. 28, 784–791 (2010).
29. Fore, J. Jr. et al. J. Biomed. Discov. Collab. 1, 7 (2006).
30. Ibid.
31. Cook-Deegan, R. et al. Genet. Med. 12 (Suppl.), S15–S38 (2010).
32. 35 U.S.C. §§200–212 (2010).
33. 35 U.S.C. §287(c) (2010).
COMMENTARY
The environmental impact subterfuge Gregory Conko & Henry I Miller
The latest weapon used in the misinformation war against recombinant DNA technology and its agricultural applications is an obscure environmental law from the seventies. Green activists and organic farmers are exploiting the National Environmental Policy Act of 1969 (NEPA) to convince courts that inconsequential paperwork oversights by regulators at the US Department of Agriculture warrant the revocation of two final approvals for recombinant DNA–modified crop varieties and of the issuance of permits to test several others. At least one more case is pending. Under NEPA, all US federal government agencies are required to consider the effects that any “major actions” they take may have on the “human environment.” Agencies can exempt whole categories of routine or repetitive activities, but most other decisions—such as the issuance of a new regulation, the location of a new bridge or the approval of a new agricultural technology—trigger the NEPA obligation to evaluate environmental impacts. If the agency concludes that the action will have “no significant impact” (a legal term of art), it issues a relatively brief Environmental Assessment explaining the basis for that decision. If significant effects are likely, though, the agency must prepare a comprehensive Environmental Impact Statement (EIS), which typically requires thousands of hours of work, details every imaginable effect and runs to hundreds (or even thousands) of pages. The obligation under NEPA is wholly procedural, which means that even significant environmental effects do not prohibit the agency

Gregory Conko is a senior fellow at the Competitive Enterprise Institute in Washington, DC, and Henry I. Miller, a physician and fellow at Stanford University’s Hoover Institution, was the founding director of the FDA’s Office of Biotechnology. e-mail:
[email protected]
Action needs to be taken to prevent anti-biotech activists from co-opting environmental law to derail the planting of transgenic crops that have already received regulatory approval.
In August, a federal judge revoked the USDA’s approval of Roundup Ready sugar beets, which represent 95% of the crop now grown in the United States.
from ultimately taking the proposed action. Its purpose is solely to force government agencies to in fact consider possible environmental effects. However, courts have interpreted the law broadly by requiring a comprehensive review of every imaginable effect on the “human environment.” This category now encompasses not only harm to the natural ecology but also economic, social and even aesthetic impacts. Thus, if agencies miss some tangential or speculative issue, they can be tripped up by an irresponsible litigant who alleges that the environmental review was incomplete. Even when regulators actually do consider a potential impact but reject the concern owing to its unimportance or improbability, they can run afoul of NEPA by failing to extensively and comprehensively document their reasoning. This latter phenomenon has lately plagued USDA approvals of recombinant DNA–modified crops.

NEPA and biotech

The USDA has approved 74 different recombinant DNA–modified crop varieties, or
transformation events, for commercial-scale cultivation, including varieties of corn, canola, squash, soybean, potato, tomato and other species. In each case, the department’s Animal and Plant Health Inspection Service (APHIS) reviewed copious amounts of data from several years’ worth of controlled field experiments to evaluate the variety’s agronomic and environmental effects. And because each one of these plants is highly similar to conventionally bred varieties already grown throughout the United States—differing only in the addition of a small number of well-characterized genes that introduce useful traits—APHIS concluded that they would have no significant environmental impact. But some anti-biotech activists are unwilling to permit facts or the judgments of experts to get in the way of their antagonism, so a group of environmental organizations led by the anti-biotech Center for Food Safety joined with a handful of organic farmers to sue in federal court to halt the planting of alfalfa, sugar beets and turf grass modified for resistance to the
herbicide glyphosate (better known by its trade name Roundup (Monsanto Company; St. Louis, MO)). The plaintiffs made no substantive arguments that these crops were unsafe, but instead claimed that APHIS’s Environmental Assessment was procedurally insufficient and that a comprehensive EIS should have been prepared.

What were the supposedly significant environmental impacts that APHIS allegedly failed to account for in its environmental analysis? The plaintiffs made two claims: first, that cross-pollination by recombinant DNA–modified crops would ‘contaminate’ non-genetically engineered alfalfa and produce negative economic and socioeconomic effects, such as the loss of certification for organic crops and an inability to produce non-genetically engineered seed; and second, that the widespread adoption of Roundup Ready crops and increased use of glyphosate could accelerate the development of glyphosate-resistant weeds and force farmers to switch to other, less convenient herbicides. Both claims are dubious. Cross-pollination by recombinant DNA–modified plants does not, in fact, jeopardize the organic certification of those crops because the USDA’s rules for organic production are based on process, not outcome. As long as organic growers adhere to permissible practices, take reasonable steps to prevent unintentional contact with transgenes and do not intentionally plant recombinant DNA–modified seeds, unintentional cross-pollination does not cause those crops to lose their organic status. Furthermore, organic rules require growers to create distinct buffer zones to prevent unintended contact with prohibited substances, such as pollen from recombinant DNA–modified plants and synthetic pesticides. Seed breeders are also accustomed to using isolation distances as well as physical and biological buffer zones to maintain genetic purity, and the net effect of approving these recombinant plants would be, at worst, minimal.
Similarly, the development of herbicide-resistant weeds is not unique to recombinant DNA–modified varieties but occurs commonly in all crops exposed to herbicides. APHIS is well aware of the phenomenon and considers it routinely when making approval decisions. The agency made clear to the court that good stewardship is the only reasonable defense against the development of herbicide-resistant weeds, but the court criticized APHIS for not evaluating the cumulative extra impact of approving new Roundup Ready varieties. From a NEPA perspective, what makes recombinant DNA–modified herbicide-resistant plants fair game is that they are regulated by a federal government agency and that approval to commercialize them constitutes, in NEPA-speak, a
“major action.” Even though plant scientists are virtually unanimous that recombinant DNA is an extension, or refinement, of earlier techniques and that it is safer and more predictable than many other conventional breeding methods, such as wide-cross hybridization and mutation breeding, only recombinant DNA–modified varieties are subject to the kind of government approval that triggers the EIS obligation. It is ironic that recombinant DNA–modified crop varieties, whose development and environmental impacts are carefully scrutinized by the USDA, are subject to the extra burden of a compelled EIS, whereas varieties developed with cruder conventional methods are subject to neither premarket assessment nor the NEPA rules. Indeed, the EIS requirement for recombinant DNA–modified crop approvals could have been avoided if the USDA and other federal regulatory agencies had heeded the advice of the scientific community some 25 years ago and chosen not to subject the products of recombinant DNA technology to special, discriminatory government regulation. Had the regulators instead followed the scientific consensus, they would not have been tripped up by NEPA’s paperwork requirement because, in the absence of an approval process, there would be no “major actions” to trigger the EIS obligation.

Environmental creep

Since its enactment, the scope of NEPA’s coverage has experienced continual creep. Through a combination of prodding by environmental activists and judicial overreach, the meaning of the term “human environment” has been expanded to include not just tangible ecological harms that affect people and human communities but also impacts that are economic, social, cultural, historic and aesthetic.
A decision by a federal district court in Minnesota illustrates how liberally the statute has been interpreted: “Relevant as well is whether the project will affect the local crime rate, present fire dangers, or otherwise unduly tap police and fire forces in the community…the project’s impact on social services, such as the availability of schools, hospitals, businesses, commuter facilities, and parking…harmonization with proximate land uses, and a blending with the aesthetics of the area…[and a] consideration of the project’s impact on the community’s development policy…” In other words, almost any possible effect that a clever plaintiff can imagine may constitute a significant impact under NEPA if a sympathetic judge agrees. In the first ever NEPA challenge to a product made with recombinant DNA technology,
environmental activist Jeremy Rifkin successfully overturned a 1984 decision by the US National Institutes of Health (NIH) to permit the field testing of recombinant DNA–modified Pseudomonas syringae bacteria, engineered to help protect crop plants from frost damage. Researchers at the University of California, Berkeley, discovered that P. syringae contains an ‘ice nucleation’ protein that helps initiate the growth of ice crystals that damage growing crops. These scientists deleted the gene sequence coding for the ice-nucleation protein in the hope that spraying the resulting ‘ice minus’ variants on plants could inhibit ice crystal formation and thereby reduce frost damage. Rifkin’s Foundation on Economic Trends (Bethesda, MD) sued to stop the tests and argued that the NIH had not sufficiently considered, among other things, the possibility that spraying the modified bacteria might disrupt wind circulation patterns and affect aircraft flying overhead. Not surprisingly, the NIH had dismissed such a ridiculous theory out of hand because P. syringae bacteria with a missing or nonfunctioning ice-nucleation gene were known to arise spontaneously in nature owing to natural mutations, and the permit was for a single small-scale field trial. Astonishingly, however, both a federal district court and a federal appeals court allowed the challenge and overturned the NIH decision. It is little wonder, then, that federal courts would recognize the possibility of agronomic impacts as the basis to challenge the approval of recombinant DNA–modified crop plants.

NEPA’s effect on farmers
Although unsurprising, the resolution of these lawsuits has been a nightmare for US plant breeders and farmers. In one case, the plaintiffs challenged APHIS’s decision to permit the Scotts garden products company (Marysville, Ohio) to conduct field trials of glyphosate-resistant turf grass.
Another case involved APHIS field trial permits to four different seed companies to cultivate small, geographically isolated plots of corn and sugarcane genetically modified to produce proteins that would be used in medical products. The cases of glyphosate-resistant alfalfa and sugar beet were particularly vexing to farmers, however, because the lawsuits were filed after the USDA had approved the varieties for commercial release and farmers had already begun to cultivate the seeds. The alfalfa case involves the 2004 approval of a glyphosate-resistant variety sold under the trade name Roundup Ready by its codevelopers Monsanto and Forage Genetics (Nampa, ID). To secure approval, the firms conducted almost 300 government-monitored field trials over a period of eight years. APHIS scientists evaluated the data generated
from these trials to determine the likely environmental impacts of an approval—called ‘deregulation’ in agency parlance—and of widespread use of Roundup Ready alfalfa by US farmers. Because the same genetic trait had already been incorporated into dozens of approved varieties of corn, soybeans, canola and other crop plants grown in the United States since 1996, APHIS was quite familiar with the glyphosate-resistance trait’s likely effects—not just from the field tests needed to secure approval, but also from nearly a decade of real-world experience. Regulators naturally concluded that deregulating the crop would not have any significant environmental impact because there is no evidence that the glyphosate-resistance gene harms humans or other animals and it was already in common use in US agriculture. In turn, the USDA prepared a shorter Environmental Assessment explaining its “finding of no significant impact” and deregulated Roundup Ready alfalfa. Roughly 5,500 farmers across the United States had planted more than a quarter million acres of Roundup Ready alfalfa by 2007, when a federal district judge in San Francisco ruled that the USDA’s Environmental Assessment was legally insufficient. The court issued an injunction revoking the approval and prohibiting new seeds from being sold until the USDA completed a full EIS to evaluate concerns about glyphosate-resistant weeds and the possible impacts on organic farmers. Biotech advocates hailed the US Supreme Court’s June decision reversing the blanket injunction on sales of new Roundup Ready alfalfa seed, but the decision was a purely procedural one that reinforces the USDA’s obligation to complete an EIS before permitting more seed sales. It took two years, but by November of 2009 APHIS had published a draft EIS and solicited public comments.
Unsurprisingly, the EIS concluded what agency scientists and the scientific community already knew: the potentially negative impacts of approving Roundup Ready alfalfa are minimal and manageable, and they are in any event far outweighed by the crop’s many substantial benefits. The USDA must now publish a final version of the EIS and reapprove the crop; however, although the substantive work is complete, the administrative process of moving to final reapproval could take many more months and probably will not be finalized in time for new Roundup Ready alfalfa seed to be planted in 2011. Fortunately, the farmers who planted Roundup Ready alfalfa when it was first approved were permitted to continue growing and then harvest that initial crop, so the overall impact will be limited. A far worse fate is in store for America’s sugar beet growers.
In August, another federal district judge revoked the USDA’s approval of Roundup Ready sugar beets. That decision will sow monumental confusion because an estimated 95% of the sugar beets currently grown in the United States are of the Roundup Ready variety. As was the case with Roundup Ready alfalfa, the court’s injunction will permit the already planted crops to be harvested and processed, but before the USDA may reapprove the variety, a full EIS will be necessary. The process will take years, leaving farmers in regulatory limbo in the meantime. Perhaps the biggest short-term problem arising from the decision, however, is a looming shortage of conventional sugar beet seeds for next spring’s planting. Because the Roundup Ready variety so quickly became popular with American beet growers, seed companies cut back on their production of non-genetically engineered seed. According to Duane Grant, chairman of Snake River Sugar (Boise, ID, USA), which produces about 20% of the nation’s beet sugar, many growers fear they will have nothing to plant in 2011. Consumers will surely feel the pinch of sharply rising prices, and farmers and others who care deeply about protecting the environment will be denied the proven beneficial effects of these crops until they are restored to the marketplace.

Time for NEPA reform
Remarkably, because the NEPA obligation is purely procedural, US courts are not permitted to consider the fact that recombinant DNA–modified herbicide-resistant crop varieties have offsetting benefits to farmers, consumers and the natural environment. The mere fact that the USDA did not properly document its evaluation of potential negative effects is sufficient grounds for revoking the approval. The only function of NEPA is to ensure that agencies do in fact consider whether their actions may harm the environment. This is a laudable goal, but it is manifestly not the intention of most NEPA litigation.
Instead, the statute has been hijacked by environmental activists to slow down or prevent government agencies from taking actions the activists do not like. Because the law requires agencies to consider almost any conceivable impact, the statute offers fertile ground for bad-faith, obstructionist litigation no matter how meticulous the agency is in preparing an Environmental Assessment or EIS. NEPA is therefore a recipe for stagnation, a particular problem for ‘gatekeeper’ regulatory agencies that must grant approvals before a product can be tested or commercialized. Something must be done to change the system. But what? Short of substantive reform of the underlying statute by Congress—the preferable and definitive solution—agencies
themselves can take some minor steps to mitigate the Act’s worst effects. Under the NEPA statute, every agency may establish a set of ‘categorical exclusions’ that exempt whole classes or types of activities from the EIS obligation. These may include routine or repetitive actions that, on the basis of past experience, do not involve significant impacts on natural, cultural, recreational, historic or other resources; and also those that do not otherwise, either individually or cumulatively, have any significant environmental impacts. Because they fall into those categories, APHIS has already categorically excluded most small-scale field trials of recombinant DNA–modified plants. The exclusion stipulates that all large-scale field tests, as well as any field release of recombinant DNA–modified organisms involving unusual species or novel modifications, still generally require an Environmental Assessment or EIS. But the list of excluded or included activities can be modified through notice-and-comment rulemaking, in which the agency sets forth the complete analysis and rationale for excluding the activity. At the very least, APHIS should consider categorically excluding some of the classes of recombinant DNA–modified crops with which it now has more than two decades’ worth of precommercial and commercial experience, including herbicide-tolerant varieties of common crop species. The most rational and definitive approach, however, would be to eliminate the agency action that triggers the NEPA obligation initially—namely, case-by-case reviews of virtually all field trials and the commercialization of recombinant DNA–modified plant varieties. That would offer the dual advantages of relieving the USDA’s NEPA difficulties and also making regulators’ approach to recombinant DNA technology more scientifically defensible and risk based. 
As the scientific community has explained for over two decades, the decision to subject recombinant and conventional organisms to different regulatory standards cannot be justified scientifically. The increasing prevalence of obstructionist litigation now shows that doing so is also wholly impractical. Recent NEPA lawsuits have prevented the marketing of products that offer palpable, demonstrated benefits to farmers, consumers and the environment. Nuisance lawsuits intended to slow the advance of socially responsible technologies are abusive, irresponsible and antisocial. And so are those who file them. It is long past time for NEPA’s burdensome paperwork requirements to be lifted from such an important and beneficial technology.

COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.
volume 28 number 12 december 2010 nature biotechnology
FEATURE
To selectivity and beyond
George S. Mack
© 2010 Nature America, Inc. All rights reserved.
First-generation epigenetic drugs have proven clinically useful in several hematological cancers. But newer enzyme inhibitors in the pipeline aim to be more selective and promise to broaden the portfolio of therapeutic uses.
Fifty years after covalent chromatin modifications were first discovered, four drugs that modulate the cellular epigenetic machinery are now registered by the US Food and Drug Administration (FDA) for use in prolonging the lives of people afflicted with blood and lymphoid cancers. Although clinical responses to these first-generation products have been reasonable—given that they act on a pan-genomic scale—drug developers are now aiming to produce a second generation of agents that target newly recognized enzymes involved in DNA and chromatin modification. Because these enzymes alter expression of a narrower, more specific set of modifications, it is hoped the new experimental agents will not only have an improved therapeutic index but also open up epigenetic therapy to indications outside of cancer.

George S. Mack is a freelance writer based in Columbia, South Carolina.

Unmarking the genome. An artist’s rendition of HDAC inhibitors removing marks from chromatin. (Source: Marc Phares, Photo Researchers)

Early validation
Among the current approved chromatin modulating agents are drugs that were first investigated in the 1970s and early 1980s (Table 1). In a few cases, their clinical value in treating malignancies only became apparent decades later. In the intervening years, advances in our understanding of the mechanisms of chromatin remodeling and DNA modification have lent support to the notion that in certain diseases, the dysregulation of gene expression arising from alterations in the epigenetic ‘software’ may be corrected by pharmacological intervention. By correcting aberrant cytosine methylation in the DNA sequence, or incorrect acetylation, methylation, phosphorylation, ubiquitination or sumoylation modifications of histone proteins, such agents should, in principle, mitigate abnormal patterns of gene expression associated with disease. In the past 15 years, researchers have begun to identify a battery of enzymes, which promote or reverse modifications to histones, and chemical changes to DNA, which do not alter the nucleotide sequence (Fig. 1). It is now known that many of these marks are heritable from gametes to offspring and from somatic to daughter cells, but at the same time, demonstrate a level of plasticity that could be exploited in therapeutic settings. Epigenetic modifications can also be environmentally (and perhaps behaviorally) induced, and these, too, may be passed down through meiosis and mitosis. Although chromatin interactions affecting transcription are known to occur at even greater chromosomal scales, until now, much of the research on epigenetic machinery has focused on nucleosomes—nuclear structures in which the DNA is wrapped around a core of two copies each of the H2A, H2B, H3 and H4 histone proteins. It is now known that the
Table 1 Approved drugs that target epigenetic mechanisms

Company | Target | Agent | Indication | Development stage
Merck | HDAC | Zolinza (vorinostat; suberoylanilide hydroxamic acid, aka SAHA) | Cutaneous T-cell lymphoma (CTCL) | FDA approved Oct. 2006
Celgene | HDAC | Istodax (romidepsin; formerly FK228, a cyclic peptide principally active against class I HDACs; nanomolar potency) | CTCL | FDA approved Nov. 5, 2009
Celgene | DNMT | Vidaza (5-azacitidine) | MDS | FDA approved May 2004
Eisai (Tokyo); sublicensed to Johnson & Johnson | DNMT | Dacogen (decitabine) | MDS | FDA approved May 2006
Figure 1 The ever-expanding family of enzymes that control the structure of chromatin and gene transcription. Modifications of the histone protein tail and DNA are shown: changes in acetylation (Ac) by acetyltransferases and deacetylases, phosphorylation (PO4) by kinases, ubiquitylation (Ub) by ligases and changes in methylation (Me) by methyltransferases and demethylases. Methylation of CpG on DNA is also shown. The figure tallies 96 histone methyltransferases (HMTs): 44 arginine methyltransferases (RMTs) and 52 lysine methyltransferases (KMTs). (Source: Epizyme)
stability of nucleosome occupancy at specific genomic loci is affected by the presence of particular variants of core histone proteins, such as H3.3 and H2A.Z. Whereas our knowledge of some of the chromatin modifications associated with certain diseases is progressing, little is yet understood about the mechanisms by which such marks influence the pathogenesis of disease. Epigenetic modifications can either upregulate or downregulate genes; for example, it has been assumed, with some clinical vindication, that deacetylated histone conditions bring some tumor suppressor genes to unfavorably diminished states of expression, thereby preventing apoptosis and leading to cell proliferation and advancing disease. In addition, hypermethylation with 5-methylcytosine of DNA within CpG islands is seen in many cancers, and this phenomenon invites repressive complexes, consisting of methyl-CpG-binding-domain proteins and DNA methyltransferases (DNMTs) for maintaining DNA methylation, resulting in a relatively durable silencing of the same gene region1. First-generation DNMT and histone deacetylase (HDAC) inhibitors are largely indiscriminate in their activity. DNMTs apply methylation marks to any cytosine on a DNA strand that is accessible, but the factors governing which methyl groups are removed by DNA demethylases remain unclear. It is also now recognized that cytosine can be marked with hydroxymethyl groups as well, but the significance of this modification is currently only conjecture. HDAC inhibitors are classified into four categories: hydroxamic acid derivatives (including the fungal antibiotic trichostatin A), short-chain fatty acids (e.g., valproic acid), cyclic peptides (e.g., depsipeptides) and phenylene diamines. Pharmacological inhibition of HDAC activity can affect many cellular processes other than histone function. For example, HDAC classes I and II target many
nonhistone proteins, including transcription factors and proteins that regulate cell division, migration and death, all of which are presumably affected by HDAC inhibitors. Class III HDACs make up the so-called sirtuin family, which has also been the subject of intense focus in designing treatments for
age-associated diseases, such as activation of SIRT1 in type 2 diabetes. Histone acetylation takes place on lysine residues that reside on the histone tails, and as far as researchers can tell the histone acetyltransferases that apply these acetyl marks and the HDACs that take them off don’t show any preference for which lysine residue or which histone is affected, or even for histones, for that matter. Some researchers now refer to them as lysine deacetylases rather than HDACs, which better describes them by their function, as opposed to defining them by a target. A greater level of specificity might be achievable by targeting histone methyltransferases (HMTs) and demethylases (HDMs), where a great deal of new research has been focused over the past eight years. Nessa Carey, scientific director at epigenetics startup company CellCentric (Cambridge, UK), notes that, compared with HDACs and DNMTs, some HMTs “are very, very precise.” For example, the histone lysine
Box 1 Noncoding RNAs as targeted epigenetic drugs?
Evidence is growing that natural antisense transcripts and other long, intergenic, noncoding RNAs (lincRNAs) are involved in inducing chromatin modifications (and to a much lesser extent, alteration of sense promoter DNA methylation). The exact mechanisms by which noncoding RNAs mediate these activities are being worked out and may involve promotion of DNA methylation by binding to the promoter region of a corresponding gene sequence (in the sense orientation) in the genome or providing a template onto which histone-modifying enzymes are then recruited. Although mechanisms remain unclear, the association of lincRNAs with disease, particularly cancer, is convincing. The lincRNA H19, one of the first to be discovered, is associated with several tumor types, including breast, bladder and liver. Furthermore, under conditions favoring tumorigenesis, such as hypoxia, knocking H19 down affects the expression of genes involved in angiogenesis, survival and tumorigenesis, suggesting a central role for the RNA in tumorigenesis4. Similarly, recent studies have shown that a specific lincRNA called HOTAIR, a part of the HOX locus, which is dysregulated in breast cancer and foreshadows metastasis, appears to do so by reprogramming cells to a more embryonic state. Overexpression of HOTAIR in epithelial cells causes repositioning of the Polycomb repressive complex 2 and alterations of methylation at H3 lysine 27 (ref. 5). Startup CURNA (Jupiter, FL, USA) is attempting to translate this knowledge into oligonucleotide therapies based on work from the laboratory of Claes Wahlestedt at the Scripps Research Institute (Jupiter, FL, USA).
According to Wahlestedt, of the two possible natural antisense mechanisms—concordant, in which inhibiting an antisense transcript directed against a lincRNA has the effect of repressing gene expression, and discordant, where it has the opposite effect of derepressing or activating genes—CURNA is focused on discordant effects (that is, upregulation of gene expression). These are more attractive from a commercial standpoint because most conventional oligonucleotide therapeutics, monoclonal antibodies or small molecules act by inhibiting the target rather than activating it. Upregulating proteins in vivo has other advantages in that it gets around the problem of having to produce proteins synthetically and of delivering them intracellularly, says Wahlestedt. CURNA has an exclusive license for the technology from Scripps on natural antisense and noncoding RNA as drug targets and has filed additional patents on roughly 100 genes and inhibitor sequences for upregulating pharmaceutically important targets, seven of which they have now tested in vivo. In all cases, according to Wahlestedt, the oligos had the desired effect of upregulating their target gene.
Table 2 Deals centering around epigenetics products or technology

Epigenetics company | Partnering company (location) | Date | Agreement
MethylGene | EnVivo Pharmaceuticals | March 2004 | EnVivo entered into an exclusive agreement to develop an isotypic HDAC inhibitor for neurodegenerative diseases for $1.1 million upfront
Syndax | Bayer Schering Pharma | April 2007 | Syndax acquired worldwide development and commercial rights to entinostat (SNDX-275, formerly MS-275)
Celgene | Pharmion | November 2007 | Celgene acquired rights to Vidaza (DNMT inhibitor) by acquiring Pharmion for $2.9 billion
MGI Pharma | Eisai | January 2008 | Eisai acquired rights to Dacogen by acquiring MGI Pharma for $3.9 billion. MGI had licensed Dacogen from SuperGen in 2004
4SC (Planegg-Martinsried, Germany) | Nycomed (Zurich) | July 2008 | 4SC acquired eight oncology programs from Nycomed for €14 million ($17.8 million in today’s dollars), including the HDAC inhibitors resminostat (4SC-201) and 4SC-202, plus two other HDAC inhibitor programs
Cronos Therapeutics (Oxfordshire, UK) | ValiRx (London) | July 2008 | Cronos became a wholly owned subsidiary of ValiRx
Aphios (Woburn, MA, USA) | VivaCell Biotechnology España (Cordoba, Spain) | March 2009 | Collaborative R&D agreement to develop a combination therapy for treatment of HIV latency, consisting of bryostatins (PKC modulators) and HDAC inhibitors
Celleron Therapeutics (Oxon, UK) | AstraZeneca (London) | May 2009 | Celleron licensed the HDAC inhibitor AZD9468 (now CDX101) from AstraZeneca. AstraZeneca was granted the option to discuss reacquisition of commercialization rights in the future
SuperGen (Dublin, CA, USA) | GSK (Brentford, UK) | October 2009 | GSK and SuperGen entered into a collaborative agreement under which GSK could license or acquire as many as four discovery-stage and preclinical agents following proof-of-concept experiments. Upfront payment of $5 million with potential of $375 million in royalties and milestones
Celgene | Gloucester Pharmaceuticals | December 2009 | Celgene acquired rights to Istodax (HDAC inhibitor) by acquiring the company for $340 million cash plus potential milestones and royalties worth $300 million
Spectrum Pharmaceuticals (Irvine, CA, USA) | Topotarget | February 2010 | Co-development deal for belinostat (HDAC inhibitor). Spectrum licensed rights for North America and India for an upfront payment of $30 million and potential milestones up to $320 million
CellCentric | Takeda | February 2010 | CellCentric exclusively outlicensed one of its cancer programs to Takeda Pharmaceutical, which could potentially be worth more than $200 million to CellCentric over its course
Cellzome | GSK | March 2010 | GSK gains exclusive license to Cellzome’s Episphere technology as applied to immunoinflammatory disease for an upfront payment of €33 million, with royalties that could equal an additional €475 million
methyltransferase KMT1E will only methylate H3K9 (lysine nine on the H3 tail); so far, it has not been determined to methylate any of the other possible lysines. “To a greater or lesser extent,” says Carey, “these histone methylases and demethylases will only look at very specific residues on a very specific histone tail.” Even so, such enzymes are still working genome-wide rather than targeting DNA or chromatin modifications at particular genes. This is because chromatin modification enzymes are recruited to specific genes by DNA-binding proteins or co-factors or RNAs that localize the enzyme complex to a specific stretch of sequence. In themselves, they have no inherent specificity for a particular stretch of DNA. Hence, one last intriguing possibility for epigenetic therapeutic intervention is RNA or cofactors that direct traffic to specific genes. New evidence continues to be found that antisense RNAs transcribed from the genome play a role in DNA methylation, chromatin modifications and monoallelic expression (e.g., in X-chromosome-inactivation–linked or imprinting disorders). Such transcripts might be targeted directly with oligos, or their effects mimicked by oligos designed to reproduce natural antisense mechanisms (Box 1).
Greater specificity?
One key question for epigenetic therapy is, What actually provides target specificity for a particular HMT, HDM or, for that matter, any other chromatin-altering enzyme? “That’s the question I asked when I first looked at this job,” says Mark Goldsmith, who joined Constellation Pharmaceuticals (Cambridge, MA, USA) as CEO in August 2009. “The complexity of factors upregulating or downregulating genes is daunting,” he says, “but the problem is at least becoming more approachable.” A few puzzle pieces are now starting to come together to produce the beginnings of a blueprint that describes where certain marks cluster within or around genes according to their activity status. For example, if a histone is methylated at position H3K9, it tends to be on genes that are inactive; however, H3K4 methylation seems to be on active genes. Some unpublished research in the laboratories at Constellation Pharmaceuticals has shown a change of expression in “tens of genes” when histone methylation enzymes were inhibited, “not hundreds or thousands” as with HDAC inhibitors, according to Goldsmith. “The fundamental premise and strategy of our company and the field of epigenetics right now
is the belief that these enzymes are much more selective,” says Goldsmith, whose company is currently surveying an array of more than 50 HMTs and more than 30 HDMs. “They are much more selective in what their substrates are as well as how they are regulated,” he says. “Most importantly they are selective in their cumulative effect on target genes. But we are still in very early-stage discovery.” Up until now, no company has been willing to show its hand on exactly which enzymes might be targeted next, but it appears that one of the first of the new-generation epigenetic agents advancing to clinical trials will be a histone demethylator. “I think that over the next 18 months you’ll see some histone methyltransferase inhibitors getting into the clinic,” says Will West of CellCentric. “We know of a number of parties who are actively working on that, and we ourselves are, of course, working in that space as well.” West won’t confirm any specific targets his company may be going after, but in late February CellCentric licensed one of its cancer programs to Asia’s largest drug maker, Takeda Pharmaceutical (Osaka, Japan), which is looking to develop new oncology products out of the deal; the agreement could be worth more than $200 million to CellCentric (Table 2).
Takeda oncology research division leader Osamu Nakanishi says his company is looking into other unnamed epigenetic enzymes as well as the histone methylases CellCentric has identified. “Epigenetics controls so many things,” he says. “And it has decisive power for inducing cancer.” Although his work at Takeda is limited to oncology, Nakanishi ponders other possibilities, and he suggests that it would be interesting to look into targeting epigenetic enzymes for indications beyond cancers, particularly in metabolic disease, such as diabetes, and central nervous system (CNS) disorders, such as schizophrenia and depression (Box 2). The CNS connection is noteworthy because neurons are rarely mitotic after they are in place, and thus gene mutation would not be a likely culprit in disease etiology as it often is in cancers. “But epigenetic mutation or changes are certainly a possibility in CNS disease,” says Nakanishi. He sums up Takeda’s perspective on next-generation epigenetic therapies: “HDACs are so nonselective that even if we had a very specific HDAC inhibitor, we would still see toxicity, and that would limit dose,” he says. “Also, it is still not clear that even very selective HDAC inhibitors could be applied to specific diseases.” Although the new crop of epigenetics companies has been understandably coy about what newly identified targets they might be pursuing, some candidates have been receiving
Figure 2 Making marks. The Polycomb group protein EZH2 adds methyl groups to histone proteins, primarily to lysine 27 of histone H3 (small blue circles). EZH2 recruits DNA methyltransferases (DNMTs, small green circles) to certain target genes, either setting up new methyl marks during development or maintaining the marks after each round of cell division. These marks could recruit further repressive complexes, such as PRC1 or HDAC co-repressors, through methyl-binding domain proteins. (Reprinted with permission from Taghavi, P. and van Lohuizen, M., Nature 439, 794–795, 2006)
wide attention. In the histone methylation pathway, there is a group of Polycomb proteins, one of which is the enhancer of zeste homolog 2 (EZH2), which trimethylates H3K27 and is currently drawing lots of interest from investigators (Fig. 2). Certain mutations occurring in the gene encoding the HDM UTX result in copious EZH2 expression, and there is strong in vivo evidence that inhibiting EZH2 function reduces histone methylation at that particular lysine residue (H3K27) and significantly reduces tumor cell proliferation in various models2. Furthermore, high EZH2 expression in solid tumors tends to indicate a poor prognosis. Aside from these observations, one other reason for the interest is that EZH2 overexpression seems to cause gene silencing in parallel with DNA hypermethylation3. Translational investigator and medical oncologist Jean-Pierre Issa of the University of Texas M.D. Anderson Cancer Center (Houston) has been working with EZH2, and he believes it’s going to be an important target if an agent can be developed to counteract it. “We have evidence that the EZH2 mechanism is used in the same way DNA methylation is used to turn off tumor suppressor genes,” he says. “If you inhibit its expression, it abolishes the growth of the tumor and induces senescence in some cancers. I suspect that this will be the first target that companies are developing drugs for.”
Epigenetics company Epizyme (Cambridge, MA, USA) may have a leg up with EZH2 because it holds an exclusive license for this target as well as for a second HMT, DOT1L, based on work done by Yi Zhang at the University of North Carolina at Chapel Hill. According to Zhang—who in 2007, along with Robert Horvitz of MIT, co-founded Epizyme—the company has rights to the two best-characterized HMTs, EZH2 and DOT1L. Epizyme CEO Robert Gould says that RNA interference (RNAi) screens carried out at the company to downregulate HMTs associated with cancer have demonstrated that such changes influence both epigenetic marks and invasive phenotypes. Epizyme is currently screening for small molecules that recapitulate the effect of RNAi and is optimizing the resulting candidate drugs in-house.

Expanding indications
There has been considerable debate as to why epigenetic inhibitors have been more successful in hematologic cancers than in other malignancies; indeed, solid tumor results with HDAC and DNMT inhibitors have been lackluster at best. Investigators have been looking to the future, anticipating that more selective enzyme antagonists will improve results. Researchers suspect that there are likely hundreds of epigenetic modifiers, and therefore the surface has hardly been scratched in determining which are the best enzymes to inhibit or modulate.
Box 2 Marks in the brain
Several neurodegenerative diseases have their etiology linked to epigenetics. Rett syndrome, the first for which the link was made in 1999, is caused by a mutation in a methyl-CpG-binding-domain protein. Several syndromes involving repeat expansions, such as fragile X, in which a CGG triplet is repeated, and Friedreich’s ataxia, in which GAA·TTC triplets are expanded within an intron of the gene encoding frataxin, show silencing of critical genes due to hypermethylation. In the case of Friedreich’s ataxia, silencing of frataxin is accompanied by hypoacetylation of histones 3 and 4 and trimethylation of H3K9. Researchers at Scripps Clinic (La Jolla, CA, USA) developed a class of HDAC inhibitors that reversed the frataxin silencing6, which Repligen (Waltham, MA, USA) has licensed and is taking to the clinic. The company filed an investigational new drug application for one drug, RG2833, and was granted orphan drug status by the FDA in May of this year. Based on a Drosophila model of neurodegenerative disease, EnVivo (Watertown, MA, USA) claims to have developed the only CNS-penetrant HDAC inhibitor, which it reported in 2008. According to its website, the company is exploring this drug for CNS indications such as Alzheimer’s and Parkinson’s disease.
volume 28 number 12 DECEMBER 2010 nature biotechnology
One reason why HDAC and DNMT inhibitors may have been less successful in solid tumors is that, like any other new, experimental therapy, they have often been tested in individuals with advanced disease, who have already been through several cycles of treatment with chemotherapies or with targeted cytostatic agents and are therefore very sick. In solid tumors, HDAC inhibitors have been tested as fifth- and sixth-line therapies, in patients who have already become multidrug resistant. In contrast to solid tumors, there were no good first-line therapies available for myelodysplastic syndromes (MDS) and, as a result, DNMT inhibitors were used in people naive to therapy. “This was just an accident of drug development,” says Jean-Pierre Issa, who was principal investigator for the pivotal trial that preceded FDA approval of the DNMT inhibitor Dacogen (decitabine) in May 2006. “To my knowledge, no one has really tested these drugs [DNMT inhibitors] in solid tumors as first-line therapies to determine if their activity is actually substantially lower than it is in MDS.” Issa designs clinical trials and sees patients whose newly diagnosed disease is far advanced. He says he would like to see oncologists speak frankly
Table 3 Select epigenetic inhibitors in various stages of development

HDAC inhibitors

| Company | Target | Agent | Indications | Development stage |
| --- | --- | --- | --- | --- |
| Merck | HDAC | Zolinza (vorinostat; suberoylanilide hydroxamic acid, SAHA); nonselective, micromolar potency | NSCLC; multiple myeloma; mesothelioma; GBM; latent HIV infection in T cells | Phase 3; Phase 3; Phase 2; Discovery |
| Celgene | HDAC | Istodax (romidepsin; formerly FK228); selective for class I HDACs, nanomolar potency | Peripheral T-cell lymphoma; multiple myeloma | Phase 2; Phase 2 |
| Pfizer (New York) | HDAC1, 3, 6 and 8 | CI-994 (N-acetyldinaline, also tacedinaline); originally developed as an anticonvulsive | NSCLC; pancreatic cancer; MM | Phase 3; Phase 2; Phase 2 |
| Topotarget | HDAC | Belinostat (PXD101); hydroxamic acid type | Peripheral T-cell lymphoma (monotherapy); CUP (BelCaP); ovarian cancer (mono and with carboplatin); hepatocarcinoma (monotherapy); soft tissue sarcoma (mono and with doxorubicin); NSCLC (mono and in combination); AML and MDS | Phase 2; Phase 2; Phase 2; Phase 2; Phase 1/2; Phase 1/2; Phase 1/2 |
| Novartis | HDAC | Panobinostat (LBH589); nonselective, very potent; discovered in house by Peter Atadja | Multiple myeloma; CTCL; CML; Hodgkin’s lymphoma; metastatic melanoma; prostate cancer | Phase 3; Phase 3; Phase 3; Phase 3 (not open yet); Phase 2; Phase 2 |
| Italfarmaco (Milan) | HDAC | Givinostat (ITF2357) | Juvenile idiopathic arthritis; Hodgkin’s lymphoma; polycythemia vera | Phase 2; Phase 2; Phase 2 |
| Syndax Pharmaceuticals | HDAC1, 2 and 3 | Entinostat (SNDX-275; formerly MS-275); selective for HDACs 1, 2 and 3 | NSCLC (in combination with Tarceva (erlotinib)); breast cancer (in combination with aromatase inhibitors); leukemia and MDS (in combination with Vidaza (5-azacitidine)) | Phase 2; Phase 2; Phase 2 |
| MethylGene (Montreal) | HDAC; fungal Hos2 HDAC | Mocetinostat (MGCD0103), selective for class I HDACs; MGCD290, selective for fungal HDAC enzymes | Lymphoma; AML; fungal infections | Phase 2; Phase 2; Phase 1 |
| EnVivo Pharmaceuticals | HDAC | EVP-0334 (central nervous system (CNS) penetrant) | Alzheimer’s disease and other CNS indications; schizophrenia | Phase 1; Preclinical |
| Sirtris (Cambridge, MA, USA; unit of GlaxoSmithKline) | HDAC (SIRT1) | SRT501, selective for class III HDACs | Colon cancer; multiple myeloma | Phase 2; Phase 2 |
| Curis (Cambridge, MA, USA) | HDAC; EGFR/ErbB1; HER2/neu | CUDC-101 (a hydroxamic acid moiety integrated into a quinazoline receptor tyrosine kinase inhibition pharmacophore) | Advanced refractory tumors | Phase 1 |
| Johnson & Johnson (New Brunswick, NJ, USA) | HDAC | JNJ-26481585, selective for class I HDACs; potential for enhanced HSP70 upregulation and Bcl-2 downregulation | MDS; CML; CLL; B-cell lymphomas; BCR-ABL-positive leukemia | Phase 1 |
| Pharmacyclics (Sunnyvale, CA, USA) | HDAC; HDAC8 | PCI-24781 (broad-range multi-isoform HDAC inhibitor); PCI-34051, selective for HDAC8 | Sarcomas; lymphomas, MM and CLL; T-cell lymphomas; pediatric neuroblastoma; autoimmune disease | Phase 1 and 2; Phase 1; Preclinical |

(continued)
Table 3 Select epigenetic inhibitors in various stages of development (continued)

DNMT inhibitors

| Company | Target | Agent | Indications | Development stage |
| --- | --- | --- | --- | --- |
| Celgene | DNMT | Vidaza (5-azacitidine) | AML | Phase 2 |
| Eisai (Tokyo); sublicensed by Eisai to Johnson & Johnson | DNMT | Dacogen (decitabine) | CML; AML; AML (elderly); ovarian cancer | Phase 3; Phase 3; Phase 2; Phase 2 |

HMT inhibitors

| Company | Target | Agent | Indications | Development stage |
| --- | --- | --- | --- | --- |
| Takeda Pharmaceutical (Osaka); licensed from CellCentric | HMTs; histone ubiquitin-related enzymes | HMT inhibitors | Cancers | Discovery |
| CellCentric | HMTs; histone ubiquitin-related enzymes; HDM | HMT inhibitors | Cancers | Discovery |
| Cellzome | HMTs | HMT inhibitors | Inflammatory and autoimmune diseases, such as rheumatoid arthritis, multiple sclerosis and inflammatory bowel disease | Discovery |
| Epizyme | HMTs | HMT inhibitors | Cancers | Discovery |
| Constellation Pharmaceuticals | HMTs | HMT inhibitors | Cancers | Discovery |
HDAC, histone deacetylase; DNMT, DNA methyltransferase; HMT, histone methyltransferase; HDM, histone demethylase; CUP, cancers of unknown primary origin; GBM, glioblastoma multiforme; MM, multiple myeloma; NSCLC, non-small cell lung cancer; CLL, chronic lymphocytic leukemia; CML, chronic myelogenous leukemia; MDS, myelodysplastic syndromes; AML, acute myeloid leukemia.
with patients about using investigational drugs in late-stage disease before traditional cytotoxic agents have been tried. A generation of newer HDAC inhibitors is coming online, including Celgene’s (Summit, NJ, USA) Istodax (romidepsin), which shows highest activity against class I HDACs and was approved in November 2009. There is also the very potent, nonselective HDAC inhibitor panobinostat (LBH589), being developed by Novartis (Basel), which is in phase 3 trials for cutaneous T-cell lymphoma (CTCL) as well as phase 2 for prostate cancer and metastatic melanoma. “I don’t think we’ve heard the end of histone deacetylase inhibitors, perhaps not as single agents but in combination with other drugs,” says Issa (Box 3). “I’m confident we’re going to find more applications for them.” Takeda’s Nakanishi doesn’t want to shut the door on existing drugs either. He proposes that there may be a way to make HDAC inhibitors work more effectively by combining them with other epigenetic modulators. “That would be a potential strategy for regulating more specific, narrower gene sets,” he says (Table 3). Several biotechs as well as some pharma companies have programs in clinical trials
Box 3 It takes two (or maybe more)
Several companies are looking beyond single agents to drug combinations. Topotarget (Copenhagen) has a pan-HDAC inhibitor, belinostat, in multiple clinical trials as a single agent (Table 3) and in combination with a number of chemotherapeutic agents, including carboplatin, a DNA-modifying agent, and Taxol (paclitaxel), which interacts with microtubules. The rationale is that opening up the chromatin with an HDAC inhibitor gives DNA-damaging agents greater access. In the case of the belinostat and Taxol combination, the two drugs have shown synergy in preclinical studies of tubulin acetylation and antiproliferative activity. Acetylation of tubulin stabilizes microtubules and thus compromises the ability of cancer cells to divide. Henri Lichtenstein, chief development officer at Topotarget, says the company’s dose-escalation studies in solid tumors show that belinostat can be combined safely with therapeutic doses of the two chemotherapies. The company looked further into ovarian and bladder cancer and had good results even with platinum-resistant tumors, which has gotten the attention of the gynecological group at the US National Cancer Institute (Bethesda, MD, USA), which is sponsoring further clinical studies in gynecological cancers. The company is also completing
a proof-of-concept study with belinostat plus carboplatin and Taxol (BelCaP) in cancers of unknown primary origin (CUP). “What’s also very interesting [with CUP] is that we can go into first-line patients since there is no effective treatment,” says Lichtenstein. Elsewhere, quasi-virtual biotech company Syndax (Waltham, MA, USA) is focused entirely on combination therapies in solid tumors, where the need for good therapies is the greatest. Based on work from Ron Evans’ laboratory at the Salk Institute (La Jolla, CA, USA) on regulating the expression of nuclear hormone receptors, Syndax is attempting to reverse drug resistance, in particular resistance to hormone therapy that develops in some breast cancers. According to CEO Joanna Horobin, “[Combining HDAC inhibitors with conventional chemotherapy] has a well-trodden regulatory path, but it was not clear to us that it was playing to the specific science.” Instead, they are trying combinations that “leverage the role of modulating genes that interfere with cell growth in cancer, particularly in [a] solid tumor setting,” she says. The company has two phase 2 programs in metastatic breast cancer in combination with aromatase inhibitors, and in lung cancer combining Tarceva (erlotinib) and Vidaza (5-azacitidine).
Figure 3 Programs in epigenetics by phase of development. (Source: CellCentric) [Bar chart: number of programs (0–60) at each stage (discovery, phase 1, phase 2, phase 3, launched) for inhibitors of HDACs, DNMTs, histone demethylases, histone methyltransferases, histone ubiquitin ligases and histone-ubiquitin-specific proteases.]

targeting HDACs (Table 3). Furthermore, the number of HDAC inhibitor programs in all stages of development greatly outnumbers that of any other class of epigenetic targets (Fig. 3). MethylGene (Montreal), which was among the first companies to focus on epigenetic targets and the first to commit to identifying isoform-selective inhibitors, according to Jeffrey Besterman, CSO at MethylGene, currently has a class I ‘selective’ HDAC inhibitor, mocetinostat (MGCD-0103), an orally active phenylene diamine derivative in phase 2 trials for follicular lymphoma. The company also has another HDAC inhibitor, MGCD-290, a specific inhibitor of the fungal HDAC Hos2. Results of a phase 1 clinical trial of this compound were reported at the Interscience Conference on Antimicrobial Agents and Chemotherapy in September and showed the drug was well tolerated, with no drug-drug interactions at the highest dose.

Looking forward
Researchers anticipate that greater selectivity will be a major factor in improving clinical responses to epigenetic inhibitors. Although drug development programs often seek to increase therapeutic index by increasing target selectivity, the latter doesn’t always yield the desired results. In a collaboration that took place during 2005 and 2006 between Boehringer Ingelheim (Ingelheim am Rhein, Germany) and the Research Institute of Molecular Pathology
Box 4 Diagnostic test development
There has been explosive growth in commercial operations offering tests for detecting methylation patterns in certain malignancies. Basic epigenetic knowledge can be translated into diagnostic products more rapidly because the path to market is abbreviated compared with drug development; usually only Clinical Laboratory Improvement Amendments certification is required for companies to set up assays in reference pathology laboratories in the United States. On the other hand, tests often have shortened life cycles due to the introduction of new and improved versions by competitors. Two colorectal cancer (CRC) blood assays based on DNA hypermethylation are in late-stage development and clinical use (Table 4) to noninvasively check for nascent disease or to see if an individual is at risk for cancer. One, being developed by Orion Genomics (St. Louis) on the basis of technology exclusively licensed from Johns Hopkins University (JHU; Baltimore), detects loss of imprinting of insulin-like growth factor 2 (IGF2) as a potential risk factor for colorectal cancer in younger people. Loss of imprinting creates imbalance through increased expression, and indeed, loss of IGF2 imprinting has been shown to occur in nearly every human tumor that has been examined. Colon cancer is a particular instance in which both copies of the gene are expressed inappropriately. “There probably should be no IGF2 production
Table 4 Select epigenetics diagnostics companies

| Company | Technology | Products or development stage |
| --- | --- | --- |
| Epigenomics | Diagnostics and biomarkers by means of DNA methylation analysis | Main product is mSept9, a test based on methylation of SEPT9 for early detection of colorectal cancer via blood sample; CE marked in October 2009 and marketed in the EU; licensed to Abbott Molecular for its m2000 real-time PCR automated system (Abbott will submit a PMA in late 2010); Quest Diagnostics will offer the test in the US. Lung cancer via bronchial lavage sample, expected to be launched in the EU in 2010. Prostate cancer via urine sample, in development |
| Epiontis (Berlin; spin-off from Epigenomics in July 2003) | Diagnostics and biomarkers via DNA methylation analysis | Foxp3 marker for regulatory T cells for cancer diagnostics and screening, in development. T-cell marker for cancer screening, in development. CD4 T cell (naive and memory) assay in development for immune monitoring. Collaboration with Genzyme’s cartilage repair product, Carticel, to develop biomarker assays for cell purity and quality of product |
| Exact Sciences (Madison, WI, USA) | Multiplex DNA assay | Stool-based DNA test that combines mutation testing, methylation marker analysis and immunochemical tests |
| Orion Genomics | Diagnostics and biomarkers via DNA methylation analysis | Blood test for detection of loss of imprinting of the pro-growth, anti-apoptotic IGF2 gene as an assay for colorectal cancer risk; technology is licensed from Johns Hopkins University and is in development. Bladder, lung and ovarian cancer assays in development |
| Oncomethylome Sciences (Liege, Belgium) | Diagnostics and biomarkers via DNA methylation analysis | Prostate cancer via urine and tissue samples, partnered with LabCorp and Johnson & Johnson, late-stage development. Other cancers partnered with LabCorp, Schering-Plough (now Merck & Co.) and Merck, in development. Colorectal cancer screening via blood and stool, in development, partnered with LabCorp. Bladder cancer testing via urine sample, in development. Lung cancer, in early-stage development. Pharmacogenomic tests, in early-stage development with partners Abbott Molecular and GSK Biologicals |
| Valirx (London) | Cancer diagnostics | Nucleosomics cancer detection test measures specific histone modifications that correlate with certain cancers |
| Sequenom (San Diego) | Tools for research, including software | EpiPanels for cancers and imprinting; marketed |
| Enzo Life Sciences (Farmingdale, NY, USA) | Tools for research | Tools for histone- and DNA-modifying enzymes and for protein methylation and demethylation; development of kits for HDACs and lysine-specific demethylases |

PMA, premarket approval; CE, meets requirements of applicable European directive.
Box 4 Diagnostic test development (continued)
in colon cells—or at least you should have production from one allele only,” says epigenetics researcher and gene imprint expert Randy Jirtle of Duke University (Durham, NC, USA). Researchers from Orion and JHU are collaborating to improve Orion’s IGF2 assay, and independent of Orion, JHU is conducting large-scale prospective trials to establish the lifetime cancer risk. Prior published retrospective studies involving more than 200 patients have demonstrated that CRC patients are 21 times more likely to harbor the IGF2 biomarker in their blood than appropriate cancer-free subjects7. “When clinical trials are completed, we will market the Orion CRC Risk Test to the 81.6 million people in their twenties and thirties in the United States and to the primary care physicians who treat them,” says Nathan Lakey, Orion’s CEO. Another CRC test, based on hypermethylation of SEPT9, is already being marketed in the European Union by Epigenomics (Berlin and Seattle). The Epigenomics CRC test has not yet been approved for use in the United States, but the company’s nonexclusive licensee Quest Diagnostics (Madison, NJ, USA) is set to market those laboratory services to physicians and patients once the test has been approved. If CRC can be identified at an early stage while still localized, the 5-year survival rate is high, ~90%. Yet only 40% of colorectal cancers are diagnosed as early-stage disease, one reason being resistance to colonoscopy exams due to fear of discomfort as well as economic concerns. The SEPT9 biomarker accurately picks up 62.75% of tumors, which is more precise than current methods of testing fecal samples.
Early diagnosis can give patients the best prognosis, no matter the disease. With autism spectrum disorders (ASDs), there is no specific blood test or imaging study that can deliver a definitive diagnosis so that young children can begin treatment when it is likely to be most effective. Because ASDs occur three to six times more frequently in males than in females, clinical genetics researchers Julie Jones and Michael Friez at the Greenwood Genetic Center (GGC; Greenwood, SC, USA) are working on the hypothesis that the syndrome is X linked8. With chip developer Oxford Genome Technologies (OGT; Oxford, UK), they have developed a microarray system to analyze expression at GGC and so far have identified a subset of eight genes on the X chromosome that are differentially overexpressed or underexpressed in a cohort of 17 boys with autism. A larger study is under way, and so far the trend of dysregulation continues in the new data set, but the eight genes do not stand out to the degree they did in the first round of data. Given the heterogeneity of patients, screening so few genes to obtain a definitive diagnosis might not be possible. What is apparent is that there is a greater degree of gene dysregulation in the autism cohort compared with the controls. “We are working with OGT to come up with the most appropriate methylation array design based on this most recent data set,” says Friez. The etiology of ASDs has been daunting for investigators. “You could argue that it’s really a chromosome- or genome-wide dysregulation of methylation and that it may just be a vulnerable location in males only because there’s not a second copy of the X chromosome,” says Friez. “We just don’t know yet.”

for the H3K9me2 HMT G9a, which is implicated in such cancers as liver and prostate. Investigators ended up with seven confirmed hits and ultimately identified a single small-molecule compound, BIX-01294 (a diazepin-quinazolin-amine derivative), which was specific for G9a. “It had only modest potency,” says Boehringer’s medicinal chemist and senior principal scientist Michael August. “At the time, the project didn’t fit any of our therapeutic program objectives,” he says, but Boehringer did make the compound available to academic and translational researchers through a material transfer agreement. “There has been a lot of interest in the compound from the academic community,” he says. “We sent out a lot of samples to various groups.” Post-doctoral fellow Stefan Kubicek of the Broad Institute (Cambridge, MA, USA) was principal laboratory investigator on the screening project at IMP when he was a graduate student; he has found BIX-01294 to be “very selective,” regulating as few as 50 genes, 20 of which are upregulated “in a common biochemical pathway.” But toxicity has been “a major problem in preclinical mouse models,” he says. The future will undoubtedly bring greater selectivity and precision in epigenetic therapies. In the meantime, there are nearer-term commercial opportunities in molecular diagnostics (Box 4 and Table 4). What is clear is that advances in our understanding of chromatin remodeling processes and their control are unlikely to translate into panaceas, especially in cancers that seem to escape and go around
other modalities. “So far, these drugs [HDAC and DNMT inhibitors] have been helpful but not curative,” says Issa. “I don’t believe there is anything magical about epigenetic therapy as opposed to other kinds of therapies. Resistance often develops, and this is the nature of cancer,” he says. “I think we just have to deal with it and look for other epigenetic approaches because multiple modality therapy is the way to overcome resistance in general.”

1. Bostick, M. et al. Science 317, 1760–1764 (2007).
2. Sooryanarayana, V. Nature 419, 624–629 (2002).
3. Kondo, Y. et al. Nat. Genet. 40, 741–750 (2008).
4. Matouk, I.J. et al. PLoS ONE 2, e845 (2007).
5. Gupta, R.A. et al. Nature 464, 1071–1076 (2009).
6. Herman, D. et al. Nat. Chem. Biol. 2, 551–558 (2006).
7. Cui, H. et al. Science 299, 1753–1755 (2003).
8. Jones, J.R. et al. Am. J. Med. Genet. A 146A, 2213–2220 (2008).
patents
On firm ground: IP protection of therapeutic nanoparticles Paul Burgess, Peter Barton Hutt, Omid C Farokhzad, Robert Langer, Scott Minick & Stephen Zale
The development of generic therapeutic nanoparticles faces a combination of scientific, patent and regulatory challenges.
Over the past decade, nanotechnology has shown the potential to dramatically affect several fields. In the healthcare sector, there has been intense investment in the development of therapeutic nanoparticles for treatment of common diseases such as cancer, inflammatory disorders, infectious disease and cardiovascular disease. Despite the significant potential benefits of these products to patients and their families, no authoritative article has addressed the intellectual property (IP), regulatory, scientific, clinical and commercial advantages of therapeutic nanoparticles. This article outlines the reasons why innovative pharmaceutical products based on therapeutic nanoparticles offer particular advantages in IP protection from generic competition.

Paul Burgess, Scott Minick and Stephen Zale are at BIND Biosciences, Cambridge, Massachusetts, USA; Peter Barton Hutt is at Covington & Burling, Washington, DC, USA; Omid C. Farokhzad is at Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts, USA; and Robert Langer is at Massachusetts Institute of Technology, Cambridge, Massachusetts, USA. e-mail: [email protected]

Figure 1 Market exclusivity: therapeutic nanoparticle versus conventional drug product. PK, pharmacokinetic. [Diagram contrasting a conventional drug product (formulation patents; PK study does not assess drug at site of action) with a nanoparticle therapeutic (platform IP, process patents, formulation patents and manufacturing trade secrets).]

What is a therapeutic nanoparticle?
Parenterally administered therapeutic nanoparticles typically comprise an active ingredient together with organic or inorganic biomaterials and range from 50 to 200 nm in diameter. Several classes of therapeutic nanoparticles, commercially available or in development, include liposomes, albumin–drug complexes and polymeric nanoparticles. The earliest therapeutic nanoparticles were liposomal drugs. For example, Doxil, which was approved by the US Food and Drug Administration (FDA) in 1995 for treatment of Kaposi’s sarcoma, is a long-circulating liposome containing doxorubicin1. More recently, there has been interest in therapeutic nanoparticles composed of biodegradable polymers, which offer the long circulation times characteristic of liposomes together with controllable drug-release kinetics mediated by polymer biodegradation2.

After parenteral administration, these therapeutic nanoparticles can selectively accumulate in particular tissues or body locations, thereby enhancing the delivery of the drug payload to the site of disease. Many therapeutic nanoparticles access diseased tissue through the enhanced permeability and retention (EPR) effect. The EPR effect occurs in tumors, sites of inflammation and other diseased body tissues where the blood vessels are either disrupted or not fully formed and, as a result, are leakier than normal vessels. The EPR effect allows nanoparticles to pass readily from the blood vessels into the tumor or inflamed tissue and to be differentially retained while the drug is released from the particles. It has been demonstrated that, as a result of this effect, therapeutic nanoparticles accumulate in a particular target tissue location and deliver more of the drug to the disease site over a longer period of time than a conventional drug product administered as a solution. Currently marketed therapeutic nanoparticles use passive targeting; that is, they rely exclusively on the EPR effect to enhance delivery of drugs to the site of disease. Actively targeted nanoparticles now under development are surface-functionalized with targeting ligands that impart even greater specificity by binding to receptors located on diseased cells or tissues. The targeting ligand enhances the binding and retention of the nanoparticle to the target tissue and can also mediate the uptake or ingestion of the nanoparticle (and its therapeutic payload) by a target cell type. Whether a disease site is actively or passively targeted, therapeutic nanoparticles can increase the concentration of drugs in diseased cells or tissues and reduce the concentration where the drugs can cause undesirable side effects. Moreover, therapeutic nanoparticles enable the drug developer to modulate the trafficking of the active pharmaceutical ingredient within the body without altering its molecular structure, and hence its inherent pharmacological activity. The decoupling of drug disposition from molecular structure affords the possibility of developing better drugs with fewer tradeoffs than is possible with conventional drugs, where both biodistribution and pharmacological activity depend on molecular structure and cannot be manipulated independently. This interdependence often forces undesirable compromises or, worse, abandonment of otherwise promising new drug candidates.
Table 1 Currently approved nanoparticle or liposomal products on the US market

| Trade name | Generic name | Date approved | Patents listed | Par IV filing | Generic equivalent |
| --- | --- | --- | --- | --- | --- |
| Doxil | Doxorubicin | 1995 | No | No | No |
| Depocyt | Cytarabine | 1999 | Yes | No | No |
| Ambisome | Amphotericin B | 1997 | Yes | No | No |
| Abelcet | Amphotericin B | 1995 | No | No | No |
| Amphotec | Amphotericin B | 1996 | No | No | No |
| DepoDur | Morphine | 2004 | Yes | No | No |
| Daunoxome | Daunorubicin | 1996 | No | No | No |
| Abraxane | Paclitaxel | 2005 | Yes | No | No |
A successful therapeutic nanoparticle must be precisely optimized to prevent its removal from the circulation by the body’s defense systems before it has a chance to reach its target, maximize trafficking to the desired location and minimize accumulation at sites where the drug may cause side effects. Like conventional pharmaceutical products, it is necessary to develop a robust and well-controlled manufacturing process that produces nanoparticles of uniform quality from batch to batch; however, nanoparticle manufacturing has additional complexities to sort through to uniformly and precisely control the critical parameters that lead to their therapeutic benefits. Therapeutic nanoparticles can encompass both existing and new active pharmaceutical ingredients. Indeed, when manufacturers use validated active pharmaceutical ingredients, the risk of drug development can be substantially lower as compared to development of a new chemical entity. The end product will still be a substantially differentiated product and retain market exclusivity as well as or better than a new chemical entity (Fig. 1). This exclusivity is likely to extend beyond patent expiration as a result of additional barriers to generic competition discussed below that are unique to therapeutic nanoparticles, thereby potentially reducing or avoiding the sudden revenue fall-off for successful proprietary drugs that confronts the pharmaceutical industry today (Fig. 2). Patenting of nanoparticles A therapeutic nanoparticle is a complex, yet uniformly precise, platform for developing new drugs. Because of the complexity of these nanoparticles, opportunities abound for obtaining meaningful patent protection. For example, a polymeric nanoparticle can vary by polymer type, polymer size, drug, mix of polymers, surface characteristics, targeting ligands, content of drug encapsulated, concentration of nanoparticles or manufacturing process. 
Each one of those factors can have a considerable effect on the behavior of the nanoparticle in a biological system. Polymeric nanoparticle characteristics critical to its safety and effectiveness, such as its 1268
target tissue accumulation, pharmacokinetics (clearance and volume of distribution), stability and drug release kinetics, are dependent on numerous formulation and process parameters. The combination of many factors leading to many outcomes provides numerous grounds for patentability. Opportunities exist to patent (i) particular aspects of therapeutic nanoparticles critical for effective use, (ii) classes of therapeutic nanoparticle compositions, (iii) specific therapeutic nanoparticle compositions, (iv) methods of using therapeutic nanoparticles, (v) methods of manufacturing therapeutic nanoparticles, (vi) therapeutic nanoparticles of particular drugs and (vii) the pharmacokinetic profile of particular nanoparticles. As they do for new chemical entities, patenting opportunities exist in the discovery phase of identifying new therapeutic nanoparticles. Later in the product development cycle, additional opportunities exist to patent clinical-stage uses and dosing schedules. These patent opportunities can be combined with existing IP to protect a new drug entity or can be the basis of a new patent estate protecting an existing active pharmaceutical ingredient or product. In addition, many aspects of developing and manufacturing nanoparticle products involve extensive know-how to optimize the product.

Manufacturing challenges
Therapeutic nanoparticles present many of the same scientific challenges for generic drug entry as complex biological drugs. Very small changes to the manufacturing process of a therapeutic nanoparticle can result in a different nanoparticle altogether, both with respect to its chemical composition and structure and to its ultimate therapeutic effects. Changes to the manufacturing process of a therapeutic nanoparticle can affect the drug content of the particle and the size and chemical makeup of the polymer component of the particle.
Structural parameters, including the nanoparticle surface properties and size, and the spatial arrangement of the polymer, drug and targeting ligand, are determined by the manner in which the nanoparticle assembles. Changing any of
these factors may alter critical properties, such as the distribution of the particles in the body or the rate at which the drug is released from the particle, thereby resulting in a different clinical outcome3. Because the analytical tools necessary to characterize therapeutic nanoparticles and their specific biological effects are not yet well established, a generics company may not be able to demonstrate the degree of sameness of a nanoparticle product required for FDA or other regulatory approval. As such, the manufacturer of the branded or ‘pioneer’ drug could have great exclusivity advantages by keeping its final manufacturing process a trade secret or by obtaining process patents. If a generics company cannot practice the same manufacturing process, the generic product probably does not contain the same therapeutic nanoparticle. Simply put, because the FDA requires that a generic product be the “same,” a generics company that lacks access to the same process will face considerable difficulties producing the same product.

Regulatory challenges
Stringent regulatory requirements exist for the entry of generic therapeutic nanoparticles. Table 1 lists the currently approved nanoparticle or liposomal products on the US market. Despite the fact that many of these products lack patent protection today, and that several generate worldwide sales of $250–750 million, none of these products has a generic equivalent4. In fact, the FDA has never approved a parenterally administered generic therapeutic nanoparticle. No generics company has even attempted to seek approval for a generic form of a patent-protected nanoparticle product, and thus there has not been any patent litigation resulting from a generics challenge (that is, a “paragraph IV challenge”).
In essence, the technical and manufacturing complexity, difficulties in demonstrating bioequivalence and regulatory challenges result in longer and more expensive development pathways, which are incompatible with most generic drug business models. The two major regulatory difficulties for the development of generic therapeutic nanoparticles are (i) showing bioequivalence and (ii) FDA requirements for parenteral drug products. For a generic drug company to avoid conducting full clinical trials of safety and effectiveness, it must show that, among other things, the generic product is bioequivalent to the pioneer product that it references and relies on for approval. For typical oral, small-molecule products, this bioequivalence is easily shown by dosing healthy volunteers and measuring and comparing the mean area under the plasma concentration curve and maximum plasma
[Figure 2: profit/loss curves over time contrasting traditional drug economics with nanoparticle drug economics; nanoparticle drugs show a shorter, less expensive path to approval and launch, and longer-term peak sales.]
concentration of the drug and then calculating the 90% confidence interval for the ratio of those mean responses5. If that confidence interval is within 80–125% of the pioneer product for those parameters, the generic product is said to be bioequivalent5. In the case of therapeutic nanoparticles and other locally acting drugs, the standard approach to the evaluation of bioequivalence is not sufficient to show sameness with respect to rate and extent of absorption of the active pharmaceutical ingredient at the site of action. A therapeutic nanoparticle circulates in the plasma and targets specific tissue locations. It is this tissue localization that is responsible for nanoparticles’ enhanced effectiveness and/or safety. A standard bioequivalence test suitable for most small-molecule generic products only assesses plasma drug concentrations in healthy volunteers, and does not reveal whether a generic product has the same tissue distribution and drug concentration in disease sites as a pioneer product. To directly demonstrate equivalent drug concentration at the site of action, a prospective generic company would have to confront several issues. First, it would need to test the generic product in individuals with the target indication because the EPR (enhanced permeability and retention) effect and the product biodistribution would be different in healthy individuals. Moreover, for most anticancer drugs, testing in healthy volunteers is not feasible because of the drugs’ toxicity. Second, noninvasive assays for quantifying active pharmaceutical ingredients or nanoparticles in most tissues do not exist. No assay currently exists to measure the drug release that occurs at the tissue site as opposed to other areas in the body. In principle, the possibility exists of sampling the target tissue and analyzing it for the concentration of nanoparticles and drug. This possibility is only theoretical because the FDA has never accepted this type of data for a generic product.
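The numerical criterion above, a 90% confidence interval for the ratio of mean responses falling within 80–125%, can be sketched in a few lines. The AUC values and the simple z-based interval below are my own illustrative assumptions; an actual bioequivalence analysis follows FDA statistical guidance, with log-transformed data from a crossover design and t-based intervals:

```python
import math

# Hypothetical AUC values (ng·h/mL) for reference and generic products.
# These numbers are invented for illustration, not from any real study.
auc_reference = [100.0, 95.0, 110.0, 105.0, 98.0, 102.0, 97.0, 108.0]
auc_generic   = [ 98.0, 93.0, 107.0, 104.0, 96.0, 100.0, 95.0, 106.0]

def log_mean_and_se(values):
    """Mean and standard error of the log-transformed values."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((x - mean) ** 2 for x in logs) / (n - 1)
    return mean, math.sqrt(var / n)

m_ref, se_ref = log_mean_and_se(auc_reference)
m_gen, se_gen = log_mean_and_se(auc_generic)

# 90% CI for the ratio of geometric means, using a normal (z) quantile
# for simplicity; a real analysis would use a t quantile and the error
# term from a crossover model.
diff = m_gen - m_ref
se_diff = math.sqrt(se_ref ** 2 + se_gen ** 2)
z90 = 1.645
lo = math.exp(diff - z90 * se_diff)
hi = math.exp(diff + z90 * se_diff)

# Bioequivalence criterion: the whole 90% CI must lie inside 80-125%.
bioequivalent = lo >= 0.80 and hi <= 1.25
print(f"90% CI for the ratio: {lo:.3f}-{hi:.3f}; bioequivalent: {bioequivalent}")
```

Working on the log scale makes the 80% and 125% limits symmetric (they are reciprocals), which is why the ratio of geometric rather than arithmetic means is the conventional test statistic.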
Even if a company did test a generic product’s localization through an invasive procedure, it would be unlikely to identify a generic product with comparable localization on the first attempt. Many tissues simply would not be available for sampling without an invasive procedure such as surgery or percutaneous biopsy, which may create an ethical dilemma. Assuming that the necessary techniques to do this type of work could be developed (as these techniques do not currently exist), the cost and timing of such testing could be prohibitive for most generic manufacturers or create a cost structure for a generic drug that would make it difficult to set the price below that of the proprietary medicine. A company would have to recruit patients, possibly conduct an invasive procedure on the patients, and develop new
[Figure 2 Drug development economics: traditional versus therapeutic nanoparticle drugs.]
and complex analytical methods. Even with all that data, the FDA might still require a clinical trial. The FDA discussed some of these complex comparison issues in 2001 (ref. 6) and 2007 (ref. 7) and has yet to decide how to address or solve these issues for therapeutic nanoparticles as a whole. A recent draft guidance relating specifically to abbreviated new drug applications for Doxil, a passively targeted nanoparticle, sets a very exacting standard8. The guidance states that the generics company should conduct clinical studies in cancer patients and assess bioequivalence based on analysis of both free doxorubicin and liposome-encapsulated doxorubicin. It further states that the pivotal clinical study should use test product produced using the proposed commercial-scale manufacturing process, a far more stringent requirement than the 1:10 scale conventionally employed for bioequivalence studies. Moreover, it is recommended that the proposed generic product contain the same lipid excipients produced by the same synthetic route as the reference product, and that the generic product be manufactured using the same process as the reference product. Lastly, the generic product should be equivalent with respect to a broad array of physicochemical characteristics, including liposome composition, physical state of the encapsulated drug, the liposome internal environment, morphology, lipid bilayer fluidity, size distribution, surface chemistry, electrical potential and charge, and in vitro drug leakage under a variety of conditions9. The FDA will likely review each therapeutic nanoparticle on its own and determine the recommended tests on an individual basis. For therapeutic nanoparticles that localize to a greater extent than Doxil, the FDA may also require a showing of drug and nanoparticle concentration at the site of action. The combination of these factors makes for an uncertain, expensive and daunting development route for generic equivalents to therapeutic nanoparticles. A generic therapeutic nanoparticle would also have to meet the FDA’s stringent requirements for drug products intended for parenteral use. The FDA regulations state: “Generally, a drug product intended for parenteral use shall contain the same inactive ingredients and in the same concentration as the reference listed drug identified by the applicant under paragraph (a)(3) of this section. However, an applicant may seek approval of a drug product that differs from the reference listed drug in preservative, buffer, or antioxidant provided that the applicant identifies and characterizes the differences and provides information demonstrating that the differences do not affect the safety or efficacy of the proposed drug product”9. Essentially, this requirement means that a generics company seeking to make a drug product for parenteral use, such as a therapeutic nanoparticle, must develop the same formulation, both qualitatively and quantitatively, as that of the pioneer drug; that is, with certain exceptions, it must contain precisely the same ingredients in precisely the same amounts. The recent guidance from the FDA on liposomal doxorubicin suggests that even changes in preservative, buffer or antioxidant may not be accepted for a generic therapeutic nanoparticle.
This level of sameness contrasts with typical oral formulations. For generic oral products, a generics company can substitute a wide variety of excipients and change their concentrations as compared to the pioneer product. Generics companies often change the excipients in a formulation because the manufacturer of the pioneer drug has a formulation patent and the generics company is looking to circumvent this patent. A generics company’s ability to switch excipients in oral products often limits the value of formulation patents for oral products. The issue for the pioneer drug company is typically that broad formulation claims can be invalidated in litigation and narrow formulation claims are often circumventable. In contrast, because of the FDA’s requirements for parenteral drug products, patents for a therapeutic nanoparticle have a much greater value than formulation patents for an oral product. For example, a pioneer drug company could have a patent claim narrowly covering a nanoparticle formulation, which a generics company could not circumvent. If such a claim covered an FDA-approved therapeutic nanoparticle, a generics company would have to develop an identical product with the exact same polymers and the exact same drug, all in the exact same proportions. Based on the FDA guidance on liposomal doxorubicin, it is now apparent that, at a minimum, the generic nanoparticle would also have to match the pioneer product in state of encapsulated drug, nanoparticle internal environment, nanoparticle morphology, nanoparticle size distribution, polymer orientation, electrical surface potential and drug release9. More complex nanoparticles, such as polymeric nanoparticles, could face even more rigorous regulatory requirements, including clinical outcome data. An attempt to circumvent the claim by substituting a different polymer or slightly reducing the concentration of a particular polymer would not be allowed
for a generic therapeutic nanoparticle. Thus, the combination of even a very narrow composition or process patent claim with the FDA’s general requirements for parenteral drug products and the heightened requirements for therapeutic nanoparticles can result in a significant and long exclusivity period. Even broader patent claims could also be obtained for additional market protection, as described above. A generics company could attempt to circumvent the FDA’s requirements for parenteral drug products by filing a Section 505(b)(2) new drug application (NDA). Such a filing would not require the ‘same’ formulation. Of course, by switching formulations, a generics company would not be able to fully rely on the pioneer drug company’s previous clinical findings. At best, a 505(b)(2) NDA would be able to rely on the toxicology data for the active pharmaceutical ingredient. A 505(b)(2) applicant would need to conduct new effectiveness trials in order to ensure that the new formulation would deliver the same quantity of drug to disease sites over the same time course, or produce the same drug exposure to nondiseased tissues. Because conducting effectiveness trials is the most expensive part of product development, this 505(b)(2) regulatory route is unlikely to be economically viable for the generics drug industry. In addition, products approved through the 505(b)(2) route are not directly substitutable in the pharmacy for the branded product. Hence, a 505(b)(2) generic product would need to be marketed and sold by a sales force, which most generics companies do not have.

Conclusions
Therapeutic nanoparticles offer the potential to dramatically improve the effectiveness and side-effect profile of new and existing drugs. In addition, this new drug class presents equivalent or possibly greater challenges for generic drug entry than other types of pharmaceutical products,
including biologics. The combination of scientific, patent, know-how and regulatory issues makes the development of a generic therapeutic nanoparticle very difficult and inconsistent with the generics business model. Pioneer drug companies have the opportunity to obtain strong patent protection, which can maintain market exclusivity for the full patent term and potentially beyond patent expiration owing to the scientific, manufacturing and regulatory hurdles confronting the development of generic therapeutic nanoparticles. If successful, this opportunity will fundamentally change the pharmaceutical business model by reducing or avoiding the sudden revenue fall-off for successful proprietary drugs that confronts the industry today (Fig. 2).

COMPETING FINANCIAL INTERESTS
The authors declare competing financial interests: details accompany the full-text HTML version of the paper at http://www.nature.com/naturebiotechnology/.

1. Doxil Product Information. Ortho Biotech Products, LP (2008).
2. Farokhzad, O. & Langer, R. ACS Nano 3, 16–20 (2009).
3. Davis, M. Nanotechnol. Law & Bus. 255–262 (September 2006).
4. US Food and Drug Administration. Orange Book: Approved Drug Products with Therapeutic Equivalence Evaluations.
5. US Food and Drug Administration. Guidance for Industry: Bioavailability and Bioequivalence Studies for Orally Administered Drug Products (March 2003).
6. US Food and Drug Administration. Advisory Committee for Pharmaceutical Science Proceedings (July 20, 2001).
7. US Food and Drug Administration. Critical Path Opportunities for Generic Drugs (May 1, 2007).
8. US Food and Drug Administration. Draft Guidance on Doxorubicin Hydrochloride (February 2010).
9. 21 CFR 314.94(a)(iii).
patents
Recent patent applications in pharmacogenomics

Patent number: WO 2010115154, US 20100273219
Description: A method of amplifying target nucleic acids in samples involving preparing an amplification mixture of forward and reverse primers comprising the target-specific portion and a barcode primer, comprising nucleotides, and subjecting the mixture to amplification.
Assignee: Fluidigm (S. San Francisco, CA, USA)
Inventor: Anderson M, Chen P, Kaper F, May A, Wang J
Priority application date: 4/2/2009
Publication date: 10/7/2010, 10/28/2010

Patent number: US 20080038810, US 7816121
Description: A droplet microactuator comprising a substrate with electrodes for conducting droplet operations and temperature control elements arranged near the electrodes to transport nucleic acid droplets on electrodes; useful for nucleic acid amplification.
Assignee: Duke University (Durham, NC, USA), Advanced Liquid Logic (Research Triangle Park, NC, USA)
Inventor: Paik PY, Pamula VK, Pollack MG
Priority application date: 4/18/2006
Publication date: 2/14/2008, 10/19/2010

Patent number: WO 2010115044
Description: A method for depleting a nucleic acid sample of nontarget nucleic acids involving denaturing the sample nucleic acids in a reaction mixture, contacting the denatured nucleic acids with at least one target-specific primer pair under suitable annealing conditions, conducting a first cycle of extension of any annealed target-specific primer pairs and conducting nuclease digestion of single-stranded nucleic acid sequences in the reaction mixture.
Assignee: Fluidigm (S. San Francisco, CA, USA)
Inventor: Zimmermann BG
Priority application date: 4/2/2009
Publication date: 10/7/2010

Patent number: US 20100210025
Description: A system comprising assigning modules to a genome, assigning a value or weight to a module for a given profile, analyzing a genomic sequence to identify modules and assigning a profile to the sequence; useful, e.g., for profiling a genomic sequence.
Assignee: Victor Chang Cardiac Research Institute (Darlinghurst, Australia)
Inventor: George R, Wouters M
Priority application date: 8/15/2006
Publication date: 8/19/2010

Patent number: WO 2008144345, EP 2145021
Description: A method of predicting the likelihood of a therapeutic response to an insulin growth factor-1 receptor (IGF1R) modulator as a cancer treatment, comprising measuring levels of biomarker genes or proteins before and after exposing a sample to the IGF1R modulator.
Assignee: Bristol-Myers Squibb (Princeton, NJ, USA)
Inventor: Attar RM, Carboni JM, Chen J, Dongre AR, Gottardis MM, Hafezi R, Han X, Huang F, Hurlburt W, Robinson DM, Wittenberg GM
Priority application date: 5/17/2007
Publication date: 11/27/2008, 7/22/2010

Patent number: US 20100178655, WO 2010083250
Description: Genotyping a single cell, comprising preamplifying an amplified genome to produce a pre-amplification reaction mixture comprising amplicons specific for target nucleic acids, and amplifying and detecting the amplicons.
Assignee: Fluidigm (S. San Francisco, CA, USA)
Inventor: Hamilton A, Lin M, Mir A, Pieprzyk M
Priority application date: 1/13/2009
Publication date: 7/15/2010, 7/22/2010

Patent number: WO 2010075570
Description: A computer-accessible medium used for assembling at least one haplotype sequence or genotype sequence of at least one genome, with stored computer-executable instructions.
Assignee: New York University (New York)
Inventor: Mishra B, Narzisi G
Priority application date: 12/24/2008
Publication date: 7/1/2010

Patent number: WO 2010048497
Description: A method of diagnosing predisposition to, or progression of, Alzheimer’s disease, comprising determining a genetic profile comprising at least one marker in a specified candidate region and correlating the genetic profile with a reference profile.
Assignee: Genizon Biosciences (St. Laurent, Quebec, Canada), TechnoSynapse (Blainville, Quebec, Canada)
Inventor: Belouchi A, Croteau P, Debrus S, Kebache S, Keith T, Little RD, Paquin B, Poirier J, Raelson JV, Segal J, van Eerdewegh P
Priority application date: 10/24/2008
Publication date: 4/29/2010

Patent number: KR 2010011719
Description: A single-nucleotide polymorphism provided for the calretinin gene, retinoic acid receptor beta gene and pre-B-cell leukemia transcription factor 1 gene; for use in biochips for predicting the risk of obesity.
Assignee: Jeong J
Inventor: Gang S, Jeong J, Jeong M, Kim S
Priority application date: 7/25/2008
Publication date: 2/3/2010

Patent number: JP 4403376
Description: A unit for detecting interaction between substances (e.g., hybridization) that has a reaction region, water-absorptive gel in the reaction region, an electrode and water-holding portion; useful in a substrate for bioassays for analyzing mutations of genes.
Assignee: Sony (Tokyo)
Inventor: Mamine T, Sakamoto Y, Segawa Y, Yuo K
Priority application date: 10/9/2003
Publication date: 1/27/2010

Patent number: US 20090117583
Description: A new epidermal growth factor–like variant in skin (ELVIS)-2 polypeptide; useful for screening assays, detection assays, predictive medicine and methods of treatment.
Assignee: Millennium Pharmaceuticals (Cambridge, MA, USA)
Inventor: Busfield SJ, Gearing DP
Priority application date: 11/19/1998
Publication date: 5/7/2009
Source: Thomson Scientific Search Service. The status of each application is slightly different from country to country. For further details, contact Thomson Scientific, 1800 Diagonal Road, Suite 250, Alexandria, Virginia 22314, USA. Tel: 1 (800) 337-9368 (http://www.thomson.com/scientific).
news and views
Megabases for kilodollars Mikkel Algire, Radha Krishnakumar & Chuck Merryman
The potential applications of synthetic DNA are virtually unlimited, ranging from the design of new genetic circuits and biosynthetic pathways to the creation of complete bacterial genomes1–3. Two reports in this issue from Church and colleagues bring us a step closer to an era of abundant synthetic DNA by addressing the hurdles of high cost and error rates. Kosuri et al.4 and Matzas et al.5 describe approaches that allow relatively inexpensive DNA oligonucleotides (oligos), synthesized on a microarray, to be used to assemble long DNA molecules encoding genes and other genomic elements (Fig. 1). Currently, the price of long synthetic DNA is dominated by the price of the initial oligos, ~$0.10 per nucleotide. The second most costly aspect of constructing synthetic DNA is the identification of assembled products that have the correct sequence, a step that requires sequencing. Sequencing costs vary tremendously depending on the degree of automation and other factors, but under the best of circumstances, they are 10–20% of oligo costs. The methods of Kosuri et al.4 and Matzas et al.5 produce synthetic DNA at a cost of <$0.01 per nucleotide, a tenfold improvement that should make large-scale projects involving megabases of DNA readily affordable. Moreover, Matzas et al.5 demonstrate an intriguing new use of a high-throughput sequencer as a preparative tool, whereby specific DNA is retrieved after it has been sequenced. This allows the authors to obtain initial oligos that are free of errors, which may reduce labor and eliminate the need to verify the final synthesized DNA by sequencing. Oligos to be assembled are usually synthesized on a column, a method that yields accurate, homogeneous aliquots of molecules

[Mikkel Algire, Radha Krishnakumar and Chuck Merryman are at the J. Craig Venter Institute, Rockville, Maryland, USA. e-mail: [email protected] or [email protected]]
that can be easily combined. A much less expensive method is synthesis on a microarray. Oligos generated this way are typically removed from the array all at once, resulting in a pool of thousands of unique oligos that must be separated into smaller groups of about 4–20 before being assembled. This extra labor can be substantial because the separation and assembly of specific groups of oligos is frequently idiosyncratic, owing to factors related to oligo representation, length and quality6. Moreover, the array synthesis process is error prone, meaning that the oligo pool contains many contaminating molecules with the wrong sequences. Poor oligo quality has often resulted in an unacceptable number of errors in DNA assembled from microarray oligos. Thus, column-derived oligos have been preferred because of their reliability and uniformity, despite the extra expense. Nevertheless, it would be ideal if inexpensive oligo pools could be used, which would make sequencing, rather than oligos, the primary determinant of cost. Kosuri et al.4 and Matzas et al.5 tackle many of these subtle problems and provide improved methods for selecting subsets of oligos from complex pools produced on a microarray. Kosuri et al.4 use successive rounds of PCR to selectively amplify subpools of oligos. The pool of DNA that they obtain from the microarray contains 13,000 individual sequences (equivalent to almost 2.5 megabases of DNA), although they do not exploit the full synthetic capacity of the array (up to 55,000 sequences). Sufficient amplification specificity is achieved by drawing on a set of 240,000 orthogonal DNA probes that are designed to minimize cross-hybridization7. This specificity, in conjunction with 200-bp oligos, enables the authors to easily amplify subpools that contain only the material needed for an individual gene. The effectiveness of this approach is demonstrated by the successful assembly of 40 out of 42 DNA molecules encoding single-chain antibody fragments.
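The subpool-selection idea described above can be caricatured in code: each oligo carries subpool-specific flanking primer sites, and "amplification" pulls out only the oligos matching one primer pair. The primer and payload sequences below are invented for illustration and are far shorter than real oligos:

```python
# Toy model of subpool selection from a microarray oligo pool.
# All sequences are made up; real designs draw primers from large sets
# of orthogonal sequences chosen to minimize cross-hybridization.

SUBPOOL_PRIMERS = {
    "geneA": ("ACGTACGT", "TTGGCCAA"),
    "geneB": ("GGATCCAA", "CCTTAAGG"),
}

def make_oligo(subpool, payload):
    """An oligo is its payload flanked by subpool-specific primer sites."""
    fwd, rev = SUBPOOL_PRIMERS[subpool]
    return fwd + payload + rev

# The pool mixes oligos destined for different assemblies.
pool = [
    make_oligo("geneA", "AAAA"),
    make_oligo("geneA", "CCCC"),
    make_oligo("geneB", "GGGG"),
]

def amplify_subpool(pool, subpool):
    """Return payloads of oligos flanked by the subpool's primer pair,
    with the primer sites trimmed off (as after primer removal)."""
    fwd, rev = SUBPOOL_PRIMERS[subpool]
    return [o[len(fwd):-len(rev)] for o in pool
            if o.startswith(fwd) and o.endswith(rev)]

print(amplify_subpool(pool, "geneA"))  # payloads for geneA only
```

The essential property is that one primer pair retrieves exactly one subpool from a mixture of thousands, which is why primer orthogonality matters so much in the real system.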
The high GC content and repetitive sequences in these constructs make them difficult to synthesize, regardless of the oligo source or the DNA assembly method. Moreover, the authors provide two alternative but straightforward protocols for constructing genes from microarray-derived oligos. One protocol is more scalable, whereas the other is more accurate, with error rates as low as 1 in 7,170 bp. In the past there has been some success with similar methods6, but the ability to synthesize longer microarray oligos (up to ~200 bases) seems to have improved matters
[Figure 1 panels: microarray oligonucleotide synthesis; (left) sequence pool and pick beads; (right) amplify subpool by PCR; assemble oligos. Illustration: Katie Vicari.]
Two approaches for selecting oligonucleotides from complex mixtures improve the fidelity and scalability of DNA synthesis.
Figure 1 Assembly of user-defined DNA sequences from low-cost raw materials. New advances in the length, quality and processing of microarray oligonucleotides are lowering barriers to generating synthetic genes and genomes. Oligonucleotides synthesized on a microarray are less expensive than those made by column-based methods, but they contain more errors and must be recovered from a complex mixture. Kosuri et al.4 selectively amplify oligos from the pool using PCR (right). Matzas et al.5 use a high-throughput sequencer to identify molecules having the desired sequences and pick these directly off of the sequencing plate with a micropipette (left). Oligos from both methods are assembled into larger constructs.
considerably. Presumably, assembly is more robust because longer tracts of DNA are left after the amplification primers are removed. Thus, fewer starting fragments are needed to assemble each product. The method of Kosuri et al.4 is still subject to the error rate of input oligos, although errors are reduced by using new, more accurate microarrays and by performing an enzymatic error correction step. The article from Matzas et al.5 directly addresses the problem of errors in oligonucleotide sequences and the labor needed to deal with them. They use a 454 sequencing instrument to identify microarray oligos with the correct sequence and then recover these oligos from the sequencer. In essence, this provides cheap oligos preverified by cheap sequencing. 454 beads bearing correct sequences are located with microscope cameras and picked using a micropipette. The oligos are subsequently assembled into larger products. Because the DNA assembly step can be virtually error free (even during the construction of whole genomes1,2), presequenced and clonal oligos have the potential to produce near-perfect assemblies. Even though Matzas et al.5 find an error in one of the eight ~220-bp constructs made from their microarray, this error rate is likely to decrease upon further optimization. Although the starting oligos, polymerase fidelity and accuracy of 454 sequencing all contribute errors, the experiments indicate that major gains are to be had by improving bead localization and retrieval. Automated bead extraction is under development, and the authors hope to achieve recovery rates of two or three beads per minute. The error rate of the current system is estimated to reach about 1 in 21,000 bp. These papers demonstrate that the use of inexpensive microarray oligonucleotides to produce long synthetic DNA is becoming more practical.
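The error rates quoted above translate directly into the odds of assembling an error-free construct. A minimal sketch, assuming errors occur independently per base:

```python
def p_error_free(length_bp, error_rate_per_bp):
    """Probability that a construct of the given length contains no
    errors, treating per-base errors as independent."""
    return (1.0 - error_rate_per_bp) ** length_bp

# Error rates mentioned in the text: ~1 in 7,170 bp (Kosuri et al.) and
# an estimated ~1 in 21,000 bp (Matzas et al., after optimization).
for rate in (1 / 7170, 1 / 21000):
    for length_bp in (1_000, 10_000):
        p = p_error_free(length_bp, rate)
        print(f"rate 1/{round(1 / rate)}, {length_bp} bp: "
              f"P(error-free) = {p:.1%}, expected errors = {length_bp * rate:.2f}")
```

At 1 error in 7,170 bp, a 10-kb construct is expected to carry about 1.4 errors, which is why error correction or presequenced oligos matter so much for large assemblies.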
Assembly technology is also undergoing rapid development; for example, oligos produced by any method can be assembled into larger sequences by a fast and essentially labor-free method3. At the same time, projects involving intensive DNA synthesis are generating advances such as empirically determined codon-optimization rules8, new insights into host metabolism9 and the first microbe controlled by a synthetic genome2. It is reasonable to wonder how to make effective use of megabases of user-defined sequence. But with low-cost oligos and a simple means of assembling them, we can safely predict that our ability to understand and manipulate biology will only continue to improve, perhaps leading eventually to advances such as microbes engineered to be attenuated vaccines or to synthesize biofuels.
COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.

1. Gibson, D.G. et al. Science 319, 1215–1220 (2008).
2. Gibson, D.G. et al. Science 329, 52–56 (2010).
3. Gibson, D.G. et al. Nat. Methods 7, 901–903 (2010).
4. Kosuri, S. et al. Nat. Biotechnol. 28, 1295–1299 (2010).
5. Matzas, M. et al. Nat. Biotechnol. 28, 1291–1294 (2010).
6. LeProust, E.M. et al. Nucleic Acids Res. 38, 2522–2540 (2010).
7. Xu, Q. et al. Proc. Natl. Acad. Sci. USA 106, 2289–2294 (2009).
8. Welch, M. et al. PLoS ONE 4, e7002 (2009).
9. Warner, J. et al. Nat. Biotechnol. 28, 856–862 (2010).
No refuge for insect pests
Kongming Wu

The sterile insect technique offers an alternative to the refuge strategy for managing resistance to Bt toxins.

Transgenic cotton and corn expressing insecticidal proteins from the bacterium Bacillus thuringiensis (Bt) have been cultivated on >200 million ha worldwide over the past 15 years1, reducing the use of chemical insecticides and increasing farmers’ profits2,3. But the environmental and economic advantages of these crops are threatened by the tendency of insect pests to develop resistance to Bt toxins. A report in this issue by Tabashnik et al.4 demonstrates a new approach for suppressing the emergence of resistance. Field trials in Arizona over four growing seasons show that releasing sterile male pink bollworm moths, which can mate with resistant females, succeeded in almost completely eradicating a major cotton pest. The study establishes that sterile insect release is a viable alternative to the so-called refuge strategy for preventing the emergence of resistance and can enhance the sustainability of Bt crops. Continuous monoculture of crop varieties producing Bt toxins provides strong selective pressure for Bt-resistant insect pests (Fig. 1a). Indeed, although Bt crops remain effective against most targeted insect populations, several pests have evolved resistance5. The most promising strategy for delaying the emergence of resistance involves planting Bt crops in close proximity to ‘refuges’, which contain plants that do not express Bt toxin. Such refuges maintain populations of insects susceptible to Bt toxin, which are likely to mate with the rare, resistant insects. If the mode of inheritance of resistance is recessive, Bt plants will kill the hybrid progeny produced by such matings and thereby delay the evolution of Bt resistance6.
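Why recessive inheritance plus a refuge delays resistance can be seen in a toy Hardy-Weinberg model. This is a deliberately simplified sketch under my own assumptions (random mating, discrete generations, no fitness cost of resistance), not the simulation used by Tabashnik et al.:

```python
# Toy one-locus model of recessive Bt resistance under the refuge
# strategy: rr insects survive everywhere, while RS and SS insects
# survive only on the refuge fraction of plants.

def next_allele_freq(q, refuge_fraction):
    """Resistance allele frequency after one generation of selection."""
    p = 1.0 - q
    ss, rs, rr = p * p, 2.0 * p * q, q * q  # Hardy-Weinberg genotypes
    w_ss = w_rs = refuge_fraction           # killed on Bt plants
    w_rr = 1.0                              # resistant survive anywhere
    mean_w = ss * w_ss + rs * w_rs + rr * w_rr
    return (rr * w_rr + 0.5 * rs * w_rs) / mean_w

q = 1e-3  # initially rare resistance allele
for _ in range(20):
    q = next_allele_freq(q, refuge_fraction=0.2)
print(f"with a 20% refuge, q after 20 generations: {q:.5f}")

# With no refuge, only rr insects reproduce: resistance fixes at once.
print(f"with no refuge, q after 1 generation: {next_allele_freq(1e-3, 0.0)}")
```

With a 20% refuge the resistance allele barely increases over 20 generations, whereas with no refuge it fixes immediately because only resistant homozygotes survive; heterozygotes raised in the refuge continually dilute the resistance allele.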
Certain governments limit the proportion of crop that each farmer can plant to Bt Kongming Wu is at the State Key Laboratory for Biology of Plant Diseases and Insect Pests, Institute of Plant Protection, Chinese Academy of Agricultural Sciences, Beijing, China. e-mail: [email protected]
nature biotechnology volume 28 number 12 december 2010
crops and require a minimum proportion of non-Bt crop to serve as a refuge. For example, the United States and Australia mandate this approach for Bt cotton that produces only one Bt toxin5. Resistance monitoring data from these countries have suggested that the refuge strategy can significantly delay the evolution of insect resistance to Bt crops7. It is also beneficial to non-Bt crops in the refuges, which are exposed to fewer pests owing to the control exerted by neighboring Bt crops2. Moreover, as nontransgenic seed is cheaper, there is an economic incentive for farmers to adopt the refuge strategy. Despite the benefits of the refuge strategy, it has proved difficult to implement in developing countries such as China and India, primarily because of the challenges associated with training and monitoring millions of smallholder farmers. During 2009, China planted 3.7 million ha (~70% of total cotton production) and India planted 8.4 million ha (~90% of total cotton production) to Bt cotton. In certain cases, however, it has been put in place unintentionally. In China, the primary pest targeted by Bt cotton is Helicoverpa armigera, which also feeds on other crops, including corn, soybean, vegetables and peanuts. These nontransgenic host plants are frequently planted near Bt cotton and serve as refuges for H. armigera. Thus far, continuous resistance monitoring has not detected any resistance of H. armigera populations to Bt toxin7. In contrast to the situation with H. armigera in China, reliance on noncotton host plants as refuges is not an option for controlling pink bollworm (Pectinophora gossypiella), which feeds almost exclusively on cotton in some regions4. This pest has evolved resistance to Bt cotton producing one toxin in western India, where farmers did not follow regulations requiring them to plant non-Bt cotton refuges8. 
In Arizona, where pink bollworm is an alien species first detected about a century ago, the efficacy of Bt cotton has been sustained for 1273
© 2010 Nature America, Inc. All rights reserved.
news and views

considerably. Presumably, assembly is more robust because longer tracts of DNA are left after the amplification primers are removed. Thus, fewer starting fragments are needed to assemble each product. The method of Kosuri et al.4 is still subject to the error rate of input oligos, although errors are reduced by using new, more accurate microarrays and by performing an enzymatic error-correction step.

The article from Matzas et al.5 directly addresses the problem of errors in oligonucleotide sequences and the labor needed to deal with them. They use a 454 sequencing instrument to identify microarray oligos with the correct sequence and then recover these oligos from the sequencer. In essence, this provides cheap oligos preverified by cheap sequencing. 454 beads bearing correct sequences are located with microscope cameras and picked using a micropipette. The oligos are subsequently assembled into larger products. Because the DNA assembly step can be virtually error free (even during the construction of whole genomes1,2), presequenced and clonal oligos have the potential to produce near-perfect assemblies. Even though Matzas et al.5 find an error in one of the eight ~220-bp constructs made from their microarray, this error rate is likely to decrease upon further optimization. Although the starting oligos, polymerase fidelity and accuracy of 454 sequencing all contribute errors, the experiments indicate that major gains are to be had by improving bead localization and retrieval. Automated bead extraction is under development, and the authors hope to achieve recovery rates of two or three beads per minute. The error rate of the current system is estimated at about 1 in 21,000 bp.

These papers demonstrate that the use of inexpensive microarray oligonucleotides to produce long synthetic DNA is becoming more practical.
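The quoted error rates can be put in perspective with a back-of-the-envelope calculation. This is an illustrative sketch using only the numbers in the text, under the simplifying assumption that errors strike each base independently; it is not an analysis from either paper.

```python
# Illustrative only: how a per-base error rate translates into the fraction
# of error-free constructs, assuming independent per-base errors.

def error_free_fraction(per_base_error_rate: float, length_bp: int) -> float:
    """Probability that a construct of length_bp contains no errors."""
    return (1.0 - per_base_error_rate) ** length_bp

# Observed: one error across eight ~220-bp constructs, i.e. ~1 error per 1,760 bp.
observed = error_free_fraction(1 / (8 * 220), 220)
# Projected for the optimized system: ~1 error per 21,000 bp.
projected = error_free_fraction(1 / 21_000, 220)

print(f"error-free 220-bp constructs, observed rate:  {observed:.1%}")
print(f"error-free 220-bp constructs, projected rate: {projected:.1%}")
```

At the observed rate, roughly one construct in eight carries an error, whereas at the projected rate about 99% of 220-bp constructs would be error free.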
Assembly technology is also undergoing rapid development; for example, oligos produced by any method can be assembled into larger sequences by a fast and essentially labor-free method3. At the same time, projects involving intensive DNA synthesis are generating advances such as empirically determined codon-optimization rules8, new insights into host metabolism9 and the first microbe controlled by a synthetic genome2. It is reasonable to wonder how to make effective use of megabases of user-defined sequence. But with low-cost oligos and a simple means of assembling them, we can safely predict that our ability to understand and manipulate biology will only continue to improve, perhaps leading eventually to advances such as microbes engineered to be attenuated vaccines or to synthesize biofuels.
COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.

1. Gibson, D.G. et al. Science 319, 1215–1220 (2008).
2. Gibson, D.G. et al. Science 329, 52–56 (2010).
3. Gibson, D.G. et al. Nat. Methods 7, 901–903 (2010).
4. Kosuri, S. et al. Nat. Biotechnol. 28, 1295–1299 (2010).
5. Matzas, M. et al. Nat. Biotechnol. 28, 1291–1294 (2010).
6. LeProust, E.M. et al. Nucleic Acids Res. 38, 2522–2540 (2010).
7. Xu, Q. et al. Proc. Natl. Acad. Sci. USA 106, 2289–2294 (2009).
8. Welch, M. et al. PLoS ONE 4, e7002 (2009).
9. Warner, J. et al. Nat. Biotechnol. 28, 856–862 (2010).
No refuge for insect pests

Kongming Wu

The sterile insect technique offers an alternative to the refuge strategy for managing resistance to Bt toxins.

Transgenic cotton and corn expressing insecticidal proteins from the bacterium Bacillus thuringiensis (Bt) have been cultivated on >200 million ha worldwide over the past 15 years1, reducing the use of chemical insecticides and increasing farmers’ profits2,3. But the environmental and economic advantages of these crops are threatened by the tendency of insect pests to develop resistance to Bt toxins. A report in this issue by Tabashnik et al.4 demonstrates a new approach to suppressing the emergence of resistance. Field trials in Arizona over four growing seasons show that releasing sterile male pink bollworm moths, which can mate with resistant females, almost completely eradicated a major cotton pest. The study establishes that sterile insect release is a viable alternative to the so-called refuge strategy for preventing the emergence of resistance and can enhance the sustainability of Bt crops.

Continuous monoculture of crop varieties producing Bt toxins exerts strong selective pressure for Bt-resistant insect pests (Fig. 1a). Indeed, although Bt crops remain effective against most targeted insect populations, several pests have evolved resistance5. The most promising strategy for delaying the emergence of resistance involves planting Bt crops in close proximity to ‘refuges’ of plants that do not express Bt toxin. Such refuges maintain populations of insects susceptible to Bt toxin, which are likely to mate with the rare resistant insects. If the mode of inheritance of resistance is recessive, Bt plants will kill the hybrid progeny produced by such matings and thereby delay the evolution of Bt resistance6.
Kongming Wu is at the State Key Laboratory for Biology of Plant Diseases and Insect Pests, Institute of Plant Protection, Chinese Academy of Agricultural Sciences, Beijing, China. e-mail: [email protected]

Certain governments limit the proportion of crop that each farmer can plant to Bt
crops and require a minimum proportion of non-Bt crop to serve as a refuge. For example, the United States and Australia mandate this approach for Bt cotton that produces only one Bt toxin5. Resistance monitoring data from these countries suggest that the refuge strategy can significantly delay the evolution of insect resistance to Bt crops7. The strategy also benefits non-Bt crops in the refuges, which are exposed to fewer pests owing to the control exerted by neighboring Bt crops2. Moreover, as nontransgenic seed is cheaper, there is an economic incentive for farmers to adopt the refuge strategy.

Despite its benefits, the refuge strategy has proved difficult to implement in developing countries such as China and India, primarily because of the challenges of training and monitoring millions of smallholder farmers. During 2009, China planted 3.7 million ha (~70% of total cotton production) and India planted 8.4 million ha (~90% of total cotton production) to Bt cotton. In certain cases, however, refuges have arisen unintentionally. In China, the primary pest targeted by Bt cotton is Helicoverpa armigera, which also feeds on other crops, including corn, soybean, vegetables and peanuts. These nontransgenic host plants are frequently planted near Bt cotton and serve as refuges for H. armigera. Thus far, continuous resistance monitoring has not detected any resistance of H. armigera populations to Bt toxin7.

In contrast to the situation with H. armigera in China, reliance on noncotton host plants as refuges is not an option for controlling pink bollworm (Pectinophora gossypiella), which feeds almost exclusively on cotton in some regions4. This pest has evolved resistance to Bt cotton producing one toxin in western India, where farmers did not follow regulations requiring them to plant non-Bt cotton refuges8.
In Arizona, where pink bollworm is an alien species first detected about a century ago, the efficacy of Bt cotton has been sustained for
more than a decade. From 1996 to 2005, farmers who grew Bt cotton also planted non-Bt cotton in compliance with the refuge strategy, and the susceptibility of pink bollworm to Bt did not decrease4. A disadvantage of this approach, however, is that populations of the insect pest must be maintained.

Tabashnik et al.4 report that superimposing the sterile insect technique (SIT) on Bt cotton offers a compelling alternative to the refuge strategy while helping to eradicate pink bollworm. The authors’ program, which spanned ~100,000 ha in Arizona from 2006 to 2009, showed that susceptibility to Bt cotton did not decrease, pink bollworm population density declined dramatically and insecticide sprays against this pest were eliminated.

The SIT, a method of pest control based on area-wide inundative releases of sterile insects to reduce the fertility of a field population of the same species, was first developed in the United States more than 50 years ago9. The repeated release of sterile insects in numbers 10- to 100-fold the size of the native population can eventually drive the native population to extinction if the majority of native females mate with sterile males. Although the SIT has succeeded in some cases, such as the screwworm fly (Cochliomyia hominivorax) eradication program, its application has been limited9. One challenge has been the cost of generating a sufficiently high ratio of sterile to wild insects at the start of an SIT program (Fig. 1b). Using another control method to lower the pest’s population density can make it easier to attain a suitable ratio, but intensive use of conventional insecticides raises concerns about harm to nontarget organisms, including people. The report of Tabashnik et al.4 reveals how the SIT and Bt transgenic technology can be used in a complementary manner (Fig. 1c).
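The release-ratio arithmetic behind the SIT can be illustrated with Knipling's classic model. This is a deliberately minimal sketch, not the simulation used by Tabashnik and colleagues: it assumes random mating and equally competitive sterile males, so that only a fraction N/(N + S) of matings is fertile when S sterile males are released into a wild population of size N.

```python
# Minimal Knipling-style SIT model (an illustrative simplification, not the
# authors' simulation): with constant releases of sterile males, the fertile
# fraction of matings is N / (N + S), so the wild population shrinks whenever
# the release ratio outpaces the pest's intrinsic growth rate.

def next_generation(n_wild: float, n_sterile: float, growth_rate: float) -> float:
    """Wild population after one generation under random mating."""
    fertile_fraction = n_wild / (n_wild + n_sterile)
    return growth_rate * n_wild * fertile_fraction

def generations_to_suppress(n0: float, release: float, growth_rate: float,
                            threshold: float = 1.0, max_gen: int = 100) -> int:
    """Generations of constant sterile releases needed to push the wild
    population below threshold (returns max_gen if never reached)."""
    n = n0
    for gen in range(1, max_gen + 1):
        n = next_generation(n, release, growth_rate)
        if n < threshold:
            return gen
    return max_gen

# A 10-fold excess of sterile males overwhelms a pest with a 5-fold
# per-generation growth potential within a handful of generations.
print(generations_to_suppress(n0=1e6, release=1e7, growth_rate=5.0))
```

Because each generation of suppression makes the sterile-to-wild ratio still more lopsided, the decline accelerates — which is why pre-suppressing the population with Bt cotton makes a fixed release program so much more effective.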
On the one hand, expression of the Bt toxin suppresses the native insect population enough to jump-start the efficacy of the SIT while increasing its economic feasibility. On the other hand, when female moths that have evolved resistance to the Bt toxin mate with sterile insects, the failure to produce fertile progeny delays the establishment of resistance, even when resistance is dominantly inherited. More important, the combination of the SIT and Bt crops allows farmers to reduce or eliminate planting of non-Bt refuges. Because mating with sterile insects does not produce fertile progeny, the approach could delay pest resistance to Bt crops based on either dominant or recessive inheritance (Fig. 1c).

The stunning success of the Arizona program can be attributed to several favorable factors, including long-term public
[Figure 1 Use of the SIT together with transgenic cotton expressing the Bt transgene suppresses the growth of the pink bollworm population and facilitates management of resistance to Bt toxin. Pink bollworm feeds only on cotton bolls and does not damage other tissues. (a) Sustainable use of Bt cotton to control pink bollworm populations is threatened by the emergence of resistance. (b) Although costly, repeated release of sterile pink bollworm moths (red) in vast excess of the number of wild moths (brown) can suppress the growth of pink bollworm populations. (c) Combined use of Bt cotton and the SIT ensures that the release of fewer sterile moths can suppress the growth of pink bollworm populations while preventing the emergence of resistance to Bt toxin. Illustration: Katie Vicari]
investment in SIT against pink bollworm. Since 1967, sterile pink bollworm moths have been released over cotton fields in the San Joaquin Valley of California to prevent establishment of this pest by moth immigration from southern California9. The Arizona program also benefits from the extremely high efficacy of Bt cotton against pink bollworm. Results from Arizona show that Bt cotton producing either one or two Bt toxins kills virtually 100% of susceptible pink bollworm larvae4. Coordinated contributions from cotton farmers, the US Department
of Agriculture and university scientists have also been critical.

Despite the impressive results reported by Tabashnik et al.4 and the potential of this combinatorial strategy for many situations, several alternatives to conventional Bt crops also show considerable promise. For instance, new insect-resistant transgenic crops may decrease native insect populations more efficiently than current Bt crops; these include crops producing proteins modified to counteract resistance development or expressing two or more distinct toxins targeting the same pest species10. In another approach, the US Environmental Protection Agency has approved sales of mixtures of corn seeds with and without Bt toxins that kill corn rootworms, which ensures that farmers comply with the refuge strategy; however, its use is limited to pest species whose larvae do not migrate between plants. Finally, there are also transgenic strategies to improve SIT efficiency by expressing dominant lethal genes, promoting refractoriness of the sterile insects to disease or facilitating strain marking, genetic sexing and molecular sterilization11. There thus seems to be considerable scope to further enhance the sustainability of Bt crops and to develop new strategies for integrated pest management.

COMPETING FINANCIAL INTERESTS
The author declares no competing financial interests.

1. James, C. Global Status of Commercialized Biotech/GM Crops: 2009 (ISAAA 41, Ithaca, New York, USA, 2009).
2. Hutchison, W. et al. Science 330, 222–225 (2010).
3. Wu, K. et al. Science 321, 1676–1678 (2008).
4. Tabashnik, B. et al. Nat. Biotechnol. 28, 1304–1307 (2010).
5. Tabashnik, B.E. et al. J. Econ. Entomol. 102, 2011–2025 (2009).
6. Gould, F. Annu. Rev. Entomol. 43, 701–726 (1998).
7. Liu, C. et al. Sci. China Life Sci. 53, 934–941 (2010).
8. Bagla, P. Science 327, 1439 (2010).
9. Klassen, W. & Curtis, C. in Sterile Insect Technique (eds. Dyck, V., Hendrichs, J. & Robinson, A.) 3–36 (Springer, The Netherlands, 2005).
10. Tabashnik, B. Science 330, 189–190 (2010).
11. Alphey, N., Bonsall, M.B. & Alphey, L. J. Econ. Entomol. 102, 717–732 (2009).

Nanoparticles in the lung

Wolfgang G Kreyling, Stephanie Hirn & Carsten Schleh

An imaging study begins to define the parameters that control the biodistribution of nanoparticles after pulmonary delivery.

Wolfgang G. Kreyling, Stephanie Hirn and Carsten Schleh are at the Comprehensive Pneumology Center, Institute of Lung Biology and Disease, and Focus Network Nanoparticles and Health, Helmholtz Zentrum München, German Research Center for Environmental Health, Neuherberg/Munich, Germany. e-mail: [email protected]

Little is known about the fate of nanoparticles that enter the lungs, either deliberately through medical treatments1 or incidentally through air pollution2,3 and occupational exposure in the workplace. As inhaled nanoparticles can cross the air-blood barrier into the circulation and accumulate in secondary organs and tissues, a biokinetic analysis would be the first step in a dose-response study as part of a comprehensive risk assessment (Fig. 1). In this issue, Frangioni, Tsuda and colleagues4 undertake such a biokinetic analysis in rats, using near-infrared imaging to follow the fate of intratracheally instilled nanoparticles that are varied systematically in size, surface modification and core composition. The authors study the parameters that control whether nanoparticles remain in the lung, are transported to regional lymph nodes and the bloodstream or are cleared from the body through the kidneys. Their findings should benefit several areas of research, including the design of nanoparticles for drug delivery and the evaluation of the toxicity of particulate air pollution.

[Figure 1 Schematic of nanoparticle translocation from the lung epithelium to regional lymph nodes and blood circulation. Once circulating in blood, nanoparticles may accumulate in various secondary organs of the body. The study by Frangioni, Tsuda and colleagues4 highlights two important pathways of nanoparticle biodistribution: lymphatic drainage toward regional lymph nodes and renal clearance from blood to urine. AM, alveolar macrophages. Illustration: Katie Vicari]

The lungs represent a promising route for drug delivery because of their accessible, large surface area and because the air-blood barrier is rather thin1. After crossing the air-blood barrier, inhaled nanoparticles have been shown to reach detectable levels in the blood circulation and to enter specific cells and tissues5 (Fig. 1). Translocation and accumulation in tissues seem to depend on the physicochemical properties of the nanoparticle core and surface5–7. Thus far, however, few studies have described nanoparticle transport across the lungs quantitatively5,6.

Frangioni, Tsuda and colleagues4 find that noncationic nanoparticles smaller than ~34 nm in diameter that do not bind serum proteins reach the regional lymph nodes within 30 min. Nanoparticles larger than ~34 nm are consistently retained in the lungs. When the diameter falls below ~6 nm and the surface charge is zwitterionic, about half of the nanoparticles rapidly enter the bloodstream from the alveolar airspaces and are mostly cleared from the body by means of renal filtration. The authors also show that nanoparticle behavior depends strongly on the surface coating, which affects binding to proteins in body fluid.

Biodegradable nanoparticles are currently considered the best choice for targeted drug delivery, but biopersistent nanoparticles might also be used if they are cleared from the body quickly enough to minimize exposure and retention8. As demonstrated previously by the Frangioni laboratory, intravenously administered nanoparticles can be rapidly excreted through the kidneys if their physicochemical properties are carefully chosen9,10. Efficient renal clearance represents one elegant way of keeping the accumulation, and hence the dose to secondary organs, low9,10. The present study4 indicates how renal clearance could be achieved following deposition in the lungs.

The new work4 also defines the requirements for targeting nanoparticles to regional lymph nodes, which may be useful for therapies directed to the immune system. However, further quantitative investigation is needed to differentiate between transport to the blood through lymphatic drainage and direct translocation across the air-blood barrier. Additional challenges will arise in extrapolating from the rather simple nanoparticles used in this study to highly functionalized nanosystems bearing targeting, therapeutic and diagnostic molecules. Finally, the results of Frangioni, Tsuda and colleagues4 may help in understanding the
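The size and charge thresholds reported for intratracheally delivered nanoparticles can be condensed into a simple decision rule. This is a hypothetical summary written for illustration only: the study varied size, coating and core composition continuously, and the function name and categories below are not from the paper.

```python
# Hypothetical condensation of the reported size/charge trends into a lookup
# (illustration only; the study examined many intermediate cases).

def predicted_fate(diameter_nm: float, zwitterionic: bool,
                   cationic: bool, binds_serum_proteins: bool) -> str:
    """Rough expected fate of an intratracheally delivered nanoparticle."""
    if diameter_nm > 34:
        return "retained in lungs"
    if diameter_nm < 6 and zwitterionic:
        # ~half of such particles rapidly enter the bloodstream,
        # then are mostly cleared by renal filtration
        return "bloodstream, then renal clearance"
    if not cationic and not binds_serum_proteins:
        return "regional lymph nodes within ~30 min"
    return "uncertain (retention or slow translocation)"

print(predicted_fate(100, zwitterionic=False, cationic=False, binds_serum_proteins=False))
print(predicted_fate(5, zwitterionic=True, cationic=False, binds_serum_proteins=False))
print(predicted_fate(20, zwitterionic=False, cationic=False, binds_serum_proteins=False))
```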
health effects of air pollution, as the nanoparticles studied may share common properties with the nano-fraction of particulate air pollutants. Most previous work in this field has used inhalation of nanoparticle aerosols as the ‘gold standard’ method of exposure because the particles deposit throughout the entire bronchiolar and alveolar epithelium in a physiological manner. However, such studies are rather elaborate and expensive, and the deposited nanoparticle dose is usually difficult to determine. Intratracheal instillation of small volumes of nanoparticle suspension, as used in the present study4, is widely accepted as an alternative method of delivery to the lungs. Although the dose administered by this method is well controlled, lung deposition is less uniform and far from physiological7. Moreover, the dose is frequently high in order to observe adverse short-term effects.

The daily dose of inhaled nanoparticles in humans is limited by a maximal nanoparticle aerosol concentration of 10^12 m^-3 (beyond that, coagulation rapidly changes the concentration within one minute under ambient air conditions), and the daily volume of air inhaled by an adult is ~15 m^3. Assuming a nanoparticle deposition probability of 0.3, the daily deposited dose amounts to ~5 × 10^12 nanoparticles. Frangioni, Tsuda and colleagues4 used doses of ~10^15 nanoparticles per animal, which far exceed daily human doses to the lungs. Their results must therefore be interpreted with caution when extrapolating to the effects of particulate air pollutants in humans.

In the context of a field that lacks elementary knowledge about the short- and long-term transport of nanoparticles delivered to the lungs, the present study4 represents an important advance. Ultimately, a comprehensive mechanistic understanding of the interactions of nanoparticles with biomolecules and proteins in body fluids or with cellular membranes, and of their biokinetics across the air-blood barrier, the blood-brain barrier, the placenta and other barriers, can be expected to allow predictable, safe and sustained application of nanoparticles in biomedicine.

COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.

1. Mansour, H.M., Rhee, Y.S. & Wu, X. Int. J. Nanomedicine 4, 299–319 (2009).
2. Oberdörster, G., Oberdörster, E. & Oberdörster, J. Environ. Health Perspect. 113, 823–839 (2005).
3. Maynard, A.D. et al. Nature 444, 267–269 (2006).
4. Choi, H.S. et al. Nat. Biotechnol. 28, 1300–1303 (2010).
5. Semmler-Behnke, M. et al. Small 4, 2108–2111 (2008).
6. Geiser, M. & Kreyling, W. Part. Fibre Toxicol. 7, 2 (2010).
7. Oberdörster, G. J. Intern. Med. 267, 89–105 (2010).
8. Kostarelos, K. Nanomed. 5, 341–342 (2010).
9. Choi, H.S. et al. Nat. Nanotechnol. 5, 42–47 (2010).
10. Choi, H.S. et al. Nat. Biotechnol. 25, 1165–1170 (2007).
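The daily-dose estimate quoted earlier follows directly from the stated numbers; this sketch simply reproduces that arithmetic using only the figures given in the text.

```python
# Reproducing the text's back-of-the-envelope human dose estimate.

aerosol_concentration = 1e12  # max ambient nanoparticle concentration, per m^3
daily_inhaled_volume = 15.0   # m^3 of air inhaled per day by an adult
deposition_probability = 0.3  # assumed fraction of inhaled particles deposited

daily_deposited = aerosol_concentration * daily_inhaled_volume * deposition_probability
print(f"daily deposited dose: ~{daily_deposited:.1e} nanoparticles")

# The instilled experimental dose (~1e15 per animal) exceeds this estimate
# by more than two orders of magnitude.
print(f"experimental-to-daily ratio: ~{1e15 / daily_deposited:.0f}x")
```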
volume 28 number 12 december 2010 nature biotechnology
research highlights
Rett’s-like neurons from iPS cells

Induced pluripotent stem (iPS) cells offer the possibility of creating disease-specific cells that model pathology and can be used for screening therapies. Using skin biopsies from Rett syndrome patients, Marchetto et al. create iPS cells, from which they derive both neuronal progenitor cells and mature neurons. Whereas the progenitor cells appear normal, the Rett’s-like neurons are both morphologically and functionally abnormal. This recapitulates the onset of disease, as symptoms don’t appear until as long as 18 months after birth (when progenitors have differentiated). The authors show that knockdown of methyl-CpG–binding protein 2 (MeCP2), which is mutated in Rett syndrome patients, leads to cells with fewer synapses than neurons derived from iPS cells generated from normal individuals. Finally, the application of two drugs—insulin-like growth factor, which reverses the phenotype in a Rett’s mouse model, and gentamicin, which promotes read-through of premature stop codons—partially rescues the Rett’s-like neurons. By staining cell populations for proteins involved with X inactivation—of interest as MeCP2 is X linked—the authors could select iPS cells that had two active X chromosomes. Even so, during neuronal differentiation, re-inactivation of the X chromosomes was not random, suggesting the presence of some remnant of X inactivation after reprogramming. (Cell 143, 527–539, 2010) LD
Personalizing epigenetic therapy
Unlike the genetic changes that contribute to the progression of liver cancer, the epigenetic modifications responsible for driving the development of the disease can be reversed by pharmacological intervention. Nonetheless, there is little understanding of the factors that influence whether the therapeutic effects of broad-spectrum chemical inhibitors of DNA methylation (e.g., reactivation of epigenetically silenced tumor suppressor genes) override any detrimental consequences. Working with the cytidine analog zebularine, Anderson et al. use transcriptomic and epigenomic profiling to reveal a response signature that classifies liver cancer cell lines as being either sensitive or resistant to the drug. The ability of zebularine to promote apoptosis in sensitive cell lines correlates with reduced growth and metastasis of drug-sensitive tumors and prolonged survival of mice bearing xenografts of sensitive human tumors. In contrast, the drug increased tumor growth rates and decreased survival rates of mice bearing resistant xenografts—consistent with zebularine’s ability to upregulate oncogenic pathways in cell lines predicted to be resistant to the drug. The ability of the signature to predict clinical outcome with 84–96% accuracy in a relatively small cohort of liver cancer patients suggests its value for identifying individuals most likely to benefit from methyltransferase inhibitors. (Sci. Transl. Med. 2, 54ra77, 2010) PH
Written by Kathy Aschheim, Laura DeFrancesco, Markus Elsner, Peter Hare & Craig Mak

The ’omics of protein–small molecule interactions
Metabolites are important regulators of many proteins. Nonetheless, the systematic analysis of small molecule–protein interactions has lagged
behind efforts to characterize, for example, the entirety of protein–protein or protein–DNA interactions. Now, Li et al. perform a large-scale analysis of hydrophobic metabolites that bind the enzymes of the ergosterol synthesis pathway and protein kinases in yeast. They affinity purify tagged versions of proteins of interest and identify protein-bound molecules by mass spectrometry after methanol extraction. In the ergosterol pathway, the authors uncover many previously unknown interactions between intermediate products and enzymes at other levels of the synthetic cascade. This suggests more integrated control of sterol synthesis than previously appreciated, although a detailed functional analysis of the significance of the interactions was not performed. Among the 103 kinases studied, 21 bind a total of ten different metabolites. The authors go on to demonstrate that the binding of ergosterol stabilizes Ssk22 and activates the highly conserved Ypk1 kinase. It seems likely that the approach could be adapted for hydrophilic molecules. (Cell 143, 639–650, 2010) ME
Efficient discovery of rare genetic variants
With the sequencing of single genomes now commonplace, efforts are shifting to gather data from many individuals. The resulting increase in statistical power should identify rare genetic variants, which may underlie disease. Initial results toward this goal are reported in Nature by the 1000 Genomes Project Consortium, with analytic algorithms detailed in Genome Research. Three pilot sequencing efforts demonstrate efficient uses of sequencing to survey genetic variation. First, by reducing the average number of times any given region of the genome was sequenced, called ‘low coverage’ sequencing, the project analyzed 179 genomes. This identified most common single-nucleotide variants present in >5% of individuals and many less-common variants. Second, targeted sequencing of a subset of the genome allowed 8,140 exons to be analyzed across 697 individuals. Third, sequencing related individuals—in this case a mother, father and child from two families—enabled de novo germline mutations to be identified. In Science, Sudmant et al. used single, unique nucleotides found within repetitive regions of the genome to discover remarkable plasticity in copy number variation in duplicated genes, thereby making such genes amenable to genetic association studies. (Nature 467, 1061–1073, 2010; Genome Res., published online 27 October 2010, doi:10.1101/gr.1123267.110, doi:10.1101/gr.111120.110, doi:10.1101/gr.113084.110; Science 330, 641–646, 2010) CM
lincRNAs in reprogramming
The reprogramming of somatic cells to induced pluripotent stem cells (iPSCs) is accompanied by widespread changes in gene expression and epigenetic marks. A recent study by Loewer et al. explores whether it also involves changes in the expression of long intervening noncoding RNAs (lincRNAs), a class of RNAs with diverse roles, including regulation of gene expression and of the epigenome. The authors began by comparing the expression of ~900 lincRNAs in human fibroblasts, in iPSCs derived from the fibroblasts and in human embryonic stem cells. This analysis identified 133 upregulated and 104 downregulated lincRNAs in the pluripotent cells compared with the fibroblasts. Twenty-eight of the upregulated lincRNAs were expressed more highly in iPSCs than in embryonic stem cells, suggesting a possible involvement in reprogramming. To identify a more generic association with reprogramming, the authors looked for lincRNAs upregulated in iPSCs derived both from fibroblasts and from CD34+ cells. This approach yielded ten candidate lincRNAs, one of which was shown experimentally to influence reprogramming. (Nat. Genet., published online 7 November 2010; doi:10.1038/ng.710) KA
Analysis
Large-scale in silico modeling of metabolic interactions between cell types in the human brain
Nathan E Lewis1, Gunnar Schramm2,5, Aarash Bordbar1, Jan Schellenberger3, Michael P Andersen1, Jeffrey K Cheng1, Nilam Patel1, Alex Yee1, Randall A Lewis4, Roland Eils2,5, Rainer König2,5 & Bernhard O Palsson1

Metabolic interactions between multiple cell types are difficult to model using existing approaches. Here we present a workflow that integrates gene expression data, proteomics data and literature-based manual curation to model human metabolism within and between different types of cells. Transport reactions are used to account for the transfer of metabolites between models of different cell types via the interstitial fluid. We apply the method to create models of brain energy metabolism that recapitulate metabolic interactions between astrocytes and various neuron types relevant to Alzheimer’s disease. Analysis of the models identifies genes and pathways that may explain observed experimental phenomena, including the differential effects of the disease on cell types and regions of the brain. Constraint-based modeling can thus contribute to the study and analysis of multicellular metabolic processes in the human tissue microenvironment and provide detailed mechanistic insight into high-throughput data analysis.

Constraint-based reconstruction and analysis of genome-scale microbial metabolic networks has matured over the past decade. This approach has a wide range of applications, such as providing insight into evolution, aiding in metabolic engineering, and providing a mechanistic bridge between genotypes and complex phenotypes1,2. Computational methods3 and detailed standard operating procedures4 have been outlined for the reconstruction of high-quality prokaryotic metabolic networks, and many methods can be deployed for their analysis5,6. Constraint-based modeling of metabolism entered a new phase with the publication of the human metabolic network (Recon 1)7, based on build 35 of the human genome.
Methods allowing tissue-specific model construction have followed8–10. Many tissue metabolic functions rely on interactions between cell types. Thus, methods are needed that integrate the metabolic activities of multiple cell types. Here, using Recon 1, we analyze and integrate ’omics data with information from detailed biochemical studies to build multicellular, constraint-based models of metabolism. We demonstrate this process by constructing and analyzing models of human brain energy metabolism, with an emphasis on central metabolism and mitochondrial function in astrocytes and neurons. Moreover, we provide three detailed examples, demonstrating the use of models to guide experimental work and provide biological insight into the metabolic mechanisms underlying physiological and pathophysiological states in the brain.

1Department of Bioengineering, University of California, San Diego, La Jolla, California, USA. 2Department of Bioinformatics and Functional Genomics, Institute of Pharmacy and Molecular Biotechnology and Bioquant, University of Heidelberg, Heidelberg, Germany. 3Bioinformatics Program, University of California, San Diego, La Jolla, California, USA. 4Department of Economics, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA. 5Department of Theoretical Bioinformatics, German Cancer Research Center (DKFZ), Heidelberg, Germany. Correspondence should be addressed to B.O.P. ([email protected]). Published online 21 November 2010; doi:10.1038/nbt.1711

RESULTS
Building metabolic models of multiple cell types
‘Omics data sets can be difficult to analyze owing to their size. However, such data sets can be used to construct large mechanistic models for specific tissues and cell types8,9 that serve as a context for further analysis. The workflow for generating multicellular models (Fig. 1) consists of the following four steps.

Step 1. Reconstruct a metabolic network for the organism of interest from genome annotation, lists of biomolecular components and the literature4. Metabolic pathways and associated gene products are not completely known for any species. Thus, a reconstruction is refined through iterations of manual curation, hypothesis generation, experimental validation and incorporation of new knowledge. Recon 1 has been through five iterations7.

Step 2. Identify reactions specific to the tissue of interest. Many gene products are not expressed in all cells at any given time11. Therefore, the presence of a gene product, inferred from ’omics data, such as gene expression or mass spectrometry, is mapped to the metabolic network using associations linking genes to proteins and to reactions. This process may be performed manually or algorithmically8,9.

Step 3. Partition the reconstruction, using manual curation of the literature, into compartments representing different cell types and organelles.
The different cell types are linked with transport reactions, as supported by literature and experimental data. Initial context-specific reconstructions are incomplete and may contain false positives owing to contamination from proximal tissue. Moreover, few high-throughput data sets are cell-type specific. Thus, the initial reconstruction represents the union of metabolic networks from various cell types. After the reconstruction is partitioned using manual curation, it is converted into a model by specifying inputs, outputs and relevant parameters, and by representing the network mathematically12. Details of proper manual curation have been described4.

Step 4. Simulation and analysis1,2. Once the network is accurately reconstructed and converted into an in silico model, it is used to
[Figure 1 schematic, panels: Step 1, build species-specific reconstruction (genome, literature); Step 2, find context-specific subnetwork (experimental data); Step 3, construct a manually curated model (compartmentalization into endothelium/blood, astrocyte, cholinergic/GABAergic/glutamatergic neurons, mitochondria and interstitial space); Step 4, simulation and analysis for predictions and biological insight, including (i) gene identification, (ii) ’omics data analysis and (iii) physiological data analysis.]
generate hypotheses and to obtain insight into systems-level biological functions. In contrast to previous approaches that reconstructed tissue-specific models8–10, the approach described here accounts for interactions between cell types within the tissue. First, published experimental results (e.g., immunohistochemistry staining of brain tissue11) are used to refine the reconstructed network into individual cell type networks. Subsequently, interactions between these cell types are included as transporters that transfer metabolites between the different cell type models. This process results in a tissue-specific model that accounts for metabolic interactions between cells and the separation of metabolic functions into the relevant cell types. The resulting models thus provide a more accurate view into tissue metabolism. Previous work had not taken these steps to generate multicellular tissue-specific models of this scale. This workflow was used to build three different multicellular models of brain energy metabolism. Each model represents one canonical neuron type (that is, glutamatergic, γ-aminobutyrate (GABA)ergic or cholinergic), its interactions with the surrounding astrocytes and the transport of metabolites through the blood-brain barrier (Fig. 2). This reconstruction focuses on the core of cerebral energy metabolism, including central metabolism, mitochondrial metabolic pathways, and pathways relevant to anabolism and catabolism of the neurotransmitters glutamate, GABA and acetylcholine. Briefly, we began with the manually curated human metabolic reconstruction Recon 1 (ref. 7) (step 1).
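At simulation time (step 4), a constraint-based model of this kind reduces to a linear program: choose fluxes v that satisfy steady-state mass balance S·v = 0 and flux bounds while maximizing an objective flux. A minimal sketch on a hypothetical three-reaction chain (the network, bounds and objective here are illustrative, not taken from the brain models):

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance problem for a hypothetical 3-reaction chain
# (uptake -> A, A -> B, B -> ATP demand); NOT the authors' brain models.
# Columns: v1 (uptake), v2 (conversion), v3 (ATP demand).
S = np.array([
    [1, -1,  0],   # metabolite A: produced by v1, consumed by v2
    [0,  1, -1],   # metabolite B: produced by v2, consumed by v3
], dtype=float)

# Maximize v3 (linprog minimizes, so negate the objective).
c = [0, 0, -1]
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
atp_flux = res.x[2]
print(atp_flux)  # limited by the uptake bound
```

At steady state the whole chain must carry equal flux, so the optimum is pinned by the uptake constraint; genome-scale models differ only in the size of S and the richness of the bounds.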
Next, we extracted relevant brain-specific reactions by mapping them to proteins expressed or localized to the brain as reported in the Human Protein Reference Database (release 5)13, H-inv (version 4.3) (HINV)14 or The Human Proteome Organisation (HUPO) brain proteome project15; additional reactions were added as dictated by biochemical data from the literature (step 2). Reactions and pathways were manually
Figure 1 A workflow for bridging the genotype–phenotype gap with the use of high-throughput data and manual curation for the construction of multicellular models of metabolism. The models can be used to (i) predict disease-associated genes, such as glutamate decarboxylase; (ii) analyze high-throughput data in the network context to identify sets of genes that change together and affect specific pathways, such as the brain-region-specific suppression of central metabolism in Alzheimer’s disease patients; (iii) analyze physiological data in the context of the model, thereby enabling, for example, the identification of tissue properties relevant to disease treatment, such as the calculation of the percentage of the brain that is cholinergic.
curated to verify their presence in the human brain, to determine cell-type specificity and to add reactions unique to the different neuron types (step 3) (see Online Methods for complete details). Thus, the three models contain the high-flux pathways and important reactions in neuron and astrocyte metabolism. To our knowledge, these models currently represent the largest and most detailed models of brain energy metabolism16–18 (Supplementary Notes). Our models of glutamatergic, GABAergic and cholinergic neurons contain 1,066, 1,067 and 1,070 compartment-specific interactions (that is, reactions, transformations and exchanges), respectively, involving 983, 983 and 987 compartment-specific metabolites. In total, the three models are associated with 403 genes. Lists of reactions, genes, citations and parameters used to constrain the models are detailed in Supplementary Tables 1–5. The validity of these models is demonstrated through comparisons to physiological data. Specifically, our models predict ATP production rates within 8% of the average published value, and predicted internal fluxes are consistent with experimentally measured values (Supplementary Notes). Moreover, three analyses using the models are detailed here (step 4). Most of these analyses cannot be done on previous brain models or on Recon 1 (ref. 7) as published. Thus, our models can provide novel insight into brain energy metabolism.

Identifying genes behind cell type–specific metabolic phenotypes
Alzheimer’s disease is characterized by dementia and is diagnosed postmortem by histopathological features such as neurofibrillary tangles and β-amyloid plaques. Notably, metabolic rates of various brain regions decrease years before the onset of dementia19. Experiments have suggested that glutamatergic and cholinergic neurons are more affected in moderate stages of Alzheimer’s disease20, whereas most GABAergic cells

Figure 2 General structure of the models.
Three models were built from the brain reconstruction. Each model consists of various compartments including the endothelium/blood, astrocytes, astrocytic mitochondria, neurons, neuronal mitochondria and an interstitial space between the cell types. Each neuron metabolic network was tailored to represent a specific neuron type, containing genes and reactions generally accepted to be unique to the neuron type. Mito, mitochondrion; Int, interstitial space; CMR, cerebral metabolic rate.
are relatively unaffected until later stages21. Genes responsible for these differences are not known. Thus, we first analyzed the models to identify genes that potentially underlie these cell type–specific effects. Analysis of our models yielded results consistent with known metabolic changes in Alzheimer’s disease. Several central metabolic enzymes exhibit altered expression or activity in Alzheimer’s disease, such as pyruvate dehydrogenase (PDHm), α-ketoglutarate dehydrogenase (AKGDm) and cytochrome c oxidase22–24. The activities of these enzymes are affected by the Alzheimer’s disease–related proteins β-amyloid and Tau kinase25,26. In silico, as the activities of PDHm and cytochrome c oxidase decrease, neurons demonstrate impaired metabolic capacity (Supplementary Notes and Fig. 1a,b therein), and deficiencies in PDHm activity lead to a decreased cholinergic neurotransmission capacity (Supplementary Notes and Fig. 1c therein). The in vitro AKGDm activity shows the greatest impairment in brains
[Figure 3, panels a–h: kernel density plots of feasible fluxes under normal and AKGDm-inhibited conditions for glutamatergic, GABAergic and cholinergic neurons; diagrams of the TCA cycle and GABA shunt; and GADNMN expression by brain region (EC, HIP, MTG, PC, SFG, VCX).]
from individuals with Alzheimer’s disease examined postmortem (57% decrease compared to normal)23. An in silico analysis shows that this deficiency impairs the metabolic rate in glutamatergic and cholinergic neurons (Fig. 3a) because it limits oxidative phosphorylation capacity in these neurons (Fig. 3b,c). Such impairment of oxidative phosphorylation leads to neuronal apoptosis27. However, oxidative phosphorylation is not impaired in the GABAergic neuron model (Fig. 3d), consistent with the different phenotypes of GABAergic compared with glutamatergic and cholinergic neurons. Therefore, the models were further interrogated to identify a mechanism that allows GABAergic neurons to absorb the perturbation, thereby leading to the cell type–specific effects. Simulations show that GABAergic neurons absorb the AKGDm perturbation through the GABA shunt (Fig. 3e,f), a pathway that uses 4-aminobutyrate transaminase and succinate-semialdehyde
Figure 3 Decrease in AKGDm activity associated with Alzheimer’s disease (AD) shows cell-type and regional effects in silico consistent with experimental data. (a–e) Kernel density plots show the distribution of feasible fluxes for various reactions. An in silico reduction of AKGDm flux from normal activity (solid lines) to Alzheimer’s disease brain activity (dashed) decreases the oxidative metabolic rate for glutamatergic and cholinergic neurons, but not GABAergic neurons (a). This results from a decrease in the feasible fluxes for oxidative phosphorylation (e.g., cytochrome c oxidase) for both glutamatergic (b) and cholinergic neurons (c), but not GABAergic cells (d). (e,f) This cell-type-specific protection from the AKGDm deficiency results from an increased flux through the GABA shunt in GABAergic cells (e), by bypassing the damaged AKGDm (f). GABAergic cells maintain a higher GABA shunt flux because of the expression of glutamate decarboxylase (GAD). Neuroprotective properties of GAD are supported by gene expression. (g) Severely damaged brain regions in Alzheimer’s disease patients have lower GADNMN expression in control brains, whereas high-GADNMN regions (SFG and VCX) show little damage. (h) In Alzheimer’s disease brains, severely affected regions (HIP and entorhinal cortex) show an increase in GADNMN and the GAD-inducing DLX family, suggesting that non-GAD-expressing neurons may be lost in Alzheimer’s disease. EC, entorhinal cortex; HIP, hippocampus; MTG, middle temporal gyrus; PC, posterior cingulate cortex; SFG, superior frontal gyrus; VCX, visual cortex; NMN, neuron marker normalized; inhib., inhibited. All reaction and metabolite abbreviations are defined in Supplementary Tables 1 and 2.
dehydrogenase to bypass part of the tricarboxylic acid (TCA) cycle. However, our models suggest that glutamatergic and cholinergic neurons cannot do so, despite carrying a small flux through the shunt enzymes (Fig. 3e). Support for these results includes recent evidence suggesting that cerebellar granule neurons, which have higher levels of GABA, can absorb perturbations to AKGDm through this shunt28. To identify the mechanism allowing only GABAergic neurons to use the GABA shunt to absorb the AKGDm perturbation, an in silico analysis was performed to identify contributing genes. Briefly, each reaction was removed from the GABAergic model, followed by an assessment of the correlation of flux between AKGDm and oxidative phosphorylation (Supplementary Notes). This analysis suggested that the two isoforms of glutamate decarboxylase (GAD) could provide the cell type–specific neuroprotection. GAD allows the GABA shunt to carry a higher flux following the AKGDm perturbation in GABAergic neurons; however, the lack of GAD in other neuron types greatly limits the use of the GABA shunt in silico (Fig. 3e). Therefore, by fueling the GABA shunt, GAD may play a neuroprotective role, contributing to the sparing of GABAergic systems in earlier stages of Alzheimer’s disease21. Certain populations of glutamatergic and cholinergic cells tend to be lost earlier in Alzheimer’s disease, but others survive. Interestingly, although GAD is canonically a GABAergic gene product, it occasionally shows low expression in other neuron types, including glutamatergic and cholinergic cells29. Therefore, such populations of non-GABAergic, GAD-expressing neurons would also be protected. Thus, we follow with an analysis of the correlation between GAD expression and Alzheimer’s disease pathology for further validation.
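The logic of this reaction-removal screen can be sketched on a toy steady-state network (purely illustrative; the network and bounds are invented, not taken from the GABAergic model). With a shunt reaction running in parallel to an AKGDm-like step, sampled feasible fluxes through that step and the downstream oxidative step decorrelate; deleting the shunt couples them rigidly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: uptake -> A; "AKGDm": A -> B; shunt bypass: A -> B; "oxphos": B -> out.
# Mass balance on A gives v_uptake = v_akgd + v_shunt, and on B gives
# v_oxphos = v_akgd + v_shunt, so the two free fluxes can be sampled directly
# instead of solving S v = 0 explicitly.
n = 5000
v_akgd = rng.uniform(0, 10, n)

# GABAergic-like case: the shunt is open and can compensate for AKGDm flux.
v_shunt = rng.uniform(0, 10, n)
v_oxphos = v_akgd + v_shunt
corr_with_shunt = np.corrcoef(v_akgd, v_oxphos)[0, 1]

# Shunt "knockout" (glutamatergic/cholinergic-like case): v_shunt forced to 0,
# which ties oxidative flux rigidly to the damaged reaction.
v_oxphos_ko = v_akgd.copy()
corr_no_shunt = np.corrcoef(v_akgd, v_oxphos_ko)[0, 1]

print(corr_with_shunt, corr_no_shunt)  # shunt decouples the two fluxes
```

A reaction whose removal drives the correlation toward 1 is, by this criterion, the one absorbing the perturbation, which is how GAD emerges from the screen in the actual models.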
If GAD has neuroprotective capacity in vivo, we would expect that brain regions with less GAD per neuron will be more affected in Alzheimer’s disease, whereas regions with abundant GAD will be spared. To test this hypothesis, we used a compendium of published microarrays of neurons without tangles from six brain regions in Alzheimer’s patients and age-matched, non-Alzheimer’s controls30. In controls, GAD expression levels among the brain regions are consistent with the extent of neuron loss found in Alzheimer’s disease patients; that is, brain regions with more neuron loss in Alzheimer’s disease (e.g., the entorhinal cortex and hippocampus) have lower GAD expression in control patients, whereas relatively unaffected regions in Alzheimer’s disease (e.g., superior frontal gyrus and visual cortex) show much higher levels of GAD expression (Fig. 3g). In addition, if GAD is neuroprotective, neurons with low GAD expression should be lost in Alzheimer’s disease, resulting in an increase in expression of GAD per neuron in histopathologically affected regions. In microarrays, we observed a significant increase in the expression of the brain-specific GAD2 in the entorhinal cortex and hippocampus in Alzheimer’s disease (P = 0.0050 and 0.018, respectively; SAM test) (Fig. 3h). The expression of all other neuron-specific genes was tested as a control and showed no correlation with Alzheimer’s disease pathology (Supplementary Notes and Fig. 2 therein), except the genes encoding DLX2 and DLX5, which induce GAD expression in the brain (Fig. 3h and Supplementary Notes and Fig. 3 therein)31. Therefore, these results lend additional support to the possibility that GAD provides a neuroprotective effect and that this effect is correlated with the regional specificity of Alzheimer’s disease. Moreover, the model was able to guide the identification of a gene and the mechanism for its role in Alzheimer’s disease.
Pathway-based analysis of multicellular models
Atrophy and the cell type–specific effects investigated above cannot fully explain the decreased metabolic rate in Alzheimer’s disease in
many brain regions32. Therefore, using the models as a context for analyzing gene expression data from Alzheimer’s patients and age-matched controls, we searched for downregulated metabolic pathways within surviving cells in metabolically suppressed brain regions. This was done using PathWave33, a method that takes each model and maps microarray data onto its metabolic pathways, which have been optimally arranged on a two-dimensional grid. Haar wavelet transforms then detect concerted regulation of neighboring enzymes in the models. Thus, we could identify groups of differentially expressed genes that are not solely related by annotation but are connected by metabolic pathway function, thereby providing a mechanistic view of the effects of differential gene expression (see Online Methods and Supplementary Notes). This analysis revealed that different brain regions show distinct changes in metabolic pathways (Supplementary Table 6). The visual cortex and superior frontal gyrus lack any differentially expressed pathways, consistent with previous work showing little change in metabolic rate in these regions in Alzheimer’s disease30. However, the posterior cingulate cortex (PC) and middle temporal gyrus (MTG) have the most significantly differentially expressed pathways (23 and 18, respectively). These two regions show significantly decreased metabolic rates in Alzheimer’s disease but fewer histopathological effects30. Both the entorhinal cortex and hippocampus also show decreases in the expression of nine metabolic pathways, though the number may be an underestimate: these regions suffer extensive neuron loss, and only histopathologically healthy neurons were expression profiled, so more affected neurons may already have been lost or excluded.
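The wavelet step can be sketched in one dimension (the actual method arranges pathways on an optimized two-dimensional grid; the expression values below are made up). A Haar transform assigns a large coarse-scale coefficient to a contiguous block of suppressed reactions, while the same values scattered along the pathway yield only smaller fine-scale coefficients:

```python
import numpy as np

def haar_coeffs(x):
    """Return the Haar detail coefficients of x (length a power of two), level by level."""
    x = np.asarray(x, dtype=float)
    details = []
    while len(x) > 1:
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # local averages
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # local differences
        details.append(detail)
        x = approx                                   # recurse on the averages
    return details

# Eight neighboring reactions along a (here 1-D) pathway grid, as expression changes.
concerted = np.array([0, 0, 0, 0, -2, -2, -2, -2])   # a suppressed sub-pathway
scattered = np.array([0, -2, 0, -2, 0, -2, 0, -2])   # same values, no spatial structure

score_concerted = max(np.abs(d).max() for d in haar_coeffs(concerted))
score_scattered = max(np.abs(d).max() for d in haar_coeffs(scattered))
print(score_concerted, score_scattered)
```

Scoring the maximal coefficient therefore rewards concerted regulation of neighboring enzymes over isolated changes, which is the property PathWave exploits.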
PathWave also revealed that the four brain regions showing substantially lower metabolic rates in Alzheimer’s disease (PC, MTG, hippocampus and entorhinal cortex)32 also show a significant suppression of glycolysis and the TCA cycle (P < 5 × 10−4) (Fig. 4). In addition, the hippocampus, MTG and PC show a suppression of the malate-aspartate shuttle and oxidative phosphorylation. Individual regions also show a suppression of other pathways, such as heme biosynthesis (MTG, PC), ethanol metabolism (entorhinal cortex, PC) and several amino acid metabolism pathways. Thus, using the pathway topology of our models as a context for microarray analysis, we find that the decreased metabolic rate in specific regions in Alzheimer’s disease is associated with the downregulation of central metabolic gene expression in histopathologically normal neurons.

Identifying metabolic properties relevant to treatment
Deficiencies in cholinergic neurotransmission have long been believed to contribute to Alzheimer’s disease. Thus, treatment of Alzheimer’s patients often includes efforts to enhance the cholinergic system. Although relevant to treatment, it is still not clearly understood which pathways are used to synthesize acetylcholine and what percentage of the brain participates in cholinergic neurotransmission. Studies have demonstrated that cytosolic acetyl-CoA, which is used to synthesize the neurotransmitter acetylcholine, comes from the acetyl-CoA formed in the mitochondria. This tight coupling of acetylcholine to mitochondrial metabolism allows treatments that increase glucose uptake in the brain to improve cognitive functions in rats34 and humans with severe cholinergic cognitive pathologies, such as Alzheimer’s disease and trisomy 21 (ref. 35). Pathways that transport acetyl-CoA carbon to the cytosol have been suggested; however, the mechanism is still not clear36.
Constraint-based modeling helped identify two pathways that could indirectly transport acetyl-CoA into the cytosol and provided insight into needed complementary pathways. To identify these pathways, candidate reaction sets were generated by randomly removing reactions from Recon 1 until a minimal set remained that couples the mitochondrial and cytosolic
[Figure 4 heat maps, panels a,b: glycolysis and TCA-cycle reactions across brain regions (EC, HIP, MTG, PC, SFG, VCX), shaded by −log10(P value) of up- or downregulation of surrounding reactions.]
Figure 4 Analysis of metabolic pathways in different regions of brains affected by Alzheimer’s disease. (a,b) Pathway analysis demonstrates that histopathologically normal cells from regions of the brain that are metabolically affected (EC, hippocampus, MTG and PC) demonstrate a significant suppression of central metabolic pathways, such as glycolysis (a) and the TCA cycle and surrounding reactions (b). Regions that are metabolically less affected (SFG and VCX) show no significant suppression. Reaction suppression shown here is a composite expression of the reaction-associated genes and the genes of closely connected reactions. Only significantly changed reactions are shown (FDR = 0.05). EC, entorhinal cortex; HIP, hippocampus; MTG, middle temporal gyrus; PC, posterior cingulate cortex; SFG, superior frontal gyrus; VCX, visual cortex. All reaction and metabolite abbreviations are defined in Supplementary Tables 1 and 2.
acetyl-CoA pools. This was repeated until more than 21,000 unique minimal reaction sets were identified. Singular value decomposition was then used to identify dominant sets of reactions that frequently co-occur. The first singular vector is dominated by reactions common to most of the reaction sets (e.g., water transport across cell membranes). However, the second and third singular vectors are dominated by reactions that either consistently co-occur or never co-occur (Fig. 5). These reactions cluster into three distinct pathways, providing hypotheses of pathways coupling acetylcholine synthesis and mitochondrial metabolism. After re-examining the literature and the ’omics data used in the reconstruction, we were able to eliminate the pathway using cytosolic acetyl-CoA synthetase (Fig. 5a) and
validate pathways using ATP-citrate lyase (ACITL) or cytosolic acetyl-CoA C-acetyltransferase (ACACT1r), which transport acetyl-CoA from the mitochondria to the cytosol on citrate or acetoacetate, respectively (Fig. 5b,c and Supplementary Notes). These two pathways were included in our cholinergic model, contributing to a correlation between the flux through mitochondrial pyruvate dehydrogenase and choline acetyltransferase (r = 0.45, P = 3 × 10−247), consistent with the experimentally observed coupling between mitochondrial metabolism and acetylcholine production. ACITL and ACACT1r also correlate with choline acetyltransferase flux (P < 3 × 10−90). Moreover, it has been reported that the inhibition of ACITL reduces the acetylcholine production rate by 30%36.
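The randomized pruning and SVD clustering used to obtain these candidate pathways can be sketched on a toy network. Everything below, including the six-reaction network, the coupling test `couples_pools` and the reaction indices, is invented for illustration and is not taken from Recon 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for Recon 1: six reactions; the two acetyl-CoA pools are
# "coupled" if reaction 4 plus one of two alternative carrier routes
# ({0, 1} or {2, 3}) survives pruning. Indices are purely illustrative.
def couples_pools(active):
    return 4 in active and ({0, 1} <= active or {2, 3} <= active)

def minimal_set(n_rxns=6):
    """Randomly prune reactions while the coupling is preserved."""
    active = set(range(n_rxns))
    for r in rng.permutation(n_rxns):
        if couples_pools(active - {r}):
            active = active - {r}
    return frozenset(active)

# Collect unique minimal reaction sets (the paper gathers >21,000 on Recon 1).
unique_sets = {minimal_set() for _ in range(200)}

# Binary reactions-by-pathways matrix; SVD separates reactions that always
# co-occur within one route from reactions that never co-occur across routes.
M = np.array([[r in s for s in sorted(unique_sets, key=sorted)]
              for r in range(6)], dtype=float)
U, sv, Vt = np.linalg.svd(M, full_matrices=False)
```

On this toy network the pruning always terminates at one of the two alternative routes, so mixed signs in the singular vectors flag mutually exclusive reaction sets, mirroring how the second and third singular vectors behave in Figure 5.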
Figure 5 Singular value decomposition (SVD) of feasible pathways elucidates potential pathways that allow for coupling of mitochondrial acetyl-CoA metabolism and cytosolic acetylcholine production. We computed 21,000 unique feasible reaction sets, each showing transport of mitochondrial acetyl-CoA carbon to the cytosol in human metabolism. (a–c) Dominant loadings from the SVD of a matrix of all 21,000 pathways represented the reactions making up three primary pathways (d–f) that allow this coupling of mitochondrial metabolism to acetylcholine production, by carrying the acetyl-CoA carbon on N-acetyl-l-aspartate (a,d), citrate (b,e) or acetoacetate (c,f). As shown by the second singular vector, reactions in the pathway with citrate tend to be missing from pathways when the reactions for the acetoacetate pathway are included. The third singular vector shows a similar relationship for the N-acetyl-l-aspartate pathway. The ’omics data and known enzyme localization only support the usage of citrate and acetoacetate as potential carriers in neurons.
Figure 6 Model-aided prediction of cholinergic contribution is consistent with experimental acetylcholine production. Percent brain cholinergic neurotransmission was predicted based on 14 sets of experimental data in which brain minces were fed [1-14C]pyruvate or [2-14C]pyruvate, followed by measurement of 14C-labeled CO2 and acetylcholine. (a) For each experiment, the feasible amount of the brain that can generate the experimental response was computed, centering at 3.3%. (b) This parameter was used in the analysis, and the updated model predictions were consistent with experimental data, such as seen in the case of treating the brain minces with [1-14C]pyruvate and increasing levels of the pyruvate-dehydrogenase inhibitor bromopyruvate. (c,d) Moreover, the updated model predictions were consistent with measured 14C-labeled CO2 and acetylcholine production for brain minces that were treated with three PDHm inhibitors withheld from previous computations for both supplementation with [1-14C]pyruvate (c) and [2-14C]pyruvate (d). Error bars on the simulation results represent 25th and 75th percentiles. ChAT, choline acetyltransferase.
The in silico inhibition of ACITL reduces acetylcholine production by 7.3%. It is possible that the in silico decrease is smaller because the model can immediately adapt to the perturbation, whereas in vivo regulatory responses would take time to adapt. Notably, the in silico inhibition of ACACT1r reduces acetylcholine production by 39%. Thus, cholinergic neurotransmission depends on redundant pathways, and acetoacetate may play a more dominant role in transporting mitochondrial acetyl-CoA to the cytosol. Our analysis provides further insight into the coupling of mitochondrial metabolism to acetylcholine synthesis, which aids in the treatment of cholinergic disorders. However, knowledge of the abundance of cholinergic neurotransmission also serves this purpose. It is difficult to identify cholinergic neurons based solely on cell morphology, because cholinesterases and immunohistochemical markers for cholinergic neurons are also found in noncholinergic neurons and other tissues37. Therefore, it is unknown what percentage of all neurotransmission is cholinergic. Using our cholinergic model, we computed the percent contribution of cholinergic neurotransmission based on published data38. The data were obtained from rat brain minces, incubated in solutions containing [1-14C]pyruvate or [2-14C]pyruvate. Both acetylcholine and radiolabeled CO2 were measured at various titrations of several pyruvate dehydrogenase inhibitors. The cholinergic model was subjected to similar levels of pyruvate dehydrogenase inhibition. The simulations successfully reproduced the experimental linear relationship between acetylcholine production and metabolic rate, and acetylcholine production was correlated with CO2 release (r = 0.68). The fraction of cholinergic neurotransmission for the brain was computed by randomly choosing points from both the distributions of experimental data and distributions predicted by the simulations.
A scaling factor was subsequently found that reconciles the two. This was repeated for 14 different combinations of pyruvate labeling and pyruvate dehydrogenase inhibitors38, yielding a median predicted cholinergic portion of total brain neurotransmission of 3.3% (Fig. 6a). After adding this new parameter to the model, the predictions correspond well with experimental data sets (Fig. 6b), including data representing three pyruvate dehydrogenase inhibitors withheld from the previous computations (Fig. 6c,d). Thus, the model was used in
conjunction with experimental data to gain insight into physiological observations and derive physiological parameters, which are dependent on systems-level activity and are relevant to treatment.

DISCUSSION
In this study we presented a workflow for generating tissue-specific, multicellular metabolic models. Through the analysis and integration of ’omics data, followed by manual curation, we used this workflow to build a first-draft, manually curated, multicellular, metabolic reconstruction of brain energy metabolism. Three models were generated from this reconstruction, representing different types of neurons coupled to astrocytes. We used these models in three distinct analyses, in which we made predictions and gained systems-level insights into Alzheimer’s disease and cholinergic neurotransmission. As experimental methods and data resolution improve, the accuracy of these models and their predictions should also improve. Improvements in neuroimaging and metabolomics will allow for more precise quantification of metabolite flow through the blood-brain barrier, which is of interest because dysfunction of this system accompanies many neurological disorders and injuries39. In addition, improvements in transcriptomics and proteomics will provide higher-resolution quantification of cell- and organelle-specific genes and proteins. These data will allow models to account for neuron groups in specific brain regions, subcellular heterogeneity within cells and the inclusion of less abundant glial cells. For example, higher-resolution models may provide insight into metabolic changes in specific cell populations, such as the structures closely related to the olfactory system, which are affected early on in Alzheimer’s disease40. Insight into mammalian tissue-specific metabolism may be gained as more multicellular models are constructed.
Our models demonstrate metabolic coupling and synergistic activities that more coarse-grained models miss, as the three analyses presented here were not possible using Recon 1 or the previous brain metabolism models. The compartmentalization of metabolic processes within cells41, between cells42 and in host-pathogen interactions43 has an important role in normal physiology. Therefore, such models may provide greater insight and more accurately predict both true cellular functions and responses to medical interventions44. This insight will be attained as many analytical methods, in addition to those demonstrated in this work, will be
developed and deployed in multicellular reconstructions ranging from constraint-based analyses5 to topological studies45. This study serves as an example of how mechanistic relationships between genotype and phenotype can be built through the difficult task of multi-omic data integration46. From the genotype one can begin to reconstruct the network for an organism. The integration of high-throughput data and careful manual curation can add context-specific mechanistic network structure to genomics information. Thus, this network becomes a representation of complex genetic interactions and biochemical mechanisms underlying observed phenotypes. This complex, but mechanistic, relationship between the genotype and phenotype can be used as a foundational structure upon which additional high-throughput data can be analyzed and predictive simulations can be conducted, thus leading to improved understanding, testable hypotheses and increased knowledge1,2,44.
Methods
Methods and any associated references are available in the online version of the paper at http://www.nature.com/naturebiotechnology/.

Note: Supplementary information is available on the Nature Biotechnology website.

Acknowledgments
The authors thank G. Gibson at Cornell University, I. Thiele at the University of Iceland and M. Abrams, M. Mo and C. Barrett at UCSD for suggestions pertaining to this work. This work was funded in part by a Fulbright fellowship, a National Science Foundation IGERT Plant Systems Biology training grant (no. DGE-0504645), US National Institutes of Health grants 2R01GM068837_05A1 and R01 GM071808 and the Helmholtz Alliance on Systems Biology and the BMBF by the NGFN+ neuroblastoma project ENGINE.

AUTHOR CONTRIBUTIONS
N.E.L., J.K.C., A.Y., N.P., M.P.A. and B.O.P. conceived and designed the model. N.E.L., J.K.C., G.S., R.K., R.E., J.S., A.B. and R.A.L. performed data analyses. The manuscript was written by N.E.L., G.S., J.S., A.B. and B.O.P.

COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.

Published online at http://www.nature.com/naturebiotechnology/. Reprints and permissions information is available online at http://npg.nature.com/reprintsandpermissions/.

1. Feist, A.M. & Palsson, B.O. The growing scope of applications of genome-scale metabolic reconstructions using Escherichia coli. Nat. Biotechnol. 26, 659–667 (2008).
2. Oberhardt, M.A., Palsson, B.O. & Papin, J.A. Applications of genome-scale metabolic reconstructions. Mol. Syst. Biol. 5, 320 (2009).
3. Breitling, R., Vitkup, D. & Barrett, M.P. New surveyor tools for charting microbial metabolic maps. Nat. Rev. Microbiol. 6, 156–161 (2008).
4. Thiele, I. & Palsson, B.O. A protocol for generating a high-quality genome-scale metabolic reconstruction. Nat. Protoc. 5, 93–121 (2010).
5. Lewis, N.E., Jamshidi, N., Thiele, I. & Palsson, B.O. Metabolic systems biology. in Encyclopedia of Complexity and Systems Science (ed. Meyers, R.A.) 5535–5552 (Springer, New York, 2009).
6. Orth, J.D., Thiele, I. & Palsson, B.O. What is flux balance analysis? Nat. Biotechnol. 28, 245–248 (2010).
7. Duarte, N.C. et al. Global reconstruction of the human metabolic network based on genomic and bibliomic data. Proc. Natl. Acad. Sci. USA 104, 1777–1782 (2007).
8. Becker, S.A. & Palsson, B.O. Context-specific metabolic networks are consistent with experiments. PLoS Comput. Biol. 4, e1000082 (2008).
9. Shlomi, T., Cabili, M.N., Herrgard, M.J., Palsson, B.O. & Ruppin, E. Network-based prediction of human tissue–specific metabolism. Nat. Biotechnol. 26, 1003–1010 (2008).
10. Jerby, L., Shlomi, T. & Ruppin, E. Computational reconstruction of tissue-specific metabolic models: application to human liver metabolism. Mol. Syst. Biol. 6, 401 (2010).
11. Ponten, F. et al. A global view of protein expression in human cells, tissues, and organs. Mol. Syst. Biol. 5, 337 (2009).
12. Palsson, B.O. Systems Biology: Properties of Reconstructed Networks 322 (Cambridge University Press, Cambridge/New York, 2006).
13. Mishra, G.R. et al. Human protein reference database–2006 update. Nucleic Acids Res. 34, D411–D414 (2006).
14. Fujii, Y., Imanishi, T. & Gojobori, T. H-Invitational Database: integrated database of human genes. Tanpakushitsu Kakusan Koso 49, 1937–1943 (2004).
15. Reidegeld, K.A. et al. The power of cooperative investigation: summary and comparison of the HUPO Brain Proteome Project pilot study results. Proteomics 6, 4997–5014 (2006).
16. Chatziioannou, A., Palaiologos, G. & Kolisis, F.N. Metabolic flux analysis as a tool for the elucidation of the metabolism of neurotransmitter glutamate. Metab. Eng. 5, 201–210 (2003).
17. Cakir, T., Alsan, S., Saybasili, H., Akin, A. & Ulgen, K.O. Reconstruction and flux analysis of coupling between metabolic pathways of astrocytes and neurons: application to cerebral hypoxia. Theor. Biol. Med. Model. 4, 48 (2007).
18. Occhipinti, R., Puchowicz, M.A., LaManna, J.C., Somersalo, E. & Calvetti, D. Statistical analysis of metabolic pathways of brain metabolism at steady state. Ann. Biomed. Eng. 35, 886–902 (2007).
19. Reiman, E.M. et al. Functional brain abnormalities in young adults at genetic risk for late-onset Alzheimer’s dementia. Proc. Natl. Acad. Sci. USA 101, 284–289 (2004).
20. Ginsberg, S.D., Che, S., Counts, S.E. & Mufson, E.J. Single cell gene expression profiling in Alzheimer’s disease. NeuroRx 3, 302–318 (2006).
21. Lai, M.K.P., Ramirez, M.J., Tsang, S.W.Y. & Francis, P.T. Alzheimer’s disease as a neurotransmitter disease. in Neurobiology of Alzheimer’s Disease (eds. Dawbarn, D. & Allen, S.J.) 245–281 (Oxford University Press, New York, 2007).
22. Fukui, H., Diaz, F., Garcia, S. & Moraes, C.T. Cytochrome c oxidase deficiency in neurons decreases both oxidative stress and amyloid formation in a mouse model of Alzheimer’s disease. Proc. Natl. Acad. Sci. USA 104, 14163–14168 (2007).
23. Bubber, P., Haroutunian, V., Fisch, G., Blass, J.P. & Gibson, G.E. Mitochondrial abnormalities in Alzheimer brain: mechanistic implications. Ann. Neurol. 57, 695–703 (2005).
24. Gibson, G.E. et al. Alpha-ketoglutarate dehydrogenase in Alzheimer brains bearing the APP670/671 mutation. Ann. Neurol. 44, 676–681 (1998).
25. Casley, C.S., Canevari, L., Land, J.M., Clark, J.B. & Sharpe, M.A. Beta-amyloid inhibits integrated mitochondrial respiration and key enzyme activities. J. Neurochem. 80, 91–100 (2002).
26. Hoshi, M. et al. Regulation of mitochondrial pyruvate dehydrogenase activity by tau protein kinase I/glycogen synthase kinase 3beta in brain. Proc. Natl. Acad. Sci. USA 93, 2719–2723 (1996).
27. Gorman, A.M., Ceccatelli, S. & Orrenius, S. Role of mitochondria in neuronal apoptosis. Dev. Neurosci. 22, 348–358 (2000).
28. Santos, S.S. et al. Inhibitors of the alpha-ketoglutarate dehydrogenase complex alter [1-13C]glucose and [U-13C]glutamate metabolism in cerebellar granule neurons. J. Neurosci. Res. 83, 450–458 (2006).
29. Hassel, B., Johannessen, C.U., Sonnewald, U. & Fonnum, F. Quantification of the GABA shunt and the importance of the GABA shunt versus the 2-oxoglutarate dehydrogenase pathway in GABAergic neurons. J. Neurochem. 71, 1511–1518 (1998).
30. Liang, W.S. et al. Altered neuronal gene expression in brain regions differentially affected by Alzheimer’s disease: a reference data set. Physiol. Genomics 33, 240–256 (2008).
31. Stuhmer, T., Anderson, S.A., Ekker, M. & Rubenstein, J.L. Ectopic expression of the Dlx genes induces glutamic acid decarboxylase and Dlx expression. Development 129, 245–252 (2002).
32. Ibanez, V. et al. Regional glucose metabolic abnormalities are not the result of atrophy in Alzheimer’s disease. Neurology 50, 1585–1593 (1998).
33. Schramm, G. et al. PathWave: discovering patterns of differentially regulated enzymes in metabolic pathways. Bioinformatics 26, 1225–1231 (2010).
34. Ragozzino, M.E., Pal, S.N., Unick, K., Stefani, M.R. & Gold, P.E. Modulation of hippocampal acetylcholine release and spontaneous alternation scores by intrahippocampal glucose injections. J. Neurosci. 18, 1595–1601 (1998).
35. Watson, G.S. & Craft, S. Modulation of memory by insulin and glucose: neuropsychological observations in Alzheimer’s disease. Eur. J. Pharmacol. 490, 97–113 (2004).
36. Cooper, J.R. Unsolved problems in the cholinergic nervous system. J. Neurochem. 63, 395–399 (1994).
37. Karczmar, A.G. Cholinergic cells and pathways. in Exploring the Vertebrate Cholinergic Nervous System 686 (Springer, New York, 2006).
38. Gibson, G.E., Jope, R. & Blass, J.P. Decreased synthesis of acetylcholine accompanying impaired oxidation of pyruvic acid in rat brain minces. Biochem. J. 148, 17–23 (1975).
39. Abbott, N.J., Ronnback, L. & Hansson, E. Astrocyte-endothelial interactions at the blood-brain barrier. Nat. Rev. Neurosci. 7, 41–53 (2006).
40. Thompson, M.D., Knee, K. & Golden, C.J. Olfaction in persons with Alzheimer’s disease. Neuropsychol. Rev. 8, 11–23 (1998).
41. Schryer, D.W., Peterson, P., Paalme, T. & Vendelin, M. Bidirectionality and compartmentation of metabolic fluxes are revealed in the dynamics of isotopomer networks. Int. J. Mol. Sci. 10, 1697–1718 (2009).
42. Serres, S., Raffard, G., Franconi, J.M. & Merle, M. Close coupling between astrocytic and neuronal metabolisms to fulfill anaplerotic and energy needs in the rat brain. J. Cereb. Blood Flow Metab. 28, 712–724 (2008).
43. Bordbar, A. et al. Insight into human alveolar macrophage and M. tuberculosis interactions via metabolic reconstructions. Mol. Syst. Biol. 6, 422 (2010).
44. Chang, R.L., Xie, L., Xie, L., Bourne, P.E. & Palsson, B.O. Drug off-target effects predicted using structural analysis in the context of a metabolic network model. PLoS Comput. Biol. 6, e1000938 (2010).
45. Lee, D.S. et al. The implications of human metabolic network topology for disease comorbidity. Proc. Natl. Acad. Sci. USA 105, 9880–9885 (2008).
46. Palsson, B.O. & Zengler, K. The challenges of integrating multi-omics data sets. Nat. Chem. Biol. 6, 787–789 (2010).
ONLINE METHODS (doi:10.1038/nbt.1711)
Reconstruction of iNL403. This work focuses on the core of cerebral energy metabolism and the pathways that play a critical role in cell type–specific functions in the brain. The pathways in this work include mitochondrial metabolic pathways, central metabolic pathways closely tied to mitochondrial function and additional pathways that are needed for modeling neuron and astrocyte functions. To reconstruct these pathways, a list of known human mitochondrial, glycolytic and transport reactions was extracted from the manually curated human metabolic reconstruction Recon 1 (ref. 7). From this list, reactions were directly added to the brain reconstruction if brain-localized protein or gene expression was suggested by the Human Protein Reference Database (release 5)13 or H-Inv14, both of which provide tissue expression presence calls for each gene. Proteomics data from live human brain, acquired for the HUPO brain proteome project15, were also used (see Supplementary Table 7 for accession numbers). Additional reactions were added as dictated by biochemical data from the literature (Supplementary Table 1). This reconstruction is designated iNL403 because it contains 403 genes. Reactions and pathways were manually curated to verify their presence in the human brain and to determine cell-type localization, thus yielding a first-draft metabolic reconstruction of the brain metabolic network. Reactions unique to the different neuron types were determined from the literature (see notes in Supplementary Table 1), and consist largely of the reactions needed to make and metabolize their associated neurotransmitters. A list of all reactions, supporting data, citations and a comparison with previous brain metabolism models can be found in Supplementary Tables 1–4 and 8. Three SBML models representing the union of reactions from the neuron-type specific models can be found as Supplementary Models 1–3.
Neuron-type specific models in SBML format and model updates can be obtained from http://systemsbiology.ucsd.edu/In_Silico_Organisms/Brain.

Constraint-based modeling. Constraint-based modeling and analysis of metabolic networks have been previously described5,12. Briefly, all of the reactions are described mathematically by a stoichiometric matrix, S, of size m × n, where m is the number of metabolites and n is the number of reactions, and each element is the stoichiometric coefficient of the metabolite in the corresponding reaction. The mass balance equations at steady state are represented as

S · v = 0,   (1)
where v is the flux vector12. Maximum and minimum fluxes and reaction reversibility, when known, are placed on each reaction, further constraining the system as follows:

vmin ≤ v ≤ vmax.   (2)
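Equations (1) and (2) define a linear feasible space; a standard use of such a space is flux balance analysis6, which optimizes one flux subject to these constraints. A minimal sketch with an invented three-reaction network (the S matrix and bounds below are illustrative, not from the brain models):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites A, B; columns: reactions).
# R1: -> A, R2: A -> B, R3: B ->
S = np.array([[1, -1,  0],
              [0,  1, -1]], dtype=float)

# Bounds vmin <= v <= vmax (eq. 2); uptake through R1 capped at 10.
bounds = [(0, 10), (0, 1000), (0, 1000)]

# Flux balance analysis: maximize v3 subject to S v = 0 (eq. 1).
c = np.array([0, 0, -1.0])            # linprog minimizes, so negate v3
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
v = res.x                             # optimal flux vector, here [10, 10, 10]
```

The uptake bound on R1 propagates through the mass balance, so the maximal output flux equals the uptake cap; this is the same mechanism by which the exchange-reaction bounds described below constrain the brain models.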
At this point the model can then be used with many constraint-based methods5 to study network characteristics. The S matrix was constructed with the mass and charge balanced reactions from the reconstruction. Select metabolites, known to cross the blood-brain barrier, were added as exchange reactions, allowing those metabolites to leave or enter the extracellular space in the model. A few metabolites from network gaps were allowed to enter or leave the system from the cytosol or mitochondria. This was only done when transporter mechanisms or subsequent pathway steps were not known, and when their entrance or removal from the system was necessary for model function. When available, cerebral metabolic rates from published data were used to constrain the upper and lower bounds of the exchange reactions47,48. All parameters are detailed in Supplementary Tables 1 and 5. Parameters derived from rat data were varied to demonstrate that results presented here were robust and therefore relevant to human metabolism, and constraint-based methods were further employed to assess and compare the model to metabolic functions of the brain (Supplementary Notes).

Monte Carlo sampling. Monte Carlo sampling was used to generate a set of feasible flux distributions (points). The method is based on the artificially centered hit-and-run algorithm with slight modifications. Initially, a set of nonuniform pseudo-random points, called warm-up points, is generated. In a
series of iterations, each point is randomly moved, always remaining within the feasible flux space. This is done by (i) choosing a random direction, (ii) computing the limits of how far one can travel in that direction and (iii) choosing a new random point along this line. After many iterations, the set of points is mixed and approaches a uniform sample of the solution space, thus providing a distribution for each reaction that represents the range and probability of the flux for each reaction, given the network topology and model constraints. For more detail, see the Supplementary Notes.

Simulating enzyme deficiencies. Enzyme deficiencies were obtained from the literature23. To simulate each deficiency, the distribution for all candidate flux states was determined using Monte Carlo sampling. From this distribution, the most probable flux was found and the reaction upper bound was reduced by the fraction reported in the literature. All candidate states were then recomputed and compared with normal candidate flux states.

Alzheimer’s disease microarray analysis. Microarrays were obtained from the Gene Expression Omnibus (GSE5281). Arrays consist of 161 Affymetrix Human Genome U133 Plus 2.0 Arrays that profile the gene expression from laser-capture microdissected, histopathologically normal neurons from six different brain regions of Alzheimer’s disease patients and age-matched controls. These arrays were not used in model construction. Arrays were normalized using the GC Robust Multi-array Average (gcrma) normalization function in the bioconductor package for R. Pearson’s correlation coefficients were computed for all array pairs, and arrays with r < 0.8 were discarded (that is, GSM119643, GSM119661, GSM119666 and GSM119676). Different arrays had different levels of glial contamination. Therefore, to assess the amount of GAD (neuron-specific), the GAD1 and GAD2 levels on each array were normalized as follows.
For each array, the relative amount of neuron material was determined by computing the ratio of each of four neuron-specific genes to its median level across all arrays (ḡ_i). Neuron-specific genes were chosen to represent different neuron parts, including the soma, axon and synaptic bouton (TUBB3 (ref. 49), NeuN50, SYN1 (ref. 51) and ACTL6B52). These ratios were summed to compute a relative amount of neuron material (NM) for each array, j,

NM_j = Σ_i g_i,j / ḡ_i,   (3)
for each neuron marker gene g_i. Because GAD genes are neuron specific in the central nervous system, these were normalized for each array by the associated relative amount of neuron material, thus termed GADNMN for neuron-marker normalized GAD. It is assumed in this study that the four neuron marker transcripts used here do not change their expression level between Alzheimer’s patients and age-matched controls, in the cell populations sampled for microarray analysis. This assumption is made since there are no published studies that demonstrate that these genes change expression in healthy cells through the progression of Alzheimer’s disease and because efforts were made to profile only histopathologically normal neurons. It is possible that there is downregulation of some neuron marker transcripts among neurons bearing neurofibrillary tangles, as synapse loss is a hallmark of Alzheimer’s disease20. However, the arrays used in this study profile histopathologically normal neurons and the surrounding glial cells. Therefore, it is not expected that there will be significant changes in the expression of these key neuronal genes in the data used here. Lastly, the inclusion of multiple genes from different cell regions aims to minimize the effects from expression changes not attributable to glial cell contamination. The results presented in this work are robust to the removal of each neuron marker gene (Supplementary Notes and Fig. 4 therein).

PathWave analysis. PathWave allows for the elucidation of pathways that significantly change together. Its advantage over other methods, such as Gene Set Enrichment Analysis, is that it takes metabolic network connectivity into account to identify changes in pathways, thereby identifying significantly differentially regulated pathways in which the gene products are mechanistically connected. PathWave was used as published previously33.
The reactions in each model were subdivided into biologically relevant functional pathways. Reactions that
were involved in multiple pathways were added to each associated pathway. Reactions and their pathways are shown in Supplementary Table 9. PathWave analyzes microarray data using metabolic reaction connectivity. For each model, pathways were simplified by removing all metabolites with connectivity >8 in the metabolic network. Exceptions are listed in Supplementary Table 10. For each of these simplified metabolic pathways from each model, reactions were laid onto a two-dimensional, regular square lattice grid. To optimally preserve neighborhood relations of the reactions, adjacent nodes of the network were placed onto the grid as close to each other as possible. We mapped each expression data set, obtained from the Gene Expression Omnibus (GSE5281), onto the corresponding reactions of the transcribed enzymes. If a reaction was catalyzed by a complex of proteins, the average expression was taken. The resulting expression values of each reaction were z-transformed. Haar wavelet transforms on the optimized grid representation of each pathway were performed to explore every possible expression pattern of neighboring reactions and to define groups of reactions within a pathway that showed significant differences between samples of different conditions. To obtain significance values, the sample labels were permuted (n = 10,000) and scores were calculated for each wavelet and permutation. The scores represent the absolute value of the logarithm of the P-value for each wavelet feature, calculated by t-tests. For the best hit (highest score of the nonpermuted wavelet features), a P-value was obtained from the reference distributions and represented the significance for the corresponding pathway. The P-value for each pathway was corrected for multiple testing (false-discovery rate (FDR) = 0.05)53. Only pathways with more than three significantly differentially regulated reactions were further considered (FDR = 0.05).
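The label-permutation scoring can be sketched for a single feature. The expression values below are simulated, and the Haar wavelet step is replaced by a direct two-group t-test for brevity; this is an illustration of the permutation idea, not the full PathWave pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated z-scored expression of one feature across 8 control and
# 8 Alzheimer's samples, with a true downward shift in the second group.
x = np.concatenate([rng.normal(0.0, 1.0, 8), rng.normal(-1.5, 1.0, 8)])
labels = np.array([0] * 8 + [1] * 8)

def score(x, labels):
    """|log10 P| of a two-sample t-test, as in the PathWave scoring."""
    t, p = stats.ttest_ind(x[labels == 0], x[labels == 1])
    return -np.log10(p)

obs = score(x, labels)

# Null distribution: recompute the score under permuted sample labels,
# then report an empirical P-value for the observed score.
null = np.array([score(x, rng.permutation(labels)) for _ in range(10000)])
p_emp = (1 + np.sum(null >= obs)) / (1 + len(null))
```

Comparing the observed score against its permutation null is what makes the procedure robust to the non-standard distribution of the wavelet-derived scores.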
To obtain local patterns in the pathways, all wavelet features were statistically tested applying t-tests and corrected for multiple testing. Statistically significant features contained those subgraphs of the metabolic network that showed differentially regulated patterns. Reconstructing these subgraphs allowed us to directly detect the regions of interest in the metabolic network (Supplementary Table 11).

Identifying pathways for acetylcholine synthesis. A flux balance analysis-derived approach was used to identify all possible pathways coupling the mitochondrial and cytosolic acetyl-CoA pools using known reactions in human metabolism. First, the potential pathways were identified using Recon 1 (ref. 7). A reaction that supplies mitochondrial acetyl-CoA was added to the model. A second reaction was added to remove cytosolic acetyl-CoA from the model. Lastly, all other metabolite uptake and secretion constraints were opened. Reactions were randomly removed until a minimum pathway was identified, capable of carrying flux between mitochondrial and cytosolic acetyl-CoA. This was repeated until >21,000 unique sets of reactions were identified. An r × p binary matrix was then built with the p unique reaction sets consisting of r reactions. Each element (i,j) of this matrix was 0 if reaction i was absent from pathway j or 1 if reaction i was in pathway j. Rows for all reactions that were
doi:10.1038/nbt.1711
never necessary were subsequently removed from the matrix. Singular value decomposition was then used, followed by varimax factor rotation of the first five singular vectors. Singular vector loadings demonstrated the dominant sets of reactions, and their major dependencies, that could be used to couple mitochondrial acetyl-CoA metabolism and cytosolic acetylcholine metabolism. Predicting cholinergic neurotransmission. The percentage cholinergic neurotransmission was computed based on published data38. The previously published data were obtained from rat brain minces that were incubated in solutions containing [1-14C]pyruvate or [2-14C]pyruvate. Both acetylcholine and radiolabeled CO2 were measured at various titrations of several different inhibitors of pyruvate dehydrogenase (PDHm) (see Supplementary Notes for all inhibitors). Simulations were conducted using the cholinergic model. The models were allowed to take up the same substrates provided experimentally38, at rates consistent with the data (Supplementary Table 5). Monte Carlo sampling was used to identify all feasible flux states. This was done for various levels of PDHm inhibition, ranging from 0 to 90% inhibition. The percentage cholinergic neurotransmission was computed by randomly selecting a feasible flux state from each level of PDHm inhibition and computing the slope of the sum of labeled CO2-producing fluxes and choline acetyltransferase for the different simulations. A similar slope was computed from randomly sampled points from the reported experimental distributions. The ratio of these slopes represents a feasible percentage cholinergic neurotransmission. This was repeated 1,000 times and the median value was reported. Comparisons with the experimental data were done by suppressing the in silico pyruvate dehydrogenase flux until the measured CO2 release rate was obtained. 
At this level of suppression, the resulting predicted acetylcholine production rate was compared with the experimentally measured rates. See Supplementary Notes for more details. 47. Lying-Tunell, U., Lindblad, B.S., Malmlund, H.O. & Persson, B. Cerebral blood flow and metabolic rate of oxygen, glucose, lactate, pyruvate, ketone bodies and amino acids. Acta Neurol. Scand. 62, 265–275 (1980). 48. Lying-Tunell, U., Lindblad, B.S., Malmlund, H.O. & Persson, B. Cerebral blood flow and metabolic rate of oxygen, glucose, lactate, pyruvate, ketone bodies and amino acids. Acta Neurol. Scand. 63, 337–350 (1981). 49. Tischfield, M.A. et al. Human TUBB3 mutations perturb microtubule dynamics, kinesin interactions, and axon guidance. Cell 140, 74–87 (2010). 50. Kim, K.K., Adelstein, R.S. & Kawamoto, S. Identification of neuronal nuclei (NeuN) as Fox-3, a new member of the Fox-1 gene family of splicing factors. J. Biol. Chem. 284, 31052–31061 (2009). 51. De Camilli, P., Cameron, R. & Greengard, P. Synapsin I (protein I), a nerve terminalspecific phosphoprotein. I. Its general distribution in synapses of the central and peripheral nervous system demonstrated by immunofluorescence in frozen and plastic sections. J. Cell Biol. 96, 1337–1354 (1983). 52. Olave, I., Wang, W., Xue, Y., Kuo, A. & Crabtree, G.R. Identification of a polymorphic, neuron-specific chromatin remodeling complex. Genes Dev. 16, 2509–2517 (2002). 53. Benjamini, Y. & Yekutieli, D. The control of the false discovery rate in multiple testing under dependency. Ann. Stat. 29, 1165–1188 (2001).
nature biotechnology
Brief Communications
A robust system for production of minicircle DNA vectors
© 2010 Nature America, Inc. All rights reserved.
Mark A Kay1,2, Cheng-Yi He1,2 & Zhi-Ying Chen1,2

Minicircle DNA vectors allow sustained transgene expression in quiescent cells and tissues. To improve minicircle production, we genetically modified Escherichia coli to construct a producer strain that stably expresses a set of inducible minicircle-assembly enzymes, ΦC31 integrase and I-SceI homing endonuclease. This bacterial strain produces purified minicircles in a time frame and quantity similar to those of routine plasmid DNA preparation, making it feasible to use minicircles in place of plasmids in mammalian transgene expression studies.

A minicircle episomal DNA vector is a circular expression cassette devoid of the bacterial plasmid DNA backbone. These vectors have been used for years in preclinical gene transfer research because of their 10- to 1,000-fold enhancement of long-term transgene expression, compared with regular plasmids, in quiescent tissues in vivo1,2 and in vitro3. The mechanism of enhanced transgene expression is unclear but may result from eliminating heterochromatin formation induced by the plasmid backbone4 and/or from reducing the death of transfected cells from inflammation due to CpG responses when plasmids are delivered by lipid carriers5.

The major obstacle to widespread use of minicircles has been their time-consuming, labor-intensive production. In our previous minicircle production schemes (Fig. 1a), the minicircle producer plasmid contained a transgene expression cassette flanked with attB and attP, a set of inducible enzyme genes (a gene encoding the homing endonuclease I-SceI and two copies of the gene encoding ΦC31 integrase) and an I-SceI recognition site6. The attB and attP sites are the bacterial and phage attachment sites of ΦC31 integrase, and the ΦC31 and I-SceI genes are regulated by the l-arabinose–inducible araC.BAD system.
Minicircle DNA is generated by recombination between the attB and attP sites, and I-SceI initiates the destruction of the plasmid DNA backbone circle by cutting at the engineered I-SceI site (Fig. 1a). Although the yield from this protocol was ~1 mg of minicircle DNA from 1 liter of overnight culture, the preparations still contained ~3–15% of the input minicircle producer plasmid plus the plasmid backbone circle as contaminants. With the CsCl equilibrium gradient centrifugation required to remove these unwanted DNAs, the production procedure is four labor-intensive days longer than routine plasmid production protocols. Other groups have made minicircle DNA vectors using different recombinases, such as the bacteriophage λ integrase7 or Cre recombinase8. Limitations of these
approaches include low yields, high contamination or the need for expensive, labor-intensive techniques to isolate minicircles from bacterial lysates9. We present a system that allows simple, rapid and inexpensive production of high-quality minicircles (Fig. 1b,c).

We reasoned that the impurity plasmid DNAs in our earlier minicircle production schemes (Fig. 1a) resulted largely from an all-or-none phenomenon10, such that in a subset of bacterial cells the l-arabinose transporter AraE is not expressed, resulting in the failure of ΦC31 integrase–directed formation of minicircle and of I-SceI–mediated degradation of the plasmid backbone circle. To overcome this limitation, we replaced the E. coli 'Top10' cells with the BW27783 bacterial strain10, in which the araE gene is driven by the constitutive promoter cp8. When used in combination with previous minicircle producer plasmids, the conversion to minicircle DNA was more complete (Fig. 2), even with the addition of a very small amount (0.001%) of l-arabinose to the culture (Fig. 2a). There was, however, variable but substantial plasmid DNA degradation in these preparations (Fig. 2a,c). We hypothesized that this was due to the presence of endonuclease A, which was confirmed when we knocked the gene out (Fig. 2b,c), resulting in strain BWΔendA (Fig. 2b and Supplementary Fig. 1). Trace amounts of impurity DNAs became visible after knocking out the gene encoding endonuclease A (Fig. 2c), suggesting that recombination after l-arabinose induction was still incomplete. Thus, we engineered the bacterial cells to simultaneously express a second l-arabinose transporter, LacY A177C11, creating the strain 2T (Supplementary Figs. 1 and 2a–d). The wild-type counterpart, LacY, is a lactose transporter that gains l-arabinose transporter function with the described missense mutation. This further reduced but did not completely eliminate the contaminating DNAs (data not shown).
1Departments of Pediatrics and Genetics, Stanford University School of Medicine, Stanford, California, USA. 2The Center for Clinical Science Research, Stanford University School of Medicine, Stanford, California, USA. Correspondence should be addressed to M.A.K. ([email protected]) or Z.-Y.C. ([email protected]).
Received 21 June; accepted 15 October; published online 21 November 2010; doi:10.1038/nbt.1708
nature biotechnology VOLUME 28 NUMBER 12 DECEMBER 2010

Figure 1 Comparison of the present and previous minicircle systems. (a) An earlier version of the minicircle production system. (i) Structure of the previous minicircle producer plasmid p2ΦC31.Transgene. araC, the promoter and the repressor gene of the inducible l-arabinose araC.BAD system; ΦC31, bacteriophage ΦC31 integrase gene; attB and attP, the bacterial and phage attachment sites of the ΦC31 integrase; I-SceI, I-SceI homing endonuclease gene; I-SceIs, the I-SceI recognition site; AmpR, ampicillin resistance gene; ColE1, DNA replication origin. (ii) E. coli strain Top10, the original strain used to produce minicircle. (iii) Flow chart showing the minicircle production protocol (overnight culture; incubation at 32 °C, 250 r.p.m. for 2 h in LB containing 1% l-arabinose; pelleting of the bacteria by centrifugation; isolation of the minicircle plasmid on a purification column; removal of contaminating plasmid by CsCl gradient centrifugation; removal of ethidium bromide by isopropanol extraction; removal of isopropanol by ethanol precipitation; restriction; isolation of minicircle from the plasmid backbone circle). Each box in the original figure represents a major step and the starred boxes represent the steps required in addition to a routine plasmid production protocol. (b) The present minicircle system. (i) Diagram of a minicircle producer plasmid and its conversion to minicircle DNA (pMC.ApoE.hFIX → MC.ApoE.hFIX + plasmid backbone circle → degradation). pMC.ApoE.hFIX, minicircle producer plasmid; hFIX, human factor IX; sApoE, promoter/enhancer, as described previously1; KanR, kanamycin resistance gene. Upon l-arabinose induction, ΦC31 is expressed to mediate the formation of the minicircle and plasmid backbone circle, and I-SceI to induce the destruction of the plasmid backbone circle. (ii) The genetic modifications in bacterial strain ZYCY10P3S2T (ΔendA, cp8.araE, bla.lacY A177C, 3BAD.I-SceI, 2BAD.ΦC31, 4BAD.ΦC31, 4BAD.ΦC31). 10P3S2T stands for (1) ten copies of the BAD.ΦC31 cassette, integrated at three loci of the bacterial genome: two tandem copies at the ΔendA locus (Supplementary Fig. 4b) and four copies each at araD (Supplementary Fig. 5a) and galK (Supplementary Fig. 6a); (2) three tandem copies of the BAD.I-SceI cassette, integrated at the UMU locus (Supplementary Fig. 3a); and (3) two genes constitutively expressing an l-arabinose transporter: the araE gene driven by the artificial promoter cp8, present in strain BW27783 (ref. 10), and the bla-lacY A177C cassette, integrated at the lacY locus (Supplementary Fig. 2). bla, beta-lactamase gene promoter; lacY A177C, the missense mutant of the lacY gene. (iii) Flow chart showing the present minicircle production protocol (overnight culture; addition of a minicircle induction mix comprising 1 volume of LB, 0.04 volume of 1 N NaOH and 0.02% l-arabinose; incubation at 32 °C, 250 r.p.m. for 5 h; isolation of minicircle on a plasmid purification column). (c) Stepwise genetic modification of the bacterial genome to make the current ZYCY10P3S2T strain: strain BW27783; knockout of endA, resulting in strain BWΔendA; integration of lacY A177C, resulting in strain 2T; integration of 3×BAD.I-SceI, resulting in strain 3S2T; integration of 2×BAD.ΦC31, resulting in strain 2P3S2T; integration of 4×BAD.ΦC31, resulting in strain 6P3S2T; and integration of a further 4×BAD.ΦC31, resulting in strain 10P3S2T.

To eliminate any possible contamination of the genes encoding ΦC31 integrase and I-SceI from the small amount of parental plasmids used to generate minicircle DNAs, we relocated both genes from the minicircle producer plasmid to the bacterial genome. Accordingly, we integrated three copies of the BAD.I-SceI cassette using the bacteriophage λ Red homologous recombination system12 and created the strain 3S2T (Supplementary Figs. 1 and 3a). We found that the integrated BAD.I-SceI was fully functional, as evidenced by the efficient destruction of a plasmid with eight consecutive I-SceI sites after l-arabinose induction (Supplementary Fig. 3b,c). Integration of the BAD.ΦC31 gene was less straightforward. First, the BAD.ΦC31 cassette could not be integrated using our original Red system without destroying the integrated BAD.I-SceI gene. Second, we attempted to integrate the BAD.ΦC31 cassette from a plasmid through ΦC31 integrase–mediated recombination between the attB site in the plasmid and attP pre-integrated into the genome. However, this integrant was unstable (data not shown), most likely because of a reverse reaction mediated by leaky ΦC31 integrase expression in concert with an uncharacterized cofactor13. Third, because many more copies of the BAD.ΦC31 were needed to efficiently generate minicircles, we designed a strategy using ΦC31 integrase and a second recombinase. After making the desired integrant by the ΦC31 integrase–mediated reaction, the resulting attL site was removed by the second recombinase, eliminating the possibility of the reverse excision reaction (Supplementary Figs. 4a–c, 5a,e and 6a,d).

Using the above approach, we generated a bacterial host containing two copies of the BAD.ΦC31 inserted into the deleted endA site (ΔendA), called the 2P3S2T strain (Supplementary Figs. 1 and 4a–c). Using a minicircle producer plasmid encoding a 2.2-kb transgene and 32 tandem copies of the I-SceI site in the plasmid backbone, we found that ΦC31 integrase mediated the formation of the minicircle, and I-SceI mediated destruction of the unrecombined minicircle producer plasmid and plasmid backbone circle (Supplementary Fig. 4d,e). However, the minicircle yield was low, suggesting that most of the minicircle producer plasmid was destroyed by I-SceI before minicircle formation and that additional ΦC31 integrase activity was needed to mediate greater minicircle formation. We therefore used another targeting plasmid to integrate four copies of the BAD.ΦC31 cassette into the araD locus (Supplementary Figs. 1 and 5), and four additional copies into the gene encoding galK (Supplementary Figs. 1 and 6), resulting in strains 6P3S2T and ZYCY10P3S2T carrying six and ten copies of BAD.ΦC31, respectively.

These two new strains, in concert with the minicircle producer plasmid described earlier (Supplementary Fig. 4d), enabled three improvements over the previous system (Fig. 1a)6. First, the procedure was greatly simplified and, compared to a routine plasmid preparation, required only an additional temperature change and 5-h incubation
after addition of l-arabinose (Fig. 1b). Second, the yield of three minicircles, with a size range of 2.2–6.0 kb, was 3.4–4.8 mg per 1,000 ml of overnight culture, ~3- to 5-times higher (Fig. 2e) than with our previous minicircle-producing system (Fig. 1a). In addition, compared with the previous minicircle production protocol, there were 10-times fewer contaminating plasmid DNAs, ranging from 0.4% to 1.5% of the minicircle preparation as determined by qPCR (Online Methods, qPCR section)6. On a molar basis, the yield of minicircle was 20–70% higher than that of the minicircle producer plasmid (Fig. 2e). Third, the cost of minicircle production was similar to that of a standard plasmid preparation. These production improvements, together with their superior expression profiles, make it feasible for minicircle DNA vectors to be used in place of plasmid DNAs in mammalian expression studies.

Our efforts to optimize minicircle production led us to develop an improved bacterial genome modification strategy. The enhancements include: (i) the use of a circular integrating plasmid instead of linear DNA, allowing repeated integration of the same or different DNA sequences of up to 10 kb in selected targets (Supplementary Figs. 4b, 5a and 6a); (ii) inclusion of a second recombinase, allowing selective removal of unwanted sequences, such as the hybrid sequences responsible for the reverse recombinase reaction, along with the useless or harmful plasmid backbone DNAs, from the genome (Supplementary Figs. 4b, 5a and 6a); and (iii) the use of the TPin/9attB.9attP recombination system (Supplementary Figs. 5a and 6a), eliminating the inherent problems with the FLP/FRT system, in which the uncontrolled reaction between the substrate and product FRTs makes repeated integration virtually impossible.
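The stabilization logic described above — integrase recombination turns attB × attP into attL + attR, the reverse excision reaction needs both hybrid sites, and deleting attL with a second recombinase therefore locks the integrant in place — can be sketched as a toy state model. This is a hedged illustration of the site logic only; the function names are ours and nothing here models the actual enzymology.

```python
# Toy model of the dual-recombinase strategy: PhiC31 integrase turns
# attB x attP into attL + attR; the reverse (excision) reaction needs
# BOTH hybrid sites, so deleting attL with a second recombinase
# (FLP or TPin in the paper) leaves excision without a substrate pair.
# All names are illustrative, not taken from any published code.

def phiC31_integrate(sites):
    """Forward reaction: attB x attP -> attL + attR (integration)."""
    if {"attB", "attP"} <= sites:
        return (sites - {"attB", "attP"}) | {"attL", "attR"}
    return sites

def phiC31_excise(sites):
    """Reverse reaction (with cofactor): attL x attR -> attB + attP."""
    if {"attL", "attR"} <= sites:
        return (sites - {"attL", "attR"}) | {"attB", "attP"}
    return sites

def second_recombinase_remove_attL(sites):
    """FLP/TPin-mediated deletion of the attL-containing segment."""
    return sites - {"attL"}

# Without the second recombinase, the insertion can revert:
reverted = phiC31_excise(phiC31_integrate({"attB", "attP"}))
# With attL removed, excision has no substrate pair and the insert is stable:
stable = phiC31_excise(second_recombinase_remove_attL(
    phiC31_integrate({"attB", "attP"})))
```

Tracing the two calls shows why the strategy works: the first sequence returns to {attB, attP}, while the second is left with only attR, on which the excision step is a no-op.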
Although these bacterial gene modifications would be difficult to achieve with conventional homologous recombination methods, there are other technologies using dual site-specific recombinases for targeted integration of circular DNA into the bacterial genome and creation of a marker-less bacterial strain14. However, in our approach we were able to remove one of the two recombination hybrids (either attL or attR; Supplementary Figs. 5 and 6), disabling the reverse reaction and resulting in a stable genomic insertion. Recent studies have shown that repeated gene sequences inserted into the bacterial genome are stable for at least 80 generations15. This type of site-specific integration will broaden the ability to make stable genetic modifications in prokaryotic and eukaryotic genomes.

Figure 2 Improvement in minicircle quality and quantity. (a) Strain BW27783 produced minicircle (MC) with enhanced purity. Minicircles were produced according to the protocol described previously6. p2ΦC31.hFIX, minicircle producer plasmid (PP); BglII+EcoNI, two restriction enzymes used to cleave MC before electrophoresis; l-arab (%), percent of l-arabinose in the minicircle induction reaction; PB, plasmid backbone circle. (b) Strategy for inactivation of endA. pKanR.endA, the plasmid used to generate the PmeI-restricted targeting DNA fragment; KanR, kanamycin-resistance gene; attB and attP, the bacterial and phage attachment sites of bacteriophage ΦC31 integrase; boxed endA, PCR-generated 329- and 754-bp fragments; pBAD.Red, a plasmid expressing the bacteriophage λ homologous recombination complex (Red) under the control of araC.BAD (BAD); p2ΦC31, a complementing plasmid encoding two copies of the BAD.ΦC31 gene, one copy of the BAD.I-SceI gene and one I-SceI site; BWΔendA, a strain derived from BW27783 with endA interrupted. (c) Minicircle DNA integrity before and after disruption of the endA gene. Before disrupting endA, we repeatedly observed large variations in the degree of plasmid degradation, as shown in a and c. Because endonuclease A is a membrane-bound enzyme, it is possible that its membrane release and activation varied during plasmid preparation. 32 °C and 37 °C, the incubation temperatures; all reactions contained 1% l-arabinose. (d) Quality of the minicircle determined by gel analyses. Minicircle was made according to the simplified protocol outlined in Figure 1b and the Supplementary Protocol; DNAs were cleaved before electrophoresis. (e) Yield of minicircle producer plasmids and minicircle vector DNAs, derived from triplicate 400-ml overnight cultures; PP, minicircle producer plasmid; MC, minicircle:

Minicircle producer plasmid   Size (kb) PP/MC   Yield (mg/liter) PP/MC          Yield (E-9 mole/liter) PP/MC
pMC.RSV.hAAT                  6.1/2.2           5.84 ± 0.41 / 3.42 ± 0.87 (I)   1.60 ± 0.11 / 2.59 ± 0.66 (I)
pMC.ApoE.hFIX                 8.1/4.2           5.45 ± 0.38 / 4.83 ± 0.60 (II)  1.12 ± 0.08 / 1.91 ± 0.24 (I)
pMC.CMV.LGNSO                 9.8/6.0           5.57 ± 0.15 / 3.40 ± 0.35 (I)   0.95 ± 0.03 / 1.07 ± 0.11 (II)

Weight ratios (MC/PP) were 0.59, 0.89 and 0.61, and molar ratios (MC/PP) were 1.63, 1.71 and 1.18, respectively. Wilcoxon rank sum test comparing the yield of minicircle and its minicircle producer plasmid: (I), P < 0.05; (II), P > 0.05. We used the following formula to convert the yield from mg/l to mol/l: mol/l = [yield (mg/l) × 10E-3 g/l]/[size (kb) × 1,000 × 330 × 2 g/mol], where 330 is the average molecular weight of a nucleotide. The minicircle producer plasmid pMC.RSV.hAAT is schematically illustrated in Supplementary Figure 4d; pMC.CMV.LGNSO is described in the Constructs section of Online Methods.

Methods
Methods and any associated references are available in the online version of the paper at http://www.nature.com/naturebiotechnology/.
Note: Supplementary information is available on the Nature Biotechnology website.
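The mg/l-to-molar conversion stated in the Figure 2e legend can be written out explicitly. The function name is ours; the formula is the one from the legend (330 g/mol per nucleotide, doubled for the two strands of the circle).

```python
def dna_yield_nmol_per_l(yield_mg_per_l, size_kb):
    """Convert a double-stranded circular DNA yield from mg/l to nmol/l:
    mol/l = (yield in g/l) / (size_kb * 1,000 * 330 * 2 g/mol),
    where 330 g/mol is the average mass of one nucleotide and the
    factor of 2 accounts for both strands."""
    grams_per_mol = size_kb * 1000 * 330 * 2
    mol_per_l = (yield_mg_per_l * 1e-3) / grams_per_mol
    return mol_per_l * 1e9  # report in 10^-9 mol/l, the unit of Figure 2e

# e.g. a 2.2-kb minicircle recovered at 3.42 mg/l:
# dna_yield_nmol_per_l(3.42, 2.2) -> about 2.36 nmol/l
```

Applying the formula directly like this gives values close to, though not identical with, the tabulated molar yields, which also carry measurement error from the triplicate cultures.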
Acknowledgments
The authors would like to thank P. Valdmanis for critical review of the manuscript. This work was supported by the US National Institutes of Health (HL064274 to M.A.K.). Bacterial strain BW27783 was a gift of Jay D. Keasling of the University of California at Berkeley. Plasmid placY A177C was obtained from John E. Cronan at the University of Illinois.

AUTHOR CONTRIBUTIONS
Z.-Y.C. planned and carried out the experiments. C.-Y.H. conducted a number of the experiments. M.A.K. and Z.-Y.C. discussed and planned the experimental strategies. M.A.K. and Z.-Y.C. wrote the manuscript.
COMPETING FINANCIAL INTERESTS The authors declare no competing financial interests. Published online at http://www.nature.com/naturebiotechnology/. Reprints and permissions information is available online at http://npg.nature.com/ reprintsandpermissions/.
1. Chen, Z.Y., He, C.Y., Ehrhardt, A. & Kay, M.A. Mol. Ther. 8, 495–500 (2003).
2. Huang, M. et al. Circulation 120, S230–S237 (2009).
3. Jia, F.J. et al. Nat. Methods 7, 197–199 (2010).
4. Riu, E., Chen, Z.Y., Xu, H., He, C.Y. & Kay, M.A. Mol. Ther. 15, 1348–1355 (2007).
5. Tan, Y., Li, S., Pitt, B.R. & Huang, L. Hum. Gene Ther. 10, 2153–2161 (1999).
6. Chen, Z.Y., He, C.Y. & Kay, M.A. Hum. Gene Ther. 16, 126–131 (2005).
7. Darquet, A.M., Cameron, B., Wils, P., Scherman, D. & Crouzet, J. Gene Ther. 4, 1341–1349 (1997).
8. Bigger, B.W. et al. J. Biol. Chem. 276, 23018–23027 (2001).
9. Mayrhofer, P., Blaesen, M., Schleef, M. & Jechlinger, W. J. Gene Med. 10, 1253–1269 (2008).
10. Khlebnikov, A., Datsenko, K.A., Skaug, T., Wanner, B.L. & Keasling, J.D. Microbiology 147, 3241–3247 (2001).
11. Morgan-Kiss, R.M., Wadler, C. & Cronan, J.E. Jr. Proc. Natl. Acad. Sci. USA 99, 7373–7377 (2002).
12. Yu, D. et al. Proc. Natl. Acad. Sci. USA 97, 5978–5983 (2000).
13. Thorpe, H.M. & Smith, M.C. Proc. Natl. Acad. Sci. USA 95, 5505–5510 (1998).
14. Minaeva, N.I. et al. BMC Biotechnol. 8, 63 (2008).
15. Sallam, K.I., Tamura, N., Imoto, N. & Tamura, T. Appl. Environ. Microbiol. 76, 2531–2539 (2010).
ONLINE METHODS
Materials. Bacterial strain BW27783 was a gift of J.D. Keasling10. Luria-Bertani broth (LB) powder was purchased from MP Biomedicals and Terrific Broth (TB) powder from Invitrogen. Plasmid DNA purification kits were from Qiagen.

Constructs. Plasmid pKanR.endA (Fig. 2b and Supplementary Glossary) was made by inserting the following DNA elements into the pBlueScript (Stratagene) plasmid: the kanamycin-resistance gene (KanR) flanked by the bacterial (attB) and phage (attP) attachment sites of the Streptomyces bacteriophage ΦC31 integrase, and PCR-generated 329- and 734-bp fragments of the endonuclease A gene (endA). The plasmid p2ΦC31 (Fig. 2b, Supplementary Figs. 1, 2b and 3a and Supplementary Glossary) was made by removing the human factor IX (hFIX) transgene and flanking attB and attP sites from p2ΦC31.hFIX6; the BAD.I-SceI cassette and I-SceI site were retained. To make the plasmid p3BAD.I-SceI (Supplementary Fig. 3a and Supplementary Glossary), the araC repressor gene together with three tandem copies of the BAD.I-SceI gene were inserted downstream of the attB site of the plasmid pKanR.endA (Fig. 2b and Supplementary Glossary), followed by replacement of endA with the 737- and 647-bp UMU fragments generated by PCR. The plasmid pBS.8I-SceIs (Supplementary Fig. 3b and Supplementary Glossary) was constructed by inserting eight consecutive copies of the 18-bp I-SceI site into pBlueScript. The plasmids pBAD.Red (Fig. 2b, Supplementary Figs. 1, 2a,c, 3a, 4a, 5a and 6a and Supplementary Glossary)16 and pcI587.FLP (Supplementary Figs. 1, 4a,b and Supplementary Glossary)17, which both carry the temperature-sensitive A101 origin of replication, were obtained from the E. coli Genetic Resource Center of Yale University. Plasmid pFRT.KanR.attB (Supplementary Figs. 1, 4a and Supplementary Glossary) was generated by inserting the attB site and the KanR flanked with FRT sites of flipase into pBlueScript using DNA oligonucleotides. Plasmid p2ΦC31.attP.FRT (Supplementary Figs.
1, 4b and Supplementary Glossary) was constructed using our previous minicircle-producing plasmid p2ΦC31.hFIX6 as starting material. The sApoE.hFIX and BAD.I-SceI cassettes and the I-SceI site were eliminated; the ColE1 origin and the AmpR were replaced with the KanR, the FRT and attP sites, and the plasmid replication origin R6K plus zeocin-resistance gene (Zeo) derived from the plasmid pCpG-mcs (InvivoGen). The plasmid pMC.ApoE.hFIX (Figs. 1b, 2d,e and Supplementary Glossary) was made by stepwise replacement of the AmpR and the F1 origin in the plasmid pBS.8I-SceIs (Supplementary Fig. 3b and Supplementary Glossary) with KanR and with sApoE.hFIX and the flanking attB and attP sites from the plasmid p2ΦC31.hFIX6, followed by the insertion of an additional 24 consecutive I-SceI recognition sites. The plasmid pMC.CMV.LGNSO (Fig. 2d,e) was made by replacing the attB.RSV.hAAT.attP fragment in the plasmid pMC.RSV.hAAT (Fig. 2d,e, Supplementary Fig. 4d and Supplementary Glossary) with the fragment flanked with the attB and attP derived from the plasmid p2ΦC31.LGNSO, as described previously3. The plasmid placY.TetR (Supplementary Figs. 1, 2a and Supplementary Glossary) was made by inserting the TetR sequence from pACYC184 (New England Biolabs), flanked with PCR-generated 425- and 227-bp fragments of the z and a lactose operon genes, respectively. Plasmid pbla.lacY A177C (Supplementary Figs. 1, 2a and Supplementary Glossary) was made by inserting the beta-lactamase gene promoter (bla) derived from pBlueScript fused with the lacY A177C cDNA derived from plasmid placY A177C obtained from J.E. Cronan11. Plasmid p9attP.TetR.attB (Supplementary Figs. 5a, 6a and Supplementary Glossary) was made by replacing the z and a fragments in the plasmid placY.TetR with the attachment site (9attP) from bacteriophage TP901-1 (ref. 18) and attB. Plasmid p4ΦC31.attP.9attB (Supplementary Figs.
5a, 6a and Supplementary Glossary) was generated by replacing the FRT and ZEO.R6K sequences of p2ΦC31.attP.FRT (Supplementary Figs. 1, 4b and Supplementary Glossary) with the bacterial attachment site 9attB of bacteriophage TP901-1 and A101 from plasmid pBAD.Red16, followed by insertion of two additional copies of the BAD.ΦC31 sequence.

Insertional inactivation of the endonuclease A gene in BW27783 (Fig. 2b). This was achieved by Red-mediated homologous recombination between a linear DNA and the endA gene. To do this, BW27783 cells were transformed with pBAD.Red. Cells from one transformed colony were used to make competent cells, as described16. Briefly, the cells were cultured in 25 ml of low-salt
LB containing ampicillin (50 μg/ml) and 1% l-arabinose and incubated at 32 °C with shaking at 250 r.p.m. until the OD600 was 0.5. The competent cells were immediately transformed with 50 ng of the PmeI-restricted targeting DNA fragment prepared from pKanR.endA, cultured at 32 °C for 1 h, spread onto a plate containing kanamycin (Kan, 25 μg/ml) and incubated at 43 °C overnight. To select the KanR+ colonies free of pBAD.Red, eight KanR+ colonies were cultured in 500 μl LB with Kan at 43 °C for 30 min and 1 μl of each was loaded onto antibiotic-free, Kan+ and Amp+ plates, which were incubated at 43 °C overnight. To remove the KanR gene in the strain BWΔendA.KanR, competent cells were prepared from one KanR+ colony, transformed with p2ΦC31 and spread onto an Amp+ plate; subsequently, eight cultures were begun from eight p2ΦC31-transformed colonies in 500 μl antibiotic-free LB containing 1% l-arabinose at 32 °C for 2 h to induce the removal of KanR via ΦC31 integrase–mediated recombination between the attB and attP flanking the KanR, followed by 4 additional hours at 37 °C to induce I-SceI–mediated destruction of p2ΦC31. The cells were spread onto an antibiotic-free plate. Cells from eight colonies were cultured in 500 μl antibiotic-free LB at 37 °C for 30 min, and 1 μl of each was loaded onto antibiotic-free, Kan+ and Amp+ plates to select the AmpR−/KanR− colonies. This resulted in the BWΔendA strain. The disruption of the endA gene was confirmed by DNA sequencing of the locus-specific PCR products generated from selected colonies.

Integration of the second l-arabinose transporter lacY A177C and three copies of the BAD.I-SceI gene (Supplementary Fig. 2). Replacement of the y gene (lacY) of the lactose operon with the mutant lacY A177C (Supplementary Fig. 2) and integration of three copies of the I-SceI gene into the UMU locus (Supplementary Fig. 3) were achieved by the same Red-mediated homologous recombination protocol as described for the endA gene disruption (Fig.
2b). To integrate the lacY A177C, however, knockout of the wild-type lacY was performed before knocking in the lacY A177C. Both steps were accomplished using the same Red-mediated homologous recombination. We used a DNA fragment containing TetR flanked with fragments of the z and a genes to knock out lacY, and selected the intermediate, BWΔendA.TetR, on a Tet+ (6 μg/ml) plate. This allowed the selection of the next integrant using differential KanR/TetR selection (Supplementary Figs. 2a, 5a and 6a).

Integration of the BAD.ΦC31 cassettes. Three experiments were conducted to integrate ten copies of BAD.ΦC31 into the bacterial genome: two, four and four copies into the ΔendA locus (Supplementary Fig. 4), araD gene (Supplementary Fig. 5) and galK gene (Supplementary Fig. 6), respectively. All three integration events were achieved by ΦC31 integrase–mediated, site-specific recombination between the attB or attP site in an integrating plasmid and the attB or attP in the targeted genomic site, as described for the endA gene disruption (Fig. 2b). In an earlier attempt, we found that the integrated BAD.ΦC31 was lost soon after its integration. This is probably the result of a reverse reaction mediated by an uncharacterized excisionase or cofactor that worked in concert with ΦC31 integrase produced by leaky expression from the integrated BAD.ΦC31. To stabilize the integrant, we designed a double recombination strategy: ΦC31 integrase was used to mediate the integration event, followed by FLP- or TPin-mediated recombination to remove the attL. This eliminates the possibility of a reverse reaction between the attL and attR. For integration of the 2BAD.ΦC31 (Supplementary Fig. 4), we used the Red-mediated homologous recombination strategy with a linear DNA containing an attB site, a KanR gene and two flanking FRTs for insertion into the ΔendA locus.
To remove the KanR, cells from one 3S2T.KanR colony were transformed with pcI587.FLP, and cells from a transformed colony were grown in 5 ml of antibiotic-free LB at 42 °C for 8 h. This induced the expression of FLP to mediate recombination between the two FRTs, resulting in the elimination of KanR followed by the loss of the temperature-sensitive pcI587.FLP. Subsequently, the cells, named 3S2T.attB, were transformed with p2ΦC31.attP.FRT. Cells from one transformed colony were grown in 2 ml of LB containing 1% l-arabinose at 32 °C for 2 h, which induced the expression of ΦC31 integrase to mediate recombination between the attB in the ΔendA locus and the attP in the plasmid. The same FLP-mediated FRT-FRT recombination was conducted to remove the attL, R6K.Zeo and KanR, resulting in strain 2P3S2T (Supplementary Fig. 4b). The integrated 2BAD.ΦC31 was confirmed by DNA sequencing of the locus-specific PCR product (Supplementary Fig. 4c)
and by demonstrating its function in mediating minicircle formation (Supplementary Fig. 4e). Integration of the four copies of the BAD.ΦC31 gene into the araD (Supplementary Fig. 5) and galK (Supplementary Fig. 6) loci, respectively, was also mediated by ΦC31 integrase; however, the removal of the attL was mediated by bacteriophage TP901-1 integrase (TPin). To do this, we used the Red recombination system to integrate a DNA fragment encoding TetR flanked by an attB and a phage attachment site of TPin (9attP) into the araD or galK site, and confirmed the integrant by DNA sequencing of the integrant-specific PCR product (Supplementary Figs. 5b and 6b). We integrated p4ΦC31.attP.9attB into the modified araD or galK locus following the same procedure as used for integrating p2ΦC31.attP.FRT (Supplementary Fig. 4), with modifications. We selected the colonies with integrants using plates containing both Tet and Kan, transformed the selected intermediates 6P3S2T.KanR.TetR or 10P3S2T.KanR.TetR with plasmid pBAD.TPin, and screened the resulting colonies using triple antibiotic resistance (Tet, Kan and Amp). We grew the cells from one colony in 2 ml of antibiotic-free LB containing 1% l-arabinose, with shaking (250 r.p.m.) at 43 °C for 6–8 h before spreading onto an antibiotic-free plate and incubating at 43 °C overnight. l-arabinose induced expression of both recombinases; however, the incubation temperature allowed significant TPin integrase activity18 but little ΦC31 integrase activity, resulting in TPin integrase-mediated removal of the attL and selection of colonies with the desired integrants, 6P3S2T (Supplementary Fig. 5a,d) and ZYCY10P3S2T (Supplementary Fig. 6a,c).
Minicircle production protocol. Our present minicircle production system, the strain ZYCY10P3S2T plus the minicircle producer plasmid pMC.hFIX or pMC.RSV.hAAT, allowed a greatly simplified minicircle production protocol (Fig. 1b).
On day 1, we inoculated cells from one transformed colony into 5 ml of TB (pH 7.0) with Kan (50 μg/ml) and incubated at 37 °C with shaking at 250 r.p.m. Later that day, we amplified the bacteria by adding 100 μl of culture to every 400 ml of TB containing Kan (50 μg/ml) and continued incubation for 16 to 18 h. For the yield comparison study (Fig. 2e), a 400-ml overnight culture was used to prepare intact plasmid DNA. At the end of the culture period, the A600 was 3.5–4.2 and the pH was ~6.5. On day 2, we prepared a minicircle induction mix comprising 400 ml fresh LB, 16 ml 1 N sodium hydroxide and 0.4 ml 20% l-arabinose, combined it with a 400-ml overnight culture, and incubated the culture at 32 °C with shaking at 250 r.p.m. for an additional 5 h. We used Qiagen plasmid purification kits to isolate minicircle DNA from bacterial lysates following the manufacturer's protocol with modifications. For every 400-ml overnight culture, we used a 2,500 column and 100 ml each of buffers P1, P2 and P3 to ensure complete resuspension and lysis of the bacteria and a high yield of minicircle DNA vector. A step-by-step protocol is provided in the Supplementary Protocol.
qPCR determination of impurity DNAs. The presence of impurity DNAs (the unrecombined minicircle producer plasmid and the plasmid backbone circle) in the three minicircle preparations was determined by qPCR as described in the legends of Figure 2d,e. To optimize DNA-polymerase activity, all DNA templates were digested with AflII plus XhoI, which flank the ColE1 origin, before qPCR. The minicircle producer plasmid (Supplementary Fig. 7) without a transgene expression cassette was used as a standard. The PCR primers, 5′-TCCTGTTACCAGTGGCTGCT and 5′-AGTTCGGTGTAGGTCGTTCG, were specific for the ColE1 DNA origin shared by all templates. qPCR was performed using the Qiagen Quant SYBR Green RT-PCR kit with a Corbett system (Corbett). The program included a denaturing step of 94 °C for 10 min, followed by 25 cycles of 94 °C for 20 s, 55 °C for 15 s and 68 °C for 20 s.
16. Datsenko, K.A. & Wanner, B.L. Proc. Natl. Acad. Sci. USA 97, 6640–6645 (2000).
17. Cherepanov, P.P. & Wackernagel, W. Gene 158, 9–14 (1995).
18. Stoll, S.M., Ginsburg, D.S. & Calos, M.P. J. Bacteriol. 184, 3657–3663 (2002).
doi:10.1038/nbt.1708
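The qPCR quantification above compares unknown samples against a dilution series of the standard plasmid. As a rough illustration of that arithmetic (not the authors' analysis code; all Ct values and copy numbers below are hypothetical), a linear fit of Ct against log10 copy number can be inverted to estimate template copies:

```python
import math

def fit_standard_curve(standards):
    """Least-squares fit of Ct against log10(copy number) for a
    dilution series of the standard plasmid. Returns (slope, intercept)."""
    xs = [math.log10(copies) for copies, _ in standards]
    ys = [ct for _, ct in standards]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve: observed Ct -> estimated template copies."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical dilution series: (copies, observed Ct).
standards = [(1e7, 12.0), (1e6, 15.3), (1e5, 18.6), (1e4, 21.9)]
slope, intercept = fit_standard_curve(standards)
# A perfectly efficient reaction gives a slope near -3.32
# (one extra cycle per twofold dilution).
```

The same curve can then be applied to Ct values of the impurity amplicons to express them as copies relative to the minicircle product.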
nature biotechnology
letters
High-fidelity gene synthesis by retrieval of sequence-verified DNA identified using high-throughput pyrosequencing
Mark Matzas1,5, Peer F Stähler1,5, Nathalie Kefer1, Nicole Siebelt1, Valesca Boisguérin1, Jack T Leonard1, Andreas Keller1, Cord F Stähler1, Pamela Häberle1, Baback Gharizadeh2, Farbod Babrzadeh2 & George M Church3,4

The construction of synthetic biological systems involving millions of nucleotides is limited by the lack of high-quality synthetic DNA. Consequently, the field requires advances in the accuracy and scale of chemical DNA synthesis and in the processing of longer DNA assembled from short fragments. Here we describe a highly parallel and miniaturized method, called megacloning, for obtaining high-quality DNA by using next-generation sequencing (NGS) technology as a preparative tool. We demonstrate our method by processing both chemically synthesized and microarray-derived DNA oligonucleotides with a robotic system for imaging and picking beads directly off of a high-throughput pyrosequencing platform. The method can reduce error rates by a factor of 500 compared to the starting oligonucleotide pool generated by microarray. We use DNA obtained by megacloning to assemble synthetic genes. In principle, millions of DNA fragments can be sequenced, characterized and sorted in a single megacloner run, enabling constructive biology up to the megabase scale.

Current de novo gene construction1–4 rests on 1990s technology for chemical oligonucleotide synthesis, which is costly and has error rates of 1 in 300 base pairs (bp). Errors are typically avoided by manually selecting the best Sanger sequences using electrophoretic automation. Recent innovations in programmable array technology5–8 offer the possibility to synthesize pools of thousands to millions of sequences per array with lengths comparable to conventional synthesis. The technology thus provides an extremely rich source of DNA oligonucleotides with great flexibility and superior efficiency regarding throughput and cost per bp.
However, the error rate of microarray-derived oligonucleotides is typically higher than that of conventional synthesis, making error avoidance or correction necessary. Furthermore, it is challenging to divide the derived oligonucleotide pools, which contain vast numbers of species, into subpools, as is necessary, for example, to guide the assembly of synthetic genes, chromosomal regions or whole pathways in synthetic biology. Megacloning turns NGS from a previously purely analytical method into a preparative tool, and represents a tremendous source
of sequence-verified DNA where the yield from one NGS run is equivalent to that from hundreds to thousands of Sanger sequencing runs. It therefore addresses the challenge of error reduction for both conventional and microarray-derived DNA oligonucleotides. The method yields high-quality DNA libraries containing perfect parts with desired and correct sequences in adjustable ratios, useful for a wide range of (bio)technological applications. Here we present a proof-of-concept study aimed at the retrieval of clonal DNA with known sequence from an NGS platform after sequencing (Fig. 1). The workflow comprises the input of DNA of short length, an NGS run to generate sequence-verified DNA clones, the identification of DNA with the desired sequence on the sequencer’s substrate and the retrieval of the clones of choice. The sources of the input DNA are for the most part independent of the megacloning step. For the present work, input DNA was derived from conventional oligonucleotide synthesis and from DNA microarrays. We used the GS FLX NGS platform from Roche 454 Life Sciences9,10. Owing to its open-top architecture, the accessibility of the beads and the bead size, this platform is well suited for a pick-and-place approach using micropipettes to retrieve specific beads from the 454 PicoTiterPlate (PTP) and transfer them into conventional multi-well plates for further processing. First, we established a technical setup for the controlled extraction of beads. The PTP at this stage contained a natural sample of human DNA, and extraction was done using a micropipette controlled by a microactuator device (Supplementary Data). To assess the fidelity of our setup, we compared the reads coming from the GS FLX platform with Sanger-derived sequences of DNA amplified from the extracted beads. The alignment of Sanger sequences to the NGS reads showed a 99.9% match: only two mismatches were found in 2,410 bp.
Both were putative insertions in the GS FLX reads occurring at homopolymer stretches and therefore have a high likelihood of being platform-specific base-calling artifacts9 (Supplementary Data). Next we collected a set of 319 beads with DNA clones from a microarray-derived pool initially containing 3,918 sequences. The beads for extraction were selected to ensure that their GS FLX reads perfectly matched sequences in our starting pool. The obtained DNA and the
1febit group, Heidelberg, Germany. 2Stanford Genome Technology Center, Stanford University, Palo Alto, California, USA. 3Harvard Medical School, Boston, Massachusetts, USA. 4Wyss Institute for Biologically Inspired Engineering, Boston, Massachusetts, USA. 5These authors contributed equally to this work. Correspondence should be addressed to G.M.C. ([email protected]). Received 8 June; accepted 19 October; published online 28 November 2010; doi:10.1038/nbt.1710
nature biotechnology VOLUME 28 NUMBER 12 DECEMBER 2010
untreated pool were compared after being sequenced independently on a Genome Analyzer II (Illumina GAII). We mapped 3.1% of reads from the initial (nonenriched) DNA pool without errors to the set of 319 selected sequences. In the enriched pool the fraction of reads mapping perfectly to the target sequences was 84.3%. The increase by a factor of 27.2 clearly shows a successful enrichment of selected and correct sequences (Fig. 2a,b). The analysis of reads at the level of single-target sequences also shows that for 94% of the sequences in the selected pool, 50% or more of the reads were correct (Fig. 2c). Error-prone sequences contained a high number of different species, likely caused by known sequence variations on the GAII, as reported previously11.

To test the assembly of gene fragments based on megacloned oligonucleotides stemming from a microarray, we assembled two gene fragments, each ~220 bp in length, combining either nine or ten megacloned, bead-derived amplicons in a PCR-based gene assembly reaction12,13. The obtained assemblies were cloned and Sanger sequenced. Seven out of eight clones matched the target sequence perfectly. Interestingly, one clone showed insertions and deletions all located within a region 23 bp wide. Errors in assemblies originating from inaccuracies in the starting material would be expected to be distributed evenly over the entire construct. As this sequence was otherwise free of errors, these defects were likely caused by misassembly rather than by errors in the building blocks used (Supplementary Data).

To further evaluate the capability of the megacloning approach to generate biologically functional genes, we applied the method to DNA fragments 274–394 bp in length and extracted 32 beads from the PTP carrying putatively correct sequences. These DNA fragments were the product of gene assembly reactions12 using overlapping 40-mer oligonucleotides synthesized using conventional phosphoramidite chemistry and could be assembled into a model gene encoding β-d-glucuronidase (uidA)14 (2,080 bp). Three Sanger sequences obtained from the bead DNA were totally unrelated to the expected sequence and were probably caused by wrong bead extraction or contamination. The remaining 29 sequences covered 7,195 bp and matched the expected target sequences without errors (Supplementary Data).

We then assembled the model gene out of nine DNA fragments from the set of 29 matching beads. The full-length gene construct was again checked by Sanger sequencing for absence of errors, and the biological functionality of the gene was tested in an enzymatic assay based on the conversion of X-Glc (5-bromo-4-chloro-3-indolyl-β-glucoside) substrate into blue dye15 (Supplementary Data). Besides proving the feasibility of generating biologically functional genes, this experiment mimics other applications of our technology, such as the use of sheared natural DNA and its subsequent sorting and reordering.

The absence of errors in 7,195 bp of DNA obtained from 29 extracted beads raised the question of achievable error rates from the megacloner process. Therefore we explored the potential of megacloning using a statistical model. This model considers two main sources of error, namely wrong sequencing calls and polymerase errors during DNA amplification16. The calculations estimated the chance of finding one error in our extracted sequence space of ~7,200 bp to be 29%, which is in line with our experimental findings. The theoretical error rate of bead amplicons after megacloning using the setup employed in this study was estimated to be 1 error in 21 kbp (Supplementary Data). Compared with the error rate of 1 error in 40 bp in the starting material (determined from GAII data of the initial microarray pool), this equals a 500-fold error reduction.

We further calculated the expected number of reads from NGS that match the target sequences of a given pool without errors. These numbers are crucial for estimating the complexity of pools that can be processed in one megacloner run. The resulting efficiency and cost structure are influenced mainly by three parameters: the error rate of the starting pool, the sequencing accuracy and the length of the variable sequence (Supplementary Data). With an error rate of 1 error in 40 bp and an average sequencing accuracy of 99.9% in the GS FLX, we expect a five- to tenfold cost reduction in producing DNA fragments (compared to conventional oligonucleotide synthesis) that can be achieved now with the prototype device (Supplementary Data). Because these fragments are largely free of errors, further savings can be expected in gene synthesis, as the cost of subsequent sequencing for final quality control will be lower.

In this work we demonstrated the targeted retrieval of bead-bound DNA from a high-throughput sequencer without major modifications to the sequencing process. Previous methods for error correction in DNA pools7,17–21 do not adequately handle collections of closely related oligonucleotide sequences, which occur during assembly of repetitive sequences or multi-gene family libraries. They also do not enable hierarchical assembly strategies, which are made possible by the ordered selection and physical separation of clonal DNA described here. The megacloner process has proven useful for retrieval and sorting of correct and functional sequences and for substantially increasing the portion of error-free sequences in a sample. This technology allows the processing of DNA not only from microarrays but also from a variety of other sources, such as conventional oligonucleotide synthesis or natural DNA fragments.

Figure 1 Coalescence of DNA reading and writing. The general approach begins with DNA from a variety of sources; here we used oligonucleotides synthesized on microarrays as well as from conventional sources. Next-generation sequencing is then used to read and identify oligonucleotides with desired sequences; here we used the GS FLX platform (454/Roche). Finally, the DNA is sorted and retrieved selectively, in this case with a microactuator-controlled micropipette guided by two microscope cameras. The technologies used for retrieval depend on the sequencing platform.
The pool used in our conceptual study contained ~4,000 sequences. According to our results and extrapolations, this can be increased to ~30,000 sequences per pool with the described setup. As the bead extraction is generally independent of the pool complexity, it is mainly limited by the NGS platform and the quality of the starting material (Supplementary Data). More advanced microarray formats are able to deliver libraries with even higher complexity and of sufficient quality to fit into a gene assembly process23. Therefore, with an appropriate degree of automation that reaches an extraction frequency of two or three beads per minute, which is achievable with state-of-the-art robotics, the work-up of one PTP becomes possible within days, resulting in >10^6 bp per plate. Hence, the downstream process (amplification, cleanup, assembly) will represent the next bottleneck.
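The throughput extrapolation above can be checked with back-of-envelope arithmetic. The numbers below are the ones quoted in the text (~30,000 sequences per pool, 2 to 3 beads per minute); assuming the 40-bp variable region used for the microarray oligos in this study:

```python
# Back-of-envelope check of the per-plate throughput extrapolation.
beads = 30_000          # ~30,000 sequences per pool (quoted upper estimate)
variable_bp = 40        # variable region per oligo in this study (assumed)
rate_per_min = 2.5      # mid-range of the quoted 2-3 beads per minute

total_bp = beads * variable_bp                    # 1,200,000 bp -> >10^6 bp/plate
extraction_days = beads / rate_per_min / 60 / 24  # ~8.3 days of continuous picking
```

The ~8 days of continuous robot time is consistent with "within days" only for uninterrupted operation, which underlines why the text flags downstream processing, rather than picking, as the next bottleneck.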
Megacloning could be optimized beyond the estimates in this work of one error in 21 kbp from input DNA having an error rate of 1 in 40 bp. Although such raw material can be obtained by state-of-the-art microarray technologies, the quality of input DNA could be increased further by addressing the amplification step of bead-bound DNA—for example, with higher fidelity polymerases, as the predicted contribution of the polymerase to the error rate is 4.7-fold higher than the expected error rate of the megacloner itself (Supplementary Data). Another accessible parameter for optimizing the overall process in terms of error rates is improvement in the quality of the DNA starting material. Also, optimization of sequencing accuracy could be a way to improve the ability to select correct parts after NGS. This is, however, the subject of ongoing optimization in the scope of NGS development, including ligase-based methods with improved accuracy22.
Figure 2 NGS-based comparison of untreated and megacloned oligonucleotide pools from a microarray. (a) Comparison of the initial microarray oligonucleotide pool (blue) and the pool enriched with the megacloner technology (red), based on the results of the Illumina GAII runs. The bars in set 1 represent the fraction of reads that could be mapped allowing up to three errors. Bars in set 2 show the fractions of reads matching the sequence set of the initial pool (3,918 sequences) perfectly. The difference between the blue and red bars in set 2 represents the enrichment of correct sequences by megacloning. The bars in sets 3 and 4 show the fractions of reads mapping to sequences from the selected pool of 319 sequences. The difference between the blue and red bars in set 3 shows the enrichment of the 319 selected sequences after megacloning compared with before. Blue and red bars in set 4 represent the enrichment of sequences that are in the set of 319 selected sequences and that are correct. (b) Histogram of read counts in the Illumina GAII data of the initial pool (blue) and the enriched, megacloned sample (red). Only reads mapping without errors to one of the 319 selected target sequences were taken into account. To compare the two NGS runs on the basis of read counts, we converted the numbers into parts per million (p.p.m.) of the total number of filtered reads. (c) Composition of reads from the Illumina GAII data covering the 319 selected sequences in the initial pool (top) and the enriched pool (bottom). The oligonucleotides are sorted by the fraction of correct reads. Green, correct reads; red, error-prone reads (compartments in the red bars represent single sequences with a read count of 0.1% or more of total reads for the particular sequence); light blue, sum of nonunique error-prone reads where each sequence represents less than 0.1% of total reads for the particular sequence; blue, unique reads.
In the Illumina GAII data set from the enriched sample, just 315 out of 319 selected sequences could be detected.
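The parts-per-million conversion used in panel b is a simple depth normalization so that runs of different sequencing depth become comparable; a sketch with hypothetical read counts:

```python
def to_ppm(read_count, total_filtered_reads):
    """Convert a per-sequence read count to parts-per-million of the
    run's filtered reads, making runs of different depth comparable."""
    return read_count * 1_000_000 / total_filtered_reads

# Hypothetical counts for one target sequence in the two GAII runs:
before = to_ppm(42, 12_000_000)     # initial pool
after = to_ppm(3_150, 9_000_000)    # enriched, megacloned pool
enrichment = after / before         # fold enrichment on a common scale
```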
Our next focus in the present context is improvement and automation of physical bead extraction. The workflow used in this study still involved a considerable number of manual steps and some human intervention, which was identified as the most important source of error in terms of extraction of unwanted beads. Therefore, the success rate of ~90% (29 beads out of 32) has to be increased for the bead localization and retrieval process. The method described here holds the potential to decrease production costs for synthetic DNA by one or more orders of magnitude. This source of high-quality DNA could aid the field of synthetic biology, as well as the production of libraries for antibodies or enzyme variants. In addition to synthetic sources, the sorting of natural DNA could enable the quick reconstruction or combination of DNA fragments to assemble genes, chromosomes or genomes, while simultaneously including synthetic parts of DNA. The principle that we applied here using the GS FLX technology should also be generally applicable to other available NGS platforms such as Illumina’s GAII, SOLiD, the Polonator or others. In the present context, the advantage of the GS FLX platform is the robot-accessible platform architecture and the comparably large size of the beads. Owing to the different architectures of the other platforms, such as partially closed systems and substantially smaller DNA carriers, harvesting DNA from those will require a different mechanism, such as optical approaches including photosensitive and cleavable linker molecules. The advantage of these platforms is a considerably higher number of DNA clones, which could potentially increase the capacity and throughput of the technology up to the gigabase level.

Methods
Methods and any associated references are available in the online version of the paper at http://www.nature.com/naturebiotechnology/.

Note: Supplementary information is available on the Nature Biotechnology website.
Acknowledgments
We thank B.A. Roe, F.Z. Najar and D.D. White for sequencing support, J. Jäger for technical consulting, and D. Summerer, T. Brefort, S. Kosuri and D. Levner for discussions and comments.

AUTHOR CONTRIBUTIONS
M.M., P.F.S. and G.M.C. conceptualized the megacloning method and wrote the manuscript; M.M. designed and led the study and wrote all algorithms for sequence design, data analysis, image conversion, image processing and microactuator control; M.M., N.K. and N.S. acquired the technology used, set up the microactuator device and optical systems; N.S. designed the uidA genetic model; M.M., N.K., N.S., V.B. and P.H. designed and optimized molecular biological methods; C.F.S. and J.T.L. contributed to bead picking and engineering concepts; A.K. set up the statistical models and calculations; J.T.L. contributed to the design of molecular biological steps and the acquisition of sequencing samples; B.G. and F.B. evaluated and implemented necessary changes to the sample preparation and sequencing process on the 454/Roche platform.
COMPETING FINANCIAL INTERESTS
The authors declare competing financial interests: details accompany the full-text HTML version of the paper at http://www.nature.com/naturebiotechnology/.

Published online at http://www.nature.com/naturebiotechnology/. Reprints and permissions information is available online at http://npg.nature.com/reprintsandpermissions/.

1. Endy, D. Foundations for engineering biology. Nature 438, 449–453 (2005).
2. Menzella, H.G. et al. Combinatorial polyketide biosynthesis by de novo design and rearrangement of modular polyketide synthase genes. Nat. Biotechnol. 23, 1171–1176 (2005).
3. Gibson, D.G. et al. Creation of a bacterial cell controlled by a chemically synthesized genome. Science 329, 52–56 (2010).
4. Carr, P.A. & Church, G.M. Genome engineering. Nat. Biotechnol. 27, 1151–1162 (2009).
5. Gao, X. et al. A flexible light-directed DNA chip synthesis gated by deprotection using solution photogenerated acids. Nucleic Acids Res. 29, 4744–4750 (2001).
6. Singh-Gasson, S. et al. Maskless fabrication of light-directed oligonucleotide microarrays using a digital micromirror array. Nat. Biotechnol. 17, 974–978 (1999).
7. Tian, J. et al. Accurate multiplex gene synthesis from programmable DNA microchips. Nature 432, 1050–1054 (2004).
8. Porreca, G.J. et al. Multiplex amplification of large sets of human exons. Nat. Methods 4, 931–936 (2007).
9. Margulies, M. et al. Genome sequencing in microfabricated high-density picolitre reactors. Nature 437, 376–380 (2005).
10. Wicker, T. et al. 454 sequencing put to the test using the complex genome of barley. BMC Genomics 7, 275 (2006).
11. Willenbrock, H. et al. Quantitative miRNA expression analysis: comparing microarrays with next-generation sequencing. RNA 15, 2028–2034 (2009).
12. Stemmer, W.P., Crameri, A., Ha, K.D., Brennan, T.M. & Heyneker, H.L. Single-step assembly of a gene and entire plasmid from large numbers of oligodeoxyribonucleotides. Gene 164, 49–53 (1995).
13. Richmond, K.E. et al. Amplification and assembly of chip-eluted DNA (AACED): a method for high-throughput gene synthesis. Nucleic Acids Res. 32, 5011–5018 (2004).
14. Jefferson, R.A., Burgess, S.M. & Hirsh, D. beta-Glucuronidase from Escherichia coli as a gene-fusion marker. Proc. Natl. Acad. Sci. USA 83, 8447–8451 (1986).
15. Couteaudier, Y., Daboussi, M.J., Eparvier, A., Langin, T. & Orcival, J. The GUS gene fusion system (Escherichia coli beta-D-glucuronidase gene), a useful tool in studies of root colonization by Fusarium oxysporum. Appl. Environ. Microbiol. 59, 1767–1773 (1993).
16. Cline, J., Braman, J.C. & Hogrefe, H.H. PCR fidelity of pfu DNA polymerase and other thermostable DNA polymerases. Nucleic Acids Res. 24, 3546–3551 (1996).
17. Carr, P.A. et al. Protein-mediated error correction for de novo DNA synthesis. Nucleic Acids Res. 32, e162 (2004).
18. Smith, J. & Modrich, P. Removal of polymerase-produced mutant sequences from PCR products. Proc. Natl. Acad. Sci. USA 94, 6847–6850 (1997).
19. Bang, D. & Church, G.M. Gene synthesis by circular assembly amplification. Nat. Methods 5, 37–39 (2008).
20. Fuhrmann, M., Oertel, W., Berthold, P. & Hegemann, P. Removal of mismatched bases from synthetic genes by enzymatic mismatch cleavage. Nucleic Acids Res. 33, e58 (2005).
21. Binkowski, B.F., Richmond, K.E., Kaysen, J., Sussman, M.R. & Belshaw, P.J. Correcting errors in synthetic DNA through consensus shuffling. Nucleic Acids Res. 33, e55 (2005).
22. McKernan, K.J. et al. Sequence and structural variation in a human genome uncovered by short-read, massively parallel ligation sequencing using two-base encoding. Genome Res. 19, 1527–1541 (2009).
23. Kosuri, S. et al. Scalable gene synthesis by selective amplification of DNA pools from high-fidelity microchips. Nat. Biotechnol. advance online publication, doi:10.1038/nbt.1716 (28 November 2010).
ONLINE METHODS
Oligo synthesis, sequence design, adaptors. Oligonucleotides used for this work were synthesized on programmable microarray synthesizers using light-directed synthesis methods5. Conventional oligonucleotides used for gene assembly were obtained from Sigma-Aldrich. Harvesting of oligonucleotides from microarray surfaces was performed by chemical cleavage of succinate-ester bonds using ammonia hydrochloride solution.

Amplification of microarray-derived oligonucleotide pools by emulsion PCR. Microarray-derived oligonucleotide pools were amplified before NGS using emulsion PCR24. For this purpose, universal terminal sequences were attached during synthesis and served as primer regions. Amplification primers contained adaptors for sequencing on the Illumina GAII platform and/or the 454 GS FLX (Supplementary Data).
Sequencing on the 454 GS FLX. Sample preparation for the PCR-amplified oligonucleotides was done according to the manufacturer’s protocols (Roche/454). To keep the DNA intact after sequencing, we exchanged the bleaching cleaning buffer for TE buffer before the sequencing run to avoid degradation of DNA during the final cleaning steps of the Roche sequencer.

Data analysis of 454 data and image conversion. NGS reads obtained from the GS FLX sequencer were aligned to the target sequences in the oligonucleotide pool to find the best matching sequence for every read and to perform further analysis, such as error rate estimation. Perfectly matching sequences were selected and localized in the sequencer image by using the coordinates attached to every read sequence. For sequence data analysis, we used various Python scripts built on the BioPython package. The images from the GS FLX sequencer were converted into the standard TIFF format using the Python Imaging Library.

Bead localization and extraction. After aligning the GS FLX reads to the set of target sequences, we selected reads that perfectly matched one of the desired oligonucleotide sequences in the pool. To localize beads, we located the corresponding chemiluminescent signals in the converted raw image from the GS FLX platform using the x- and y-coordinates included in the NGS raw data. To locate beads in the PTP, we identified reference points in the raw image and their corresponding positions in the PTP using suitable patterns of light signals. Based on these reference points, the bead positions on the PTP were calculated using an algorithm for scaling and rotation. The extraction was performed with a micropipette with an outer diameter of 28 μm. For pipette handling we used a three-axis microactuator (Supplementary Data). Before extraction of beads, the PTP was stored under a water layer to prevent desiccation and shrinking of the beads.
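The scaling-and-rotation mapping from image coordinates to PTP positions can be sketched as a two-point similarity transform. This is an illustrative reconstruction, not the authors' algorithm; a production version would use more reference points and a least-squares fit:

```python
def similarity_from_refs(img_pts, ptp_pts):
    """Estimate the mapping from image coordinates to physical PTP
    coordinates from two reference points, using complex arithmetic:
    z_ptp = a * z_img + b, where a encodes rotation plus isotropic
    scale and b encodes translation."""
    p0, p1 = complex(*img_pts[0]), complex(*img_pts[1])
    q0, q1 = complex(*ptp_pts[0]), complex(*ptp_pts[1])
    a = (q1 - q0) / (p1 - p0)   # rotation and scale
    b = q0 - a * p0             # translation
    return a, b

def map_bead(xy, a, b):
    """Apply the transform to one bead's image coordinates."""
    z = a * complex(*xy) + b
    return z.real, z.imag
```

With the transform fitted from two recognizable reference signals, every remaining bead position can be converted from pixel coordinates to physical micropipette targets.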
After picking, the beads were transferred immediately into a PCR vial and stored under water until further processing.

Amplification of DNA from beads. Amplification of bead-bound DNA was performed with the primers 454-A and 454-B, targeting the Roche/454 adaptors, or 'slx-fw-long' and 'slx-rev-long' for the Illumina adaptors. For amplification of fragments with 40-mer variable regions, primers were 5′-biotinylated to facilitate subsequent removal of the primer regions on a streptavidin matrix. PCR conditions: 20 mM Tris-HCl (pH 8.8), 10 mM ammonium sulfate, 10 mM potassium chloride, 2 mM magnesium sulfate, 0.1% Triton X-100, 200 μM each dNTP, 2% (vol/vol) DMSO, 1 μM each primer, 50 U/ml native Pfu polymerase (Fermentas). Cycling: initial denaturation 96 °C (2 min); then 30 cycles of 96 °C (30 s), 63 °C (30 s), 72 °C (30 s); final elongation 72 °C (3 min). After amplification, all PCR products were analyzed by PAGE (Supplementary Data) to check specificity and yield.
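The scaling-and-rotation step described under "Bead localization and extraction" amounts to fitting a similarity transform from reference points that are visible both in the raw sequencer image and in the micromanipulator's coordinate frame. The following is only a minimal sketch of that idea; all coordinates, function names and the two-point fit are illustrative assumptions, not the authors' implementation:

```python
import math

def fit_scale_rotation(ref_img, ref_stage):
    """Fit a similarity transform (uniform scale + rotation + translation)
    from two reference points seen both in the raw image and on the PTP."""
    (x1, y1), (x2, y2) = ref_img
    (u1, v1), (u2, v2) = ref_stage
    # Vector between the reference points in each coordinate system.
    dx, dy = x2 - x1, y2 - y1
    du, dv = u2 - u1, v2 - v1
    scale = math.hypot(du, dv) / math.hypot(dx, dy)
    theta = math.atan2(dv, du) - math.atan2(dy, dx)
    c, s = math.cos(theta), math.sin(theta)

    def to_stage(x, y):
        # Rotate and scale relative to the first reference point, then translate.
        rx, ry = x - x1, y - y1
        return (u1 + scale * (c * rx - s * ry),
                v1 + scale * (s * rx + c * ry))
    return to_stage

# Hypothetical reference points: image pixels vs. stage coordinates (µm).
to_stage = fit_scale_rotation([(0, 0), (100, 0)],
                              [(500.0, 500.0), (500.0, 700.0)])
mid = to_stage(50, 0)   # a bead halfway between the two reference points
```

With more than two reference points, a least-squares fit of the same transform would be more robust; two points are the minimum needed to recover scale and rotation.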
doi:10.1038/nbt.1710
For generation of the subpool containing 319 sequences, we estimated the concentrations on the basis of the gel analysis and mixed the amplicons in equimolar concentrations.

Illumina sequencing and data analysis. As the sample already contained suitable adaptors, all adaptor-ligation steps were omitted; all other steps were done according to the protocols from Illumina. The NGS raw data obtained from the Illumina GAII were processed in the following steps: (1) truncation of reads to the length of the variable regions (40 bp); (2) filtering out reads containing ambiguities (filtered reads); (3) grouping of reads with similar sequences (bins). Subsequently, for each read we identified the best-matching target sequence from the oligonucleotide pool by mapping all reads to a pseudo-genome using rapid alignment of small RNA reads (razerS) (http://www.seqan.de/projects). The pseudo-genome was generated by concatenating the variable parts of the pool sequences, separated by 40-mer poly-T stretches; the corresponding target sequence could then be determined from the matching position in the pseudo-genome. Alignments from the razerS output were used to determine insertions, deletions and substitutions. To compare the two GAII runs on the basis of the number of correct reads, we converted the read counts into parts-per-million (p.p.m.), taking the number of filtered reads before the matching procedure (after step 2) as a basis.

Assembly of gene fragments from conventional oligonucleotides. Gene fragments >200 bp were assembled from conventionally synthesized 40-mer oligonucleotides, each with a constant 20-nucleotide overlap with the adjacent oligomer. Primer regions for 454 sequencing and restriction sites for primer removal were included during assembly. The assembly reaction contained 5 nM of each construction oligonucleotide and 200 nM of the terminal primers.
PCR conditions: 1× KOD polymerase buffer (Novagen), 1.25 mM MgSO4, 40 μM each dNTP, 5 U/ml KOD Hot Start Polymerase (Novagen). Cycling for gene assembly: initial denaturation 96 °C (4 min); then 30 cycles of 96 °C (10 s), 55–40 °C touchdown (30 s), 72 °C (10 s). For subsequent amplification: 96 °C (10 s), 55 °C (30 s), 72 °C (30 s); final elongation 72 °C (3 min).

Assembly of genes from >200 bp fragments. Gene assemblies of up to 2 kbp were performed according to the protocol used for assembly of >200 bp fragments from oligonucleotides.

Primer removal and cleanup of bead amplicons before gene assembly. For removal of the primer regions, amplicons were incubated with LguI restriction endonuclease in 1× Tango buffer (Fermentas) for 3 h at 37 °C. For >200 bp fragments, the small restriction fragments containing the primer regions were removed with PCR purification columns (GenElute PCR Clean-Up, Sigma Aldrich). For cleanup of microarray-derived fragments, we used biotinylated primers during amplification of the bead DNA with 40-mer variable regions and removed the restriction products containing biotin residues on a streptavidin matrix. The 40-mer fragments were ethanol precipitated and dissolved in water before further processing.

Assembly of genes from 40-mer double-stranded DNA fragments. For the assembly of genes from 40-mer dsDNA, we used a two-stage assembly protocol comprising a primerless PCR followed by a PCR to amplify the resulting products, as described previously13.

24. Williams, R. et al. Amplification of complex gene libraries by emulsion PCR. Nat. Methods 3, 545–550 (2006).
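The pseudo-genome bookkeeping used in the Illumina data analysis above can be sketched in a few lines. This is an illustrative stand-in only: the paper maps reads with razerS, whereas here a simple exact-match lookup plays that role, and the target sequences are toy 40-mers:

```python
# Sketch of the pseudo-genome mapping (razerS performed the real alignment;
# an exact-match lookup stands in for it here).
SPACER = "T" * 40          # poly-T stretch separating pool sequences
VAR_LEN = 40               # length of each variable region

def build_pseudo_genome(targets):
    # Concatenate the variable parts of the pool, separated by poly-T.
    return SPACER.join(targets)

def target_index(match_pos):
    """Map an alignment start position in the pseudo-genome back to the
    index of the target oligo (each unit is 40 nt variable + 40 nt spacer)."""
    return match_pos // (VAR_LEN + len(SPACER))

def to_ppm(count, filtered_reads):
    # Normalize read counts to parts-per-million of quality-filtered reads,
    # as done when comparing the two GAII runs.
    return 1e6 * count / filtered_reads

targets = ["ACGT" * 10, "GGCA" * 10, "TTAC" * 10]   # toy 40-mers
genome = build_pseudo_genome(targets)
pos = genome.find("GGCA" * 10)                      # a perfect read maps here
assert target_index(pos) == 1
```

Because perfect reads can only land inside a variable region, integer division by the unit length (variable region plus spacer) recovers the target unambiguously.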
letters
Scalable gene synthesis by selective amplification of DNA pools from high-fidelity microchips
Sriram Kosuri1,2,6, Nikolai Eroshenko1,3,6, Emily M LeProust4, Michael Super1, Jeffrey Way1, Jin Billy Li2,5 & George M Church1,2

Development of cheap, high-throughput and reliable gene synthesis methods will broadly stimulate progress in biology and biotechnology1. Currently, the reliance on column-synthesized oligonucleotides as a source of DNA limits further cost reductions in gene synthesis2. Oligonucleotides from DNA microchips can reduce costs by at least an order of magnitude3–5, yet efforts to scale their use have been largely unsuccessful owing to the high error rates and complexity of the oligonucleotide mixtures. Here we use high-fidelity DNA microchips, selective oligonucleotide pool amplification, optimized gene assembly protocols and enzymatic error correction to develop a method for highly parallel gene synthesis. We tested our approach by assembling 47 genes, including 42 challenging therapeutic antibody sequences, encoding a total of ~35 kilobase pairs of DNA. These assemblies were performed from a complex background containing 13,000 oligonucleotides encoding ~2.5 megabases of DNA, which is at least 50 times larger than in previously published attempts.

The synthesis of DNA encoding regulatory elements, genes, pathways and entire genomes provides powerful ways to both test biological hypotheses and harness biology for our use. For example, from the use of oligonucleotides in deciphering the genetic code6,7 to the recent complete synthesis of a viable bacterial genome8, DNA synthesis has engendered tremendous progress in biology. Currently, almost all DNA synthesis relies on the use of phosphoramidite chemistry on controlled-pore glass (CPG) substrates. The synthesis of gene-sized fragments (500–5,000 base pairs (bp)) relies on assembling many CPG oligonucleotides together using a variety of gene synthesis techniques2. Technologies to assemble verified gene-sized fragments into much larger synthetic constructs are now fairly mature8–12.
The price of gene synthesis has fallen drastically over the last decade. However, the current commercial price of gene synthesis, ~$0.40–1.00/bp, has begun to approach the relatively stable cost of the CPG oligonucleotide precursors (~$0.10–0.20/bp)1, suggesting that oligonucleotide cost is limiting. At these prices, the construction of large gene libraries and synthetic genomes is out of reach for most. There are many
ongoing efforts to lower the cost of gene synthesis that focus on reducing the cost of the oligonucleotide precursors. For example, microfluidic oligonucleotide synthesis can reduce reagent cost by an order of magnitude and has been used for proof-of-concept gene synthesis13. Another promising route is to harness existing DNA microchips, which can produce up to a million different oligonucleotides on a single chip, as a source of DNA. Previous efforts have demonstrated that genes can be synthesized from DNA microchips3–5,14. Thus far it has not been possible to scale up these approaches, for at least three reasons. First, the error rates of oligonucleotides from DNA microchips are higher than those of traditional column-synthesized oligonucleotides. Second, the assembly of gene fragments becomes increasingly difficult as the diversity of the oligonucleotide mixture becomes larger. Finally, the potential for cross-hybridization between individual assemblies imposes strong constraints on the sequences that can be constructed on an individual microchip.

Recently, the quality of microchip-synthesized oligonucleotides was improved by controlling depurination during the synthesis process15. These arrays produce up to 55,000 200-mer oligonucleotides on a single chip and are sold as ~1–10 picomole pools of oligonucleotides, termed OLS (oligo library synthesis) pools. Several groups have used OLS pools in DNA capture technologies, promoter analysis and DNA barcode development16–20. We have previously shown that individual oligonucleotides in a 55,000-sequence 150-mer OLS pool were evenly distributed18. We reanalyzed this data set to estimate the frequency of transitions, transversions, insertions and deletions in this OLS pool (Online Methods) and found the overall error rate to be ~1/500 bp both before and after PCR amplification, suggesting that OLS pools can be used for accurate large-scale gene synthesis (Supplementary Table 1).
To test whether OLS pools could be used for DNA microchip-based gene synthesis, we designed two pools of different lengths (OLS pools 1 and 2), containing ~13,000 130-mer or 200-mer oligonucleotides, respectively. Figure 1 is a general schematic of our method for using OLS pools to perform gene synthesis. Briefly, we designed oligonucleotides that were then printed on DNA microchips and recovered as a mixed pool of oligonucleotides (OLS pool). Next, we took advantage of the long oligonucleotide lengths to
1Wyss Institute for Biologically Inspired Engineering, Boston, Massachusetts, USA. 2Department of Genetics, Harvard Medical School, Boston, Massachusetts, USA. 3Harvard School of Engineering and Applied Sciences, Cambridge, Massachusetts, USA. 4Agilent Technologies, Santa Clara, California, USA. 5Present address: Department of Genetics, Stanford University, Stanford, California, USA. 6These authors contributed equally to this work. Correspondence should be addressed to S.K. ([email protected]) or N.E. ([email protected]).
Received 3 August; accepted 25 October; published online 28 November 2010; doi:10.1038/nbt.1716
nature biotechnology VOLUME 28 NUMBER 12 DECEMBER 2010
Figure 1 Schematic for scalable gene synthesis from OLS pool 2. (a,b) Pre-designed oligonucleotides (no distinction is made between dsDNA and ssDNA in the figure) are synthesized on a DNA microchip (a) and then cleaved to make a pool of oligonucleotides (b). (c) Plate-specific primer sequences (yellow or brown) are used to amplify separate plate subpools (only two are shown), which contain DNA to assemble different genes (only three are shown for each plate subpool). (d) Assembly-specific sequences (shades of blue) are used to amplify assembly subpools that contain only the DNA required to make a single gene. (e) The primer sequences are cleaved using either type IIS restriction enzymes (resulting in dsDNA) or by DpnII/USER/λ exonuclease processing (producing ssDNA). (f) Construction primers (shown as white and black sites flanking the full assembly) are then used in an assembly PCR reaction to build a gene from each assembly subpool. Depending on the downstream application, the assembled products are then cloned either before or after an enzymatic error correction step.
[Figure 1 schematic: (a) DNA microchip → (b) OLS pool → (c) plate subpools → (d) assembly subpools → (e) processed subpools → (f) assembled genes.]
independently PCR amplify and process only those oligonucleotides required for a given gene assembly. For the 200-mer OLS pool 2, we first amplified a 'plate subpool' that contained DNA to construct up to 96 genes, and then amplified individual 'assembly subpools' to separate the oligonucleotides for an individual gene. For the 130-mer OLS pool 1, we directly amplified assembly subpools, foregoing the plate subpool step. Next, the primers used for these amplification steps were removed by either type IIS restriction endonucleases to form double-stranded DNA (dsDNA) fragments (OLS pool 2), or a combination of enzymatic steps to form single-stranded DNA (ssDNA) fragments (OLS pool 1). Finally, we constructed full-length genes using PCR assembly, performed enzymatic error correction to improve error rates where necessary, and cloned and characterized the constructs.

Obtaining subpools of only those DNA fragments required for any particular assembly is crucial for robust gene synthesis in very complex DNA backgrounds. In addition, isolating subpools relieves constraints on sequence similarity inherent in past approaches. To facilitate the partitioning of OLS pools into smaller subpools, we designed 20-mer PCR primer sets with low potential cross-hybridization ('orthogonal' primers) derived from a set of 244,000 25-mer orthogonal sequences developed for barcoding purposes21. Two separate orthogonal primer sets were constructed for the different OLS pools because of their varying requirements for downstream processing. Both sets were screened for potential cross-hybridization, low secondary structure and matched melting temperatures to construct large sets of orthogonal PCR primer pairs.
To construct genes from the OLS pools, we developed algorithms to split the sequence into overlapping segments with matching melting temperatures such that they could be later assembled by PCR. Genes on OLS pool 1 and 2 were designed differently to test the effect of different overlap lengths. We designed genes on OLS pool 1 such that the processed ssDNA pools fully overlapped to form a complete dsDNA sequence. In OLS pool 2, the processed dsDNA fragments partially overlapped by ~20 bp and could be assembled into a contiguous gene sequence using PCR. We initially constructed a set of fluorescent proteins to test the efficacy of the gene synthesis methods on both OLS pools (Fig. 2). For OLS pool 1, we designed two independent ‘assembly subpools’ that encoded GFPmut3b plus flanking orthogonal primer sequences that were later used for PCR assembly (construction primers). The two assembly subpools, GFP43 and GFP35, differed in the average overlap length (43 and 35 bp, respectively), total length (82–90 and 64–78 bases, respectively) and number of oligonucleotides (18 and 22, respectively). We also designed two subpools, control subpools 1 and 2, containing ten and five 130-mer oligonucleotides, respectively, to test amplification efficacy. The other eight subpools, containing a total of 12,945 130-mer sequences, were constructed on the same chip but were not used in this study except to provide potential sources of cross-hybridization. Each of these 12 subpools was flanked with independent orthogonal primer pairs (assembly-specific primers). As a control, we used these same algorithms to design a set of shorter column-synthesized oligonucleotides (20 bp average overlap; 35–45 bases in length; and 39 total oligonucleotides) encoding GFPmut3b and obtained them from a commercial provider (IDT). These oligonucleotides were combined to form a third pool (GFP20) that was also tested. 
(All synthesized oligonucleotides used in this study can be found in Supplementary Sequences.) Each of the four subpools (GFP43, GFP35, control 1 and control 2) was PCR amplified from the synthesized OLS pool using modified primers that facilitated downstream processing (Supplementary Figs. 1 and 2a)18. The oligonucleotides were then processed to remove the primer sequences (Supplementary Figs. 2b and 3). Briefly, lambda exonuclease was used to make the PCR products single stranded, and then uracil DNA glycosylase, endonuclease VIII and DpnII restriction endonuclease were used to cleave off the assembly-specific primers. The resulting gel showed that, although the reaction was efficient, unprocessed oligonucleotide still remained. In addition, we observed spurious cleavage by DpnII, likely due to the extensive overlap within the subpool that is inherent in the gene synthesis process. We assembled the GFP43, GFP35 and GFP20 subpools using PCR, which resulted in GFP-sized products as well as many incorrect low-molecular-weight products (Fig. 2a).
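The design step described earlier, splitting a target sequence into overlapping segments whose overlaps have matched melting temperatures, can be sketched as a greedy tiler. The Wallace 2+4 °C rule below stands in for whatever Tm model the authors' algorithms used, and the segment length, overlap length and Tm window are illustrative values, not the published parameters:

```python
def wallace_tm(seq):
    # Wallace 2+4 rule: a crude stand-in for the melting-temperature
    # model used by real oligo-design software.
    return 2 * (seq.count("A") + seq.count("T")) + \
           4 * (seq.count("G") + seq.count("C"))

def split_into_oligos(target, seg_len=60, overlap=20, tm_window=(40, 80)):
    """Greedily tile `target` with seg_len-nt segments; adjacent segments
    share `overlap` nt, and each shared region must fall inside tm_window
    so that all junctions anneal at a common temperature."""
    step = seg_len - overlap
    oligos, start = [], 0
    while True:
        seg = target[start:start + seg_len]
        oligos.append(seg)
        if start + seg_len >= len(target):
            break
        junction = target[start + step:start + seg_len]  # shared with next segment
        lo, hi = tm_window
        if not lo <= wallace_tm(junction) <= hi:
            # A real designer would slide the boundary until the Tm fits;
            # this sketch simply rejects the design.
            raise ValueError("junction Tm outside window")
        start += step
    return oligos

parts = split_into_oligos("ACGT" * 25)   # toy 100-nt target
# Adjacent parts share `overlap` bases, so PCR assembly can stitch them back:
assert parts[0] + parts[1][20:] == "ACGT" * 25
```

In the actual designs, the overlap length itself was varied (20, 35 or 43 bp across the GFP subpools), which this fixed-step sketch does not attempt to reproduce.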
[Figure 2 appears here: gel images, panels a–d.]
Figure 2 Gene synthesis products. (a) Results of PCR assembly of GFPmut3 from two different assembly subpools (GFP43 and GFP35) that were amplified from OLS pool 1. Full-length GFPmut3 is expected to be 779 bp and is indicated with an asterisk (*); other bands show lower-molecular-weight misassembled products. (b) Gel purification and re-amplification of the full-length assembled GFPmut3. (c) Results of assembling three fluorescent proteins using the longer oligonucleotides in OLS pool 2 and a PCR assembly protocol that did not require gel isolation. (d) Results of assembling 42 variable regions of single-chain antibody fragments that contained challenging GC-rich linkers. Of the 42 assemblies, all but two (7 and 24) resulted in strong bands of the correct size; we gel isolated and re-amplified these two, resulting in bands of the correct size (Supplementary Fig. 10b). The antibody that corresponds to each number is given in Supplementary Table 3, and the amino acid sequence of the linker region used is given above each gel, with differing amino acids in red.
We gel isolated, digested and then cloned the assembly products into an expression vector (Fig. 2b and Supplementary Fig. 4). We used flow cytometry tests, manual colony counts and sequencing of individual clones to measure the error rates (Supplementary Fig. 5a,b). All three of the assays correlated well, and the error rates determined through sequencing were 1/1,500 bp, 1/1,130 bp and 1/1,350 bp for the GFP43, GFP35 and GFP20 synthesis reactions, respectively (Fig. 3 and Supplementary Table 2). These results illustrate a number of important points. First, our subpool assembly primers were sufficiently well-designed to provide stringent subpool amplification of as few as 5 oligonucleotides
[Figure 3 appears here: bar charts of (a) percentage of fluorescent cells and (b) bp per error, for GFP, fluorescent-protein and antibody assemblies from CPG oligos (IDT), OLS pool 1 and OLS pool 2, with and without ErrASE treatment.]
out of a 12,995-oligonucleotide background. Second, the relative quantities of the oligonucleotides in the assembly subpools were sufficient to allow PCR assembly. Third, the error rates from 130-mer OLS pools were sufficiently low to construct gene-sized fragments (717 bp) such that >50% of the sequences were perfect; in fact, the error rates from both the GFP43 and GFP35 assemblies were indistinguishable from those of the column-synthesized GFP20 assemblies. Fourth, our data show that the level of fluorescence of our gene assemblies correlated with the number of constructs with perfect sequence, providing a useful screen to test fluorescent gene assemblies in OLS pool 2 (Supplementary Fig. 6). Finally, although PCR assembly was able to generate full-length product, many smaller misassembled products were also formed, requiring the use of difficult-to-automate gel isolation steps.

In OLS pool 2, we designed 836 assembly subpools split into 11 plate subpools, encoding 2,456,706 bases of oligonucleotides that could potentially result in 869,125 bp of final assembled sequence. We first constructed three fluorescent proteins to test assembly protocols in OLS pool 2: mTFP1, mCitrine and mApple. We found that the PCR assembly protocols developed for the ssDNA subpools in OLS pool 1 produced only short (<200 bp) misassemblies when applied to the dsDNA subpools in OLS pool 2. We tested over 1,000 assembly PCR conditions, varying parameters such as DNA concentration, annealing temperatures, cycle numbers, polymerase choice and buffer conditions. Using the best protocol (Supplementary Note), we assembled the genes encoding the three proteins with no detectable misassemblies, thereby removing the need for gel isolation (Fig. 2c and Supplementary Fig. 7a,b). Cloning followed by flow cytometry screening showed that 6.8%, 7.5% and 6.8% of the cells were fluorescent for the mTFP1, mCitrine and mApple assemblies, respectively (Fig. 3a).

Assuming ~6% correct sequence per construct and no selection against errors in the assembly process, the error rate was ~1/250 bp for the 200-mer OLS pool 2, significantly above the estimates for the 130-mer OLS pool 1 (~1/1,000 bp) and the sequenced 55,000-sequence 150-mer OLS pool (~1/500 bp). This is not completely unexpected, as the amount of depurination depends on the number of deprotection steps during synthesis and thus on the oligonucleotide length. Despite the higher error rate, the 200-mer OLS pool 2 offered several advantages. First, the extensive overlaps designed into OLS pool 1 caused spurious processing of the primers from the assembly subpools; the use of type IIS restriction endonucleases to process primers into dsDNA resulted in more robust processing. Second, the use of two amplification steps conserves chip-eluted DNA, allowing future scaling of the gene synthesis process (Supplementary Note). Third, the assemblies of OLS pool 1 produced many smaller bands and required lower-throughput gel isolation procedures, possibly because of mispriming during PCR assembly caused by the long overlap lengths used in the design process. The assemblies in OLS pool 2 used much shorter overlap lengths and produced no smaller-molecular-weight misassembled products.

To improve the error rates of the genes assembled from OLS pool 2, we used ErrASE, a commercially available enzyme cocktail that detects and corrects mismatched base pairs, to remove errors in the assembled fluorescent proteins. For each gene, we applied ErrASE at six different stringencies, reamplified the constructs, cloned the PCR products and rescreened the cloned genes using flow cytometry.

Figure 3 Characterization of products from OLS pools 1 and 2. (a) Percentage of fluorescent cells resulting from synthesis products derived from column-synthesized oligonucleotides (black), OLS Chip 1 subpools GFP43 and GFP35 (green) and the three fluorescent proteins produced on OLS Chip 2 with and without ErrASE treatment (blue, yellow and orange). Error bars correspond to the range of replicates from 3 (GFP20, GFP43, GFP35), 2 (GFP43 ErrASE, GFP35 ErrASE), 4 (mTFP1, mCitrine, mApple, mCitrine ErrASE) and 1 (mTFP1 ErrASE, mApple ErrASE) separate electroporations. (b) Error rates (average bp of correct sequence per error) from various synthesis products. Error bars show the expected Poisson error based on the number of errors found (±√n). Deletions of more than two consecutive bases are counted as a single error (no such errors were found in OLS pool 1).
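The ~1/250 bp figure follows from a Poisson model: if errors occur independently at per-base rate e, the fraction of error-free (fluorescent) clones of length L is p = exp(−L·e), so e = −ln(p)/L. A quick sanity check, assuming an ORF length of ~700 bp (an assumed round number; the exact construct lengths are in the supplementary material):

```python
import math

def per_base_error_rate(frac_correct, length_bp):
    # Poisson model: P(zero errors in L bases) = exp(-L * e)  =>  e = -ln(p) / L
    return -math.log(frac_correct) / length_bp

# ~6.8% fluorescent clones for a ~700 bp fluorescent-protein ORF (OLS pool 2):
rate = per_base_error_rate(0.068, 700)
bp_per_error = 1 / rate   # on the order of 250-260 bp per error
```

The same model runs the other way for the antibodies: an error rate of 1/315 bp over ~700 bp gives exp(−700/315) ≈ 0.11, consistent with the "~10% of clones should be perfect" estimate quoted later in the text.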
Improvement in the level of fluorescence increased progressively with greater ErrASE stringency. At the highest level of error correction, the fluorescence levels were 31%, 49% and 26% for mTFP1, mCitrine and mApple, respectively (Fig. 3a and Supplementary Fig. 8). We also performed the ErrASE procedure on our GFP43 and GFP35 pools from OLS pool 1, resulting in fluorescence levels of 89% and 92%, respectively (Fig. 3a and Supplementary Fig. 5c). We sequenced clones of GFP43 and GFP35 and found three errors in 21,510 sequenced bases (1/7,170 bp) and four errors in 20,076 sequenced bases (1/5,019 bp), respectively.

As a more challenging test of our DNA synthesis technology, we designed and synthesized oligonucleotides in OLS pool 2 for 42 genes encoding the variable regions of single-chain antibody fragments (scFv) corresponding to a number of well-known antibodies. We have previously had trouble synthesizing these genes through commercial gene synthesis companies. This might be partly due to the prototypical (Gly4Ser)3 linker, which is designed to maximize flexibility and allow the heavy and light V regions to assemble22. The repetitive nature and high GC content of the linker-encoding sequences often present a challenge for accurate DNA synthesis. We therefore tested three different linker sequences that varied in GC content and in the repetitiveness of the linker-encoding sequence. In addition, the high sequence homology among the antibody backbones and linkers (61% average sequence identity) represented a potential source of cross-hybridization that could interfere with assembly.

As expected, the antibody sequences did not assemble as robustly as the fluorescent proteins, so we further optimized the conditions before and after assembly (Supplementary Figs. 7c, 9 and 10a). Under the best protocol, 40 of the 42 constructs assembled to the correct size (Fig. 2d and Supplementary Table 3). The two misassembled genes displayed faint bands at the correct size, which were gel isolated and reamplified to produce strong bands of the correct size. We sequenced 15 antibodies, including representatives of all three linker types. We performed enzymatic error correction using ErrASE, gel isolated the product and finally cloned the constructs into an expression vector. One of the 15 antibodies did not clone, and another had a deleted linker region in all 21 sequenced clones; both of these antibodies were encoded with the highest-GC-content linker. The average error rate of the 14 antibodies that did clone was 1/315 bp (Fig. 3b and Supplementary Table 2); this was considerably higher than for the GFP assemblies, but still sufficient for construction of genes of this size (~10% of clones should be perfect, on average). In addition, the high sequence similarity among the antibodies, combined with the successful assembly and sequencing (which showed no instances of cross-contamination), further validates that the selective amplification is stringent enough to synthesize highly related protein sequences.

Our results show the assembly of gene-sized DNA fragments totaling ~35,000 bp from oligonucleotide pools of more than 50 kilobases. A number of key features make the process work: the use of low-error starting material, well-chosen orthogonal primers, subpool amplification of individual assemblies, optimized assembly methods and enzymatic error correction. Together, these features enabled gene assembly from oligonucleotide pools containing at least 50 times more sequences than previously reported (Supplementary Note). We describe two separate OLS pool lengths and assembly methods, each with its own advantages and disadvantages (Supplementary Fig. 1). The shorter 130-mer OLS pool 1 assemblies have lower error rates but, because there are no plate amplifications, will be harder to scale as we begin to utilize larger OLS pools.
The longer 200-mer OLS pool 2 is easier to scale but had higher error rates. The costs of oligonucleotides in both processes are <$0.01/bp of final synthesized sequence, so the dominant costs are enzymatic processing, cloning and sequence verification. Future work on reducing the cost of perfect sequence will focus on lowering sequencing costs by using cheaper next-generation sequencing technologies, or on incorporating other error-correction techniques such as PAGE selection of oligonucleotide pools or MutS-based error filtration3,23.

Methods
Methods and any associated references are available in the online version of the paper at http://www.nature.com/naturebiotechnology/.

Note: Supplementary information is available on the Nature Biotechnology website.

Acknowledgments
This work was supported by the US Office of Naval Research (N000141010144), National Human Genome Research Institute Center for Excellence in Genomics Science (P50 HG003170), Department of Energy Genomes to Life (DE-FG02-02ER63445), Defense Advanced Research Projects Agency (W911NF-08-1-0254) and the Wyss Institute for Biologically Inspired Engineering (all to G.M.C.). We thank H. Padgett for providing ErrASE and expertise during optimization and J. Boeke for advice on gene assembly protocols. We also thank S. Raman, F. Vigneault and F. Zhang for critical readings of the manuscript, G. Dantas (Washington University) for pZE21, F. Isaacs (Yale University) for pZE21G and J.S. Workman (Wyss Institute) for pSecTag2A.

AUTHOR CONTRIBUTIONS
S.K. and N.E. wrote the paper with contributions from all authors; S.K. and G.M.C. conceived the study; S.K. wrote all algorithms and designed all sequences; S.K. and N.E. designed and performed all experiments; E.L. provided the oligonucleotide libraries; M.S. and J.W. designed the single-chain versions of commercial antibodies; J.B.L.
performed the OLS high-throughput sequencing experiment and provided critical advice on the processing of subpools.
COMPETING FINANCIAL INTERESTS
The authors declare competing financial interests: details accompany the full-text HTML version of the paper at http://www.nature.com/naturebiotechnology/.
Published online at http://www.nature.com/naturebiotechnology/. Reprints and permissions information is available online at http://npg.nature.com/ reprintsandpermissions/.
1. Carr, P.A. & Church, G.M. Genome engineering. Nat. Biotechnol. 27, 1151–1162 (2009).
2. Tian, J., Ma, K. & Saaem, I. Advancing high-throughput gene synthesis technology. Mol. Biosyst. 5, 714–722 (2009).
3. Tian, J. et al. Accurate multiplex gene synthesis from programmable DNA microchips. Nature 432, 1050–1054 (2004).
4. Richmond, K.E. et al. Amplification and assembly of chip-eluted DNA (AACED): a method for high-throughput gene synthesis. Nucleic Acids Res. 32, 5011–5018 (2004).
5. Zhou, X. et al. Microfluidic PicoArray synthesis of oligodeoxynucleotides and simultaneous assembling of multiple DNA sequences. Nucleic Acids Res. 32, 5409–5417 (2004).
6. Nirenberg, M.W. & Matthaei, J.H. The dependence of cell-free protein synthesis in E. coli upon naturally occurring or synthetic polyribonucleotides. Proc. Natl. Acad. Sci. USA 47, 1588–1602 (1961).
7. Söll, D. et al. Studies on polynucleotides, XLIX. Stimulation of the binding of aminoacyl-sRNA's to ribosomes by ribotrinucleotides and a survey of codon assignments for 20 amino acids. Proc. Natl. Acad. Sci. USA 54, 1378–1385 (1965).
8. Gibson, D.G. et al. Creation of a bacterial cell controlled by a chemically synthesized genome. Science 329, 52–56 (2010).
9. Gibson, D.G. Synthesis of DNA fragments in yeast by one-step assembly of overlapping oligonucleotides. Nucleic Acids Res. 37, 6984–6990 (2009).
10. Li, M.Z. & Elledge, S.J. Harnessing homologous recombination in vitro to generate recombinant DNA via SLIC. Nat. Methods 4, 251–256 (2007).
11. Bang, D. & Church, G.M. Gene synthesis by circular assembly amplification. Nat. Methods 5, 37–39 (2008).
12. Shao, Z., Zhao, H. & Zhao, H. DNA assembler, an in vivo genetic method for rapid construction of biochemical pathways. Nucleic Acids Res. 37, e16 (2009).
13. Lee, C.-C., Snyder, T.M. & Quake, S.R. A microfluidic oligonucleotide synthesizer. Nucleic Acids Res. 38, 2514–2521 (2010).
14. Kim, C. et al. Progress in gene assembly from a MAS-driven DNA microarray. Microelectron. Eng. 83, 1613–1616 (2006).
15. LeProust, E.M. et al. Synthesis of high-quality libraries of long (150mer) oligonucleotides by a novel depurination controlled process. Nucleic Acids Res. 38, 2522–2540 (2010).
16. Patwardhan, R.P. et al. High-resolution analysis of DNA regulatory elements by synthetic saturation mutagenesis. Nat. Biotechnol. 27, 1173–1175 (2009).
17. Schlabach, M.R. et al. Synthetic design of strong promoters. Proc. Natl. Acad. Sci. USA 107, 2538–2543 (2010).
18. Li, J.B. et al. Multiplex padlock targeted sequencing reveals human hypermutable CpG variations. Genome Res. 19, 1606–1615 (2009).
19. Li, J.B. et al. Genome-wide identification of human RNA editing sites by parallel DNA capturing and sequencing. Science 324, 1210–1213 (2009).
20. Porreca, G.J. et al. Multiplex amplification of large sets of human exons. Nat. Methods 4, 931–936 (2007).
21. Xu, Q. et al. Design of 240,000 orthogonal 25mer DNA barcode probes. Proc. Natl. Acad. Sci. USA 106, 2289–2294 (2009).
22. Huston, J.S. et al. Medical applications of single-chain antibodies. Int. Rev. Immunol. 10, 195–217 (1993).
23. Carr, P.A. et al. Protein-mediated error correction for de novo DNA synthesis. Nucleic Acids Res. 32, e162 (2004).
nature biotechnology VOLUME 28 NUMBER 12 DECEMBER 2010
1299
© 2010 Nature America, Inc. All rights reserved.
ONLINE METHODS
Reanalysis of OLS pool error rates. We reanalyzed a previously published data set to determine sequencing error rates18. Briefly, the data set was derived from high-throughput sequencing, on the Illumina Genome Analyzer platform, of a pool of 53,777 150-mer OLS oligos. Two sequencing runs were performed: the first before any amplification and the second after two rounds of ten cycles of PCR (20 cycles total). As our previous analyses focused mainly on distribution effects, we reanalyzed these existing data to estimate error rates before and after PCR amplification. We realigned the data set using Exonerate to allow for gapped alignments and analysis of indels24. Specifically, we used an affine local alignment model that is equivalent to the classic Smith-Waterman-Gotoh alignment, a gap extension penalty of −5, and the full refine option to allow for dynamic programming–based optimization of the alignment. Reads were mapped solely on the base calls made by the Illumina platform. We used these alignments to count mismatches, deletions and insertions relative to the designed sequences. However, as base calling can be more error prone on next-generation platforms than in traditional Sanger-based approaches, we filtered the results to include only high-quality base calls (Phred scores of ≥30, or >99.9% call accuracy). This was accomplished by converting Illumina quality scores to Phred values using the Maq utility sol2sanger25 and compiling statistics only from base calls of Phred 30 or higher. All error rate analysis scripts were implemented in Python and are available upon request. Although this method provides an estimate of error rates, unmapped reads may have higher error rates, so the total average error rate may be underestimated. In addition, residual base-calling errors might still inflate the error rate estimate.
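The quality-score conversion and Phred-threshold filtering described above can be sketched in Python. This is an illustrative reconstruction, not the authors' actual scripts (which are stated to be available upon request): the legacy Solexa-to-Phred formula is the conversion performed by Maq's sol2sanger, and the flat per-base lists standing in for a real alignment are a simplifying assumption.

```python
import math

def solexa_to_phred(q_solexa):
    """Convert a legacy Solexa quality score to a Phred score,
    the transformation applied by Maq's sol2sanger utility."""
    return 10.0 * math.log10(1.0 + 10.0 ** (q_solexa / 10.0))

def count_errors(read, reference, quals, min_phred=30):
    """Count mismatches between an aligned read and its designed
    reference sequence, considering only base calls with
    Phred >= min_phred (i.e., >= 99.9% call accuracy)."""
    counted = errors = 0
    for base, ref, q in zip(read, reference, quals):
        if solexa_to_phred(q) < min_phred:
            continue  # skip low-confidence base calls
        counted += 1
        if base != ref:
            errors += 1
    return errors, counted

# toy example: one mismatch at a high-quality position
errs, n = count_errors("ACGTACGT", "ACGAACGT", [40] * 8)
print(errs, n)  # -> 1 8
```

The per-base error rate estimate would then be the ratio of summed errors to summed counted bases across all mapped reads.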
Finally, because base calls of this quality usually occur only in the first ten bases of a read, using only high-quality base calls might reflect error rates only at the 5′ end of the synthesized oligonucleotide.

Design and synthesis of OLS pools. The 13,000 oligos in the first OLS library (OLS pool 1) were broken up into 12 separately amplifiable subpools (assembly subpools). Each assembly subpool was defined by unique 20-bp priming sites that flanked each of the oligos in the pool. The priming sites were designed to minimize amplification of oligos not in the particular assembly subpool. This was done by designing a set of orthogonal 20-mers (assembly-specific primers) using a set of 240,000 orthogonal 25-mers21 as a seed. From these sequences we selected 20-mers with 3′ sequence ending in thymidine or GATC for the forward and reverse primers, respectively. We screened for melting temperatures of 62–64 °C and low primer secondary structure. After this additional filtering, 12 pairs of forward and reverse primers were chosen as the assembly-specific primers. The 13,000 oligos in the second OLS library (OLS pool 2) were broken up into 11 subpools corresponding to 11 sets of up to 96 assemblies (plate subpools), which were further divided into a total of 836 assembly subpools. A new set of orthogonal primers was designed similarly to the previous set (without the GATC and thymidine constraints) but further filtered to remove type IIS restriction sites, secondary structure, primer dimers and self-dimers. The final set of primer pairs was distributed among the plate-specific primers, assembly-specific primers and construction primers. See the Supplementary Methods for more detailed design information and primer sequences. OLS pools were synthesized by Agilent Technologies and are available upon signing a Collaborative Technology Development agreement with Agilent.
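The primer screen described above (20-mer length, the 3′-end constraints and a 62–64 °C melting-temperature window) can be sketched as a simple filter. The Wallace rule used here is a crude stand-in for the melting-temperature model a real pipeline would use, and the secondary-structure and dimer checks are omitted, so this filter is illustrative only.

```python
def wallace_tm(seq):
    """Wallace-rule melting temperature: 2 degC per A/T, 4 degC per G/C.
    A crude stand-in for a nearest-neighbor Tm model."""
    s = seq.upper()
    return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

def passes_filters(primer, role, tm_range=(62, 64)):
    """Screen a candidate 20-mer: forward primers must end in
    thymidine, reverse primers in the DpnII site GATC, and the
    melting temperature must fall inside tm_range."""
    if len(primer) != 20:
        return False
    if role == "forward" and not primer.endswith("T"):
        return False
    if role == "reverse" and not primer.endswith("GATC"):
        return False
    return tm_range[0] <= wallace_tm(primer) <= tm_range[1]

print(passes_filters("GCGCGCGCGCGATATATAAT", "forward"))  # -> True
print(passes_filters("GCGCGCGCGCGATATATAAT", "reverse"))  # -> False
```

In the actual design, candidates failing any criterion would be discarded and the survivors paired off as assembly-specific primers.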
Costs of OLS pools are a function of the number of unique oligos synthesized and of the length of the oligos (<$0.01 per final assembled base pair for all scales used in this study). OLS pools 1 and 2 were independently synthesized, cleaved and delivered as lyophilized ~1–10 picomole pools.

Amplification and processing of OLS subpools. Lyophilized DNA from OLS pools 1 and 2 was resuspended in 500 μl TE. Assembly subpools were amplified from 1 μl of OLS pool 1 in a 50 μl qPCR reaction using the KAPA SYBR FAST qPCR kit (Kapa Biosystems). A secondary 20 ml PCR amplification using Taq polymerase was performed from the primary amplification product. The barcode primer sites were removed using a previously described technique20. In brief, the forward primers contained a phosphorothioate bond at the 5′ end, and the last nucleotide on the 3′ end was a deoxyuridine; the reverse primers contained a DpnII recognition site (GATC) at the 3′ end and a phosphorylated 5′ end. PCR amplification was followed
by λ-exonuclease digestion of 5′-phosphorylated strands, hybridization of the 3′ primer site to its complement, and cleavage of the 5′ and 3′ primer sites using USER enzyme mix and DpnII (New England Biolabs), respectively. Plate subpools were amplified from 1 μl of OLS pool 2 in 50 μl Phusion polymerase PCR reactions. Assembly subpools were amplified from the plate subpools in 100 μl Phusion polymerase PCR reactions. A BtsI digest removed the forward and reverse primer sites. See the Supplementary Methods for more detailed protocols.

Assembly of fluorescent proteins. GFPmut3 (ref. 26) was assembled from OLS pool 1 assembly subpools by PCR. The GFP43 and GFP35 subpools were designed such that there was full overlap between neighboring oligos during assembly, with average overlaps of 43 bp and 35 bp for GFP43 and GFP35, respectively. For the first set of assemblies, 330 pg of the GFP43 subpool or 40 pg of the GFP35 subpool was used per 20 μl Phusion polymerase PCR assembly. The full-length product was gel-isolated, amplified using Phusion polymerase and cloned into pZE21 after a HindIII/KpnI digest. The second set of assemblies was built using a similar procedure, except that the assembly PCR used 170 pg or 190 pg of the GFP43 and GFP35 subpools, respectively, and the gel-isolated product was not re-amplified before cloning. Oligonucleotides for mTFP1, mCitrine and mApple were designed such that there was on average a 20 bp overlap between adjacent oligonucleotides. These proteins were built from OLS pool 2 assembly subpools by first performing a KOD polymerase pre-assembly reaction without construction primers, followed by a KOD polymerase assembly PCR in which the construction primers were included. ErrASE error correction was then performed on aliquots of the synthesis products following the manufacturer's instructions. The assembled product was digested with HindIII and KpnI and cloned into pZE21.
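The overlap-based tiling of a gene into assembly oligos can be illustrated with a simple splitter. The oligo length, overlap and placeholder sequence below are illustrative, not the authors' design pipeline; in a real assembly design, alternating oligos would also be reverse-complemented onto the opposite strand.

```python
def split_into_oligos(gene, oligo_len=150, overlap=35):
    """Tile a gene into oligos of length oligo_len such that each
    oligo overlaps its predecessor by `overlap` bases, as required
    for overlap-based PCR assembly."""
    step = oligo_len - overlap
    oligos = []
    start = 0
    while True:
        oligos.append(gene[start:start + oligo_len])
        if start + oligo_len >= len(gene):
            break
        start += step
    return oligos

gene = "A" * 400  # placeholder sequence
oligos = split_into_oligos(gene, oligo_len=150, overlap=35)
print([len(o) for o in oligos])  # -> [150, 150, 150, 55]
```

Each adjacent pair then shares a 35-base region, which primes the neighboring fragment during assembly PCR.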
Sequencing of clones was performed by Beckman Coulter Genomics. See the Supplementary Methods for more detailed protocols.

ErrASE. ErrASE (Novici Biotech) is an enzyme cocktail designed to remove errors from synthetically assembled genes. Assembled genes are denatured and re-annealed to allow the formation of heteroduplexes. A resolvase enzyme in ErrASE then recognizes and cuts at mismatched positions, and other enzymes in the cocktail remove the cut mismatched bases. The products are then reamplified by PCR to reassemble the full-length gene. Specifically, six aliquots of 10–50 ng of each assembled gene were added to 10 μl of PCR buffer (we have also tested the effects of including betaine in the buffer; see Supplementary Fig. 11). Heteroduplexes were formed by denaturing at 95 °C and slowly cooling to 0 °C. Each aliquot was then used to resuspend one of six lyophilized ErrASE mixtures of increasing stringency provided by the manufacturer. After a 1–2 h incubation at 25 °C, the assemblies were re-amplified and visualized on an agarose gel. Of the reactions that resulted in a correctly sized band, the one that used the most stringent ErrASE protocol was selected for cloning.

Flow cytometry. Fluorescent cell fractions of the cloned libraries of assembly products were quantified on a BD LSR Fortessa flow cytometer using either a 488 nm laser with a 530 nm filter (30 nm bandpass) or a 561 nm laser with a 610 nm filter (20 nm bandpass).

Synthesis of antibodies. 125 ng of each antibody assembly pool was pre-assembled in 20 μl KOD pre-assembly reactions. We then tested nine amplification protocols for the ability to amplify the 42 antibody pre-assemblies into full-length genes. We attempted to clone eight constructs from the best assembly protocol (afutuzumab, efungumab, ibalizumab, oportuzumab, panobacumab, robatumumab, ustekinumab and vedolizumab; see Supplementary Fig. 10a and Supplementary Table 3).
The eight assemblies were error-corrected using ErrASE, gel-isolated, re-amplified using Phusion polymerase, gel-isolated again, and cloned into pSecTag2A after an ApaI/SfiI digest. Sequencing was performed by Genewiz. All but oportuzumab cloned successfully. We then repeated the experiment, increasing the amount of assembly-pool DNA in the pre-assembly reaction to 400 ng. We selected a different set of eight constructs from this second set of assemblies for cloning (abagovomab,
doi:10.1038/nbt.1716
24. Slater, G.S. & Birney, E. Automated generation of heuristics for biological sequence comparison. BMC Bioinformatics 6, 31 (2005).
25. Li, H. Maq: mapping and assembly with qualities. Wellcome Trust Sanger Institute (2010).
26. Cormack, B.P., Valdivia, R.H. & Falkow, S. FACS-optimized mutants of the green fluorescent protein (GFP). Gene 173, 33–38 (1996).
alemtuzumab, ranibizumab, cetuximab, efungumab, pertuzumab, tadocizumab and trastuzumab; see Fig. 2d and Supplementary Table 3). Using the same methods as with the first set of cloned antibodies, this second set was error-corrected, gel-isolated, cloned and sequenced. See the Supplementary Methods for more detailed protocols.
letters
Rapid translocation of nanoparticles from the lung airspaces to the body
Hak Soo Choi1, Yoshitomo Ashitate1, Jeong Heon Lee1, Soon Hee Kim1, Aya Matsui1, Numpon Insin2, Moungi G Bawendi2, Manuela Semmler-Behnke3, John V Frangioni1,4,6 & Akira Tsuda5,6

Nano-size particles show promise for pulmonary drug delivery, yet their behavior after deposition in the lung remains poorly understood. In this study, a series of near-infrared (NIR) fluorescent nanoparticles were systematically varied in chemical composition, shape, size and surface charge, and their biodistribution and elimination were quantified in rat models after lung instillation. We demonstrate that nanoparticles with hydrodynamic diameter (HD) less than ≈34 nm and a noncationic surface charge translocate rapidly from the lung to mediastinal lymph nodes. Nanoparticles of HD < 6 nm can traffic rapidly from the lungs to lymph nodes and the bloodstream, and then be subsequently cleared by the kidneys. We discuss the importance of these findings for drug delivery, air pollution and carcinogenesis.

Nanoparticles have been proposed as diagnostic, therapeutic and theranostic agents for a wide variety of human diseases1–3. Delivery of nanoparticles through the lung is receiving increased attention owing to the large lung surface area available and the minimal anatomical barriers limiting access to the body4. The behavior of nanoparticles in the lung is also important for understanding the health effects of air pollution. Recent toxicological studies have confirmed that nano-size particles reach deep into the alveolar region of the lungs5,6 and can cause severe inflammatory reactions because of their large surface area–to-mass ratio6. Inhalation of nanoparticles is increasingly recognized as a major cause of adverse health effects and has an especially strong influence on the cardiovascular system and hemostasis, leading to increased cardiovascular morbidity and mortality6–8. Inhaled nanoparticles can pass from the lungs into the bloodstream and extrapulmonary organs9.
Alveolar macrophage-mediated translocation from the lung surface to the regional tracheobronchial lymph nodes has been shown for micrometer-sized particles10. However, this process takes a relatively long time, from several hours to weeks10–12. Here we study the behavior of nanoparticles in the first hour after administration to the rat lung and attempt to define the key parameters that control their translocation to draining lymph nodes, the bloodstream and the rest of the body, and their subsequent elimination
(that is, clearance). The standard approach for studying the translocation of inhaled nanoparticles and ultrafine air pollutants from the lungs to extrapulmonary compartments in animals is to perform postmortem analysis of tissues after inhalation of carbon-based particles13, radiotracers9 or neutron-activated metal particles11,14,15. Compared to these methods, real-time NIR fluorescence imaging provides several advantages for quantifying the process of nanoparticle trafficking, including a low autofluorescence background, real-time imaging, real-time overlay with anatomy and highly sensitive detection of targets several millimeters deep in tissue16. We systematically varied the chemical composition, size, shape and surface charge of NIR fluorescent nanoparticles to determine how their physicochemical properties affect trafficking across the alveolar surface barrier into tissue, and once in the tissue, their biodistribution and clearance. Enabling this study is our intraoperative fluorescence-assisted resection and exploration (FLARE) imaging system, which permits two independent channels (700 nm and 800 nm) of NIR fluorescence images to be acquired simultaneously with color video images in real time17,18. Two distinct families of nanoparticles were engineered with varying chemical composition, shape, size and surface charge (Table 1 and Fig. 1a). Inorganic/organic hybrid nanoparticles (INPs) emitted at 800 nm. Quantum dots with various core(shell) and organic coating ligands were used to produce INPs of various sizes and charges (INP1–INP5; Supplementary Fig. 1). Gel-filtration chromatography was used to measure the final HD, which ranged from 5 to 23 nm in PBS (Supplementary Fig. 2). It is important to note that the surface-charged INPs with either carboxylic (INP3 and INP5) or amino (INP4) groups showed high protein adsorption, which contributed to a significant increase in final HD when exposed to 100% serum19.
To study the effect of surface charge in detail, we engineered INP1 and INP2 to have zwitterionic and polar coatings, respectively, which eliminated serum protein binding and resulted in a constant HD in PBS and serum19,20. To engineer INPs with HD > 50 nm, we used silica nanospheres doped with quantum dots and conjugated the NIR fluorophore CW800 on the silica surface (INP6–INP9). These silica nanoparticles did not associate with serum proteins; thus, the original size was maintained after 1 h serum incubation at 37 °C. Organic
1Division of Hematology/Oncology, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA. 2Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA. 3Institute of Lung Biology and Disease, Helmholtz Center München—German Research Center for Environmental Health, Neuherberg/Munich, Germany. 4Department of Radiology, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA. 5Molecular and Integrative Physiological Sciences, Harvard School of Public Health, Boston, Massachusetts, USA. 6These authors contributed equally to this work. Correspondence should be addressed to J.V.F. ([email protected]) or A.T. ([email protected]). Received 16 April; accepted 5 October; published online 7 November 2010; doi:10.1038/nbt.1696
Table 1 Chemical and physical properties of inorganic/organic hybrid nanoparticles (INPs) and organic nanoparticles (ONPs)

Nanoparticle  Core(shell)       Organic coating  HD in PBS (nm)a  HD in serum (nm)a  Surface charge  Emission max (nm)  LN at 1 h (%ID/g)  Nonlung at 1 h (body + urine; %ID)

Inorganic/organic hybrid nanoparticles (INPs):
INP1          CdSe(ZnCdS)       Cys-CW800        5                5                  Zwitterionic    800                2.01               48.1
INP2          CdSe(ZnCdS)       PEG-CW800        9                9                  Polar           800                1.75               35.5
INP3          CdTe(ZnS)         PEG-COOH         16               27                 Anionic         800                2.03               33.8
INP4          CdTe(ZnS)         PEG-NH2          16               29                 Cationic        800                <0.02              <5.0
INP5          CdTe(ZnS)         PEG-COOH         23               38                 Anionic         800                0.05               <5.0
INP6          Silica/CdSe(ZnS)  CW800            52               56                 Polar           800                <0.02              <5.0
INP7          Silica/CdSe(ZnS)  CW800            110              110                Polar           800                <0.02              <5.0
INP8          Silica/CdSe(ZnS)  CW800            130              130                Polar           800                <0.02              <5.0
INP9          Silica/CdSe(ZnS)  CW800            320              320                Polar           800                <0.02              <5.0

Organic nanoparticles (ONPs):
ONP1          HSA               Cy5.5            7                7                  Zwitterionic    700                2.18               50.9
ONP2          mPEG20k           Cy5.5            9                9                  Polar           700                1.77b              62.5b
ONP3          PS-PAA            Cy5.5            21               34                 Anionic         700                1.88               42.2
ONP4          PS-PAA            Cy5.5            35               48                 Anionic         700                <0.02              <5.0
ONP5          PS-PAA            Cy5.5            51               68                 Anionic         700                <0.02              <5.0
ONP6          PS-PAA            Cy5.5            97               120                Anionic         700                <0.02              <5.0
ONP7          PS-PAA            Cy5.5            220              270                Anionic         700                <0.02              <5.0

HSA, human serum albumin; PS, polystyrene; PAA, polyacrylate; LN, lymph node.
aHD was measured three times in independent experiments (s.d. ≈ ±5%) and calculated in Prism by fitting an exponential decay equation, HD = A × exp(B × X), where A = 758.6; B = −0.4709; R2 = 0.9951. Translocation of nanoparticles into lymph nodes (%ID/g) and the entire body except lung (%ID) was measured 1 h after instillation and represents the mean of n = 3 animals except as noted. bn = 2.
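The one-phase exponential decay fit reported in footnote a can be reproduced with a log-linear least-squares regression; the calibration points below are synthetic placeholders (the underlying measurements are not given in this excerpt), generated to lie exactly on the reported curve so that the fit recovers the stated A and B.

```python
import math

def fit_exponential(xs, ys):
    """Fit HD = A * exp(B * x) (exponential decay when B < 0) by
    ordinary least squares on log(HD); a simple stand-in for the
    one-phase decay fit performed in Prism."""
    n = len(xs)
    lys = [math.log(y) for y in ys]
    sx, sy = sum(xs), sum(lys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * ly for x, ly in zip(xs, lys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), slope  # A, B

# synthetic placeholder points lying exactly on the reported curve
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [758.6 * math.exp(-0.4709 * x) for x in xs]
A, B = fit_exponential(xs, ys)
print(round(A, 1), round(B, 4))  # -> 758.6 -0.4709
```

With noisy real measurements, the recovered A and B would deviate from the seeded values, and the goodness of fit could be summarized by R2 as in the footnote.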
nanoparticles (ONPs) were designed to emit 700 nm NIR fluorescence, which permitted comparison of the effect of nanoparticle chemical composition with the 800 nm–emitting INPs. To investigate the role of HD in the translocation of nanoparticles from the lung to extrapulmonary compartments of the body, we systematically increased the size of the fluorescent nanoparticles from 5 to 300 nm. Translocation of INP5 (38 nm HD) and larger particles was severely restricted at 30 min after administration (Supplementary Fig. 3), which indicates that the size threshold for rapid translocation of nanoparticles from lungs to lymph nodes corresponds to an HD <38 nm in serum (Fig. 1b). Notably, a similar size threshold was found for the ONPs; that is, between 34 nm (ONP3) and 48 nm (ONP4) in HD, suggesting that the rapid translocation from lungs to mediastinal
lymph nodes is primarily governed by the size of nanoparticles, irrespective of chemical composition, shape and conformational flexibility. For charged nanoparticles, nonspecific adsorption of endogenous proteins, mostly albumins, can result in a large increase in final HD by 10–50 nm (Supplementary Fig. 2a)19,21. Below the size threshold of 34 nm, the surface charge of nanoparticles is also critical for rapid translocation from lungs to regional lymph nodes (Fig. 1c). To investigate this effect in detail, we introduced zwitterionic (INP1 and ONP1), polar (INP2 and ONP2), anionic (INP3 and ONP3) and cationic (INP4) coatings to both INPs and ONPs, and compared the charge effect. First, hydrophilic and neutral organic coatings such as zwitterionic cysteine and polar PEG ligands prevented adsorption of serum proteins19 and led to rapid
Figure 1 Schematic structures of inorganic/organic hybrid nanoparticles (INPs, 800 nm emission) and organic nanoparticles (ONPs, 700 nm emission) and their size- and charge-dependent translocation. (a) INPs comprised 800 nm NIR quantum dots composed of various core(shell) configurations and coatings (INP1–INP5, 5–40 nm HD) and silica nanospheres with incorporation of small-molecule NIR fluorophores (INP6–INP9, 50–300 nm HD) into the coating. ONPs comprised human serum albumin (HSA) conjugated with a 700 nm NIR fluorophore (ONP1, 7 nm HD), a 700 nm NIR dye-conjugated mPEG (ONP2, 9 nm HD) and spherical organic nanoparticles consisting of polystyrene core and chemically cross-linked polyacrylate chains with 700 nm fluorophore conjugation (ONP3–ONP7, 20–300 nm HD). Different charged organic ligands were used for coating the core(shell) of INPs. (b) Size-dependent translocation of INPs from lungs to lymph nodes. Four INPs were administered into lungs and their translocation was observed at 30 min after administration. Shown are representative (n = 3) images of color video (left), NIR fluorescence (middle) and a pseudo-colored merge of the two (right). Ga, gauze; Lu, lung; Mu, muscle; Tr, trachea. Arrows and red dotted circles indicate lymph node. Scale bar, 500 μm. λExc = 667 ± 11 nm and λEm = 700 ± 17.5 nm for ONPs. λExc = 760 ± 20 nm and λEm = 795 nm longpass for INPs. All NIR fluorescence images have identical exposure times and normalizations. (c) Charge-dependent translocation of nanoparticles from lungs to lymph nodes. ONP1 (zwitterionic), ONP2 (polar), INP3 (anionic) and INP4 (cationic) were administered into lungs and their translocation was observed at 30 min after administration. See b for abbreviations and experimental details.
Figure 2 Biodistribution, clearance and histological analysis of INPs in Sprague-Dawley rats. (a) Translocation into lymph node, blood and urinary excretion of INP1 (left) and INP3 (right) using real-time NIR fluorescence imaging. Each point represents the mean ± s.d. of n = 3 animals. SBR, signal-to-background ratio. (b) Frozen sections obtained from resected organs of INP1-administered Sprague-Dawley rats at 1 h after instillation. From top to bottom are representative images of lung, lymph node, kidney (arrow: cortex; arrowhead: calyces), and liver. Mu, muscle; LN+, posterior mediastinal lymph node; LN−, negative para-aortic lymph node. Scale bars, 5 mm. Shown are color video and NIR fluorescence of intact specimens (left two panels, respectively) along with representative histological images from the same organ/tissue (H&E, NIR, right two panels, respectively). Green dotted circle, bronchiole; blue dotted circle, glomerular basement membrane; red dotted circle, portal area (portal vein, hepatic artery and bile duct). Scale bars, 200 μm. All NIR fluorescence images (λExc = 760 ± 20 nm and λEm = 795 nm longpass) have identical exposure times and normalizations. (c) Quantitative biodistribution and clearance using 99mTc-conjugated INPs administered intratracheally into Sprague-Dawley rats. The small molecule TcO4− was used as a control. Translocation from lung to blood over time (top). Translocation from lung to regional lymph nodes (bottom left) 1 h after injection. Recovery of injected dose in urine, lung, body (without lungs) and total (bottom right). Each data point represents the mean ± s.d. of n = 3 animals. All values in blood curves are statistically different (ANOVA) from each other at 1 h.
translocation of nanoparticles into the mediastinal lymph nodes. A previous study has shown the importance of the surface properties of nanoparticles in hemostasis-inducing effects, such as platelet activation, which may lead to thrombosis13. On the other hand, the behavior of highly net-charged nanoparticles (charge-to-volume ratio >±1) was more complicated. As both anionic and cationic charged molecules quickly adsorb endogenous proteins in the lungs, similar migration results were expected among the batches19,21. The translocation behavior of these nanoparticles in the alveolar lining fluid was, however, very different. Carboxylic group–modified INP3 was found in the lymph nodes within 30 min after administration, whereas no significant translocation was observed for the cationic INP4 (Fig. 1c, bottom), even though they were engineered from the same batch of CdTe(CdSe) nanocrystals. As it is well known that cationic-charged materials can be taken up by cells22, sometimes causing acute cytotoxicity, it is likely that the cationic particles (INP4) were rapidly taken up by pulmonary macrophages and/or epithelial cells shortly after administration, making them unavailable for translocation to lymph nodes. As cationic nanoparticles remain in pulmonary cells for a long time, they may cause severe lung injury23. Our results are also consistent with recent reports on pulmonary drug delivery systems that suggest that surface charge affects translocation efficiency24. The majority of the nanoparticles administered in this study remained in the lungs, with small amounts rapidly appearing in the mediastinal lymph nodes within 30 min after administration. Even more rapid translocation (in minutes) from the site of deposition to lymph nodes was observed only for smaller nanoparticles.
In particular, for the smallest nanoparticles (INP1, 5 nm), migration to the mediastinal lymph nodes was very fast (within 3 min), and there was measurable nanoparticle accumulation in the kidneys and excretion into urine at 30 min after administration (Fig. 2a and Supplementary Video 1). On the other hand, INP3 (27 nm) showed slower accumulation into
lymph nodes (≈10 min) and blood (≈20 min). Most of our ultra-small nanoparticles were found in the kidneys, with the smallest ones also found in the urine (Fig. 2b and Supplementary Fig. 4). Because of the zwitterionic surface coating and small HD19,25, there were virtually no nanoparticles trapped in the hepatic clearance route, that is, the liver, bile and intestine (Fig. 2b and Supplementary Video 2). To confirm the qualitative observations made using NIR fluorescence, we labeled INPs with 99mTc-MAS3 (MAS3 is S-acetylmercaptoacetyltriserine) and repeated the experiments to better quantify nanoparticle translocation from the lung airspaces to lymph node, blood and eventually urine (Fig. 2c and Supplementary Figs. 5 and 6). Blood clearance was measured by intermittent sampling of the tail vein. The small-molecule sodium pertechnetate (TcO4−) was used as a control and showed rapid uptake into blood, consistent with direct translocation through the alveolar air-blood barrier9, excretion into urine and moderate uptake in lymph nodes at 1 h post-instillation. On the other hand, INP1 and INP3 appeared in the blood after a short lag and at concentrations inversely proportional to HD. Because lymph node uptake of nanoparticles was more pronounced than that of TcO4−, this suggests that there could be another pathway for translocation of nanoparticles from lung airspaces into blood, one involving transient passage through lymph nodes. In this study, we have systematically engineered various nanoparticles to investigate the effect of particle characteristics on the ability of nanoparticles to translocate from the lungs into the lymph nodes and bloodstream, and on their fate in vivo. Although the nanoparticles are stable and show a narrow size distribution in PBS, this study suggests that the final HDs as measured in serum, along with surface charge, are critical in determining the ultimate migration, biodistribution and clearance of nanoparticles deposited in the lungs.
The key findings of our study are: (i) a size threshold of ≈34 nm determines whether there is rapid transepithelial translocation of nanoparticles from the alveolar luminal surface into the septal interstitium, followed by quick
translocation to the regional draining lymph nodes, where further translocation into the bloodstream can occur; (ii) below this size threshold of ≈34 nm, surface charge is a major factor that influences translocation, with zwitterionic, anionic and polar surfaces being permissive and cationic surfaces being restrictive; and (iii) when HD is <6 nm and surface charge is zwitterionic, nanoparticles can enter the bloodstream quickly from the alveolar airspaces and can ultimately be cleared from the body by means of renal filtration. The latter finding represents what we believe to be an initial description of nanoparticle trafficking from the lung airspaces through the body and into the urine. These findings have implications for the fields of drug delivery, air pollution control and carcinogenesis. Our results might prove useful to investigators trying to engineer inhaled nanoparticle-based drugs. First, the data are consistent with the literature showing that large (>100 nm) cationic liposomes provide an improved therapeutic window to treat lung diseases. That is, systemic absorption is very low, presumably because these nanoparticles exceed the size and charge thresholds defined above (with whatever systemic absorption is observed probably due to liposome instability). Second, and more importantly, engineering nanoparticle-based drugs to be zwitterionic and <6 nm in HD should result in rapid and high levels of drug in the bloodstream after administration while permitting renal clearance of drug not bound to its target. And finally, noncationic nanoparticles ≤34 nm and ≥6 nm should provide high levels of drug to pulmonary lymph nodes, which could be used to deliver antibiotics and anti-inflammatory drugs to treat lung infections, tumor metastases and inflammatory conditions.
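The size/charge regimes summarized above can be encoded as a simple decision rule. This sketch compresses the reported findings into hard thresholds on serum HD and a categorical surface charge, ignores borderline behavior, and is illustrative only, not a validated predictive model.

```python
def predicted_fate(hd_serum_nm, charge):
    """Illustrative encoding of the size/charge thresholds reported
    in this study (HD measured in serum, nm). Not a validated model."""
    if charge == "cationic":
        return "retained in lung (cellular uptake)"
    if hd_serum_nm < 6:
        return "bloodstream, then renal clearance"
    if hd_serum_nm <= 34:
        return "rapid translocation to mediastinal lymph nodes"
    return "restricted translocation (remains in lung)"

print(predicted_fate(5, "zwitterionic"))  # -> bloodstream, then renal clearance
print(predicted_fate(38, "anionic"))      # -> restricted translocation (remains in lung)
```

Applied to Table 1, this rule reproduces the qualitative pattern: INP1 and ONP1 reach the blood and urine, INP3 and ONP3 reach lymph nodes, INP4 is retained, and all particles above ≈34 nm stay in the lung.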
With respect to air pollution and carcinogenesis, our data suggest that noncationic nanoparticles with HD in serum ≤34 nm are potentially the most dangerous in that they rapidly traverse the epithelial barrier, enter regional lymph nodes, and, if ≥6 nm, are not cleared efficiently by the kidneys. Thus, exposure of internal organs and tissues will be maximal. Chemical strategies that alter the size and charge of pollutants to be large (>34 nm) and cationic could potentially minimize airspace-to-body translocation and provide time for mucociliary- and/or macrophage-mediated clearance. Once in the body and lodged in lymph nodes (that is, 6 nm ≤ HD ≤ 34 nm), noncationic nanoparticles could cause inflammation and/or contribute to carcinogenesis. Smaller nanoparticles, such as those ≈5 nm in HD, are of even more concern for carcinogenesis and distal inflammation because they are capable of traveling from the lung to the bloodstream, and once in the bloodstream, can potentially reach every tissue and organ in the body26.

METHODS
Methods and any associated references are available in the online version of the paper at http://www.nature.com/naturebiotechnology/. Note: Supplementary information is available on the Nature Biotechnology website.

ACKNOWLEDGMENTS
We thank R. Oketokoun and S. Gioux for help with developing the FLARE imaging system and E.P. Lunsford of the Longwood Small Animal Imaging Facility for assistance with SPECT/CT imaging. The Biophysical Instrumentation Facility for the Study of Complex Macromolecular Systems (National Science Foundation 0070319 and US National Institutes of Health (NIH) GM68762) is gratefully acknowledged. We thank W. Liu and B.I. Ipe for providing quantum dots, L. Moffitt for editing, and L. Keys and E. Trabucchi for administrative support. This work was supported in part by NIH grants HL054885 (A.T.), HL070542 (A.T.), HL074022 (A.T.) and R01-CA-115296 (J.V.F.).
AUTHOR CONTRIBUTIONS
H.S.C., Y.A., J.H.L., S.H.K., A.M., N.I. and A.T. performed the experiments. H.S.C., M.G.B., M.S.-B., A.T. and J.V.F. reviewed, analyzed and interpreted the data. H.S.C., A.T. and J.V.F. wrote the paper. All authors discussed the results and commented on the manuscript.
COMPETING FINANCIAL INTERESTS
The authors declare competing financial interests: details accompany the full-text HTML version of the paper at http://www.nature.com/naturebiotechnology/.
Published online at http://www.nature.com/naturebiotechnology/. Reprints and permissions information is available online at http://npg.nature.com/reprintsandpermissions/.
1. Janib, S.M., Moses, A.S. & Mackay, J.A. Imaging and drug delivery using theranostic nanoparticles. Adv. Drug Deliv. Rev. 3, 1–2 (2010).
2. Zrazhevskiy, P., Sena, M. & Gao, X. Designing multifunctional quantum dots for bioimaging, detection, and drug delivery. Chem. Soc. Rev. 3, 1–2 (2010).
3. Choi, H.S. & Frangioni, J.V. Nanoparticles for biomedical imaging: Fundamentals of clinical translation. Mol. Imaging 9, 291–310 (2010).
4. Courrier, H.M., Butz, N. & Vandamme, T.F. Pulmonary drug delivery systems: recent developments and prospects. Crit. Rev. Ther. Drug Carrier Syst. 19, 425–498 (2002).
5. Tsuda, A., Rogers, R.A., Hydon, P.E. & Butler, J.P. Chaotic mixing deep in the lung. Proc. Natl. Acad. Sci. USA 99, 10173–10178 (2002).
6. Oberdorster, G. Pulmonary effects of inhaled ultrafine particles. Int. Arch. Occup. Environ. Health 74, 1–8 (2001).
7. Mills, N.L. et al. Adverse cardiovascular effects of air pollution. Nat. Clin. Pract. Cardiovasc. Med. 6, 36–44 (2009).
8. Wichmann, H.E. et al. Daily mortality and fine and ultrafine particles in Erfurt, Germany part I: role of particle number and particle mass. Res. Rep. Health Eff. Inst. November, 5–86, discussion 87–94 (2000).
9. Moller, W. et al. Deposition, retention, and translocation of ultrafine particles from the central airways and lung periphery. Am. J. Respir. Crit. Care Med. 177, 426–432 (2008).
10. Harmsen, A.G., Muggenburg, B.A., Snipes, M.B. & Bice, D.E. The role of macrophages in particle translocation from lungs to lymph nodes. Science 230, 1277–1280 (1985).
11. Semmler-Behnke, M. et al. Biodistribution of 1.4- and 18-nm gold particles in rats. Small 4, 2108–2111 (2008).
12. Geiser, M. et al. The role of macrophages in the clearance of inhaled ultrafine titanium dioxide particles. Am. J. Respir. Cell Mol. Biol. 38, 371–376 (2008).
13. Nemmar, A. et al. Ultrafine particles affect experimental thrombosis in an in vivo hamster model. Am. J. Respir. Crit. Care Med. 166, 998–1004 (2002).
14. Kreyling, W.G. et al. Translocation of ultrafine insoluble iridium particles from lung epithelium to extrapulmonary organs is size dependent but very low. J. Toxicol. Environ. Health A 65, 1513–1530 (2002).
15. Semmler, M. et al. Long-term clearance kinetics of inhaled ultrafine insoluble iridium particles from the rat lung, including transient translocation into secondary organs. Inhal. Toxicol. 16, 453–459 (2004).
16. Frangioni, J.V. In vivo near-infrared fluorescence imaging. Curr. Opin. Chem. Biol. 7, 626–634 (2003).
17. Gioux, S. et al. High-power, computer-controlled, light-emitting diode-based light sources for fluorescence imaging and image-guided surgery. Mol. Imaging 8, 156–165 (2009).
18. Troyan, S.L. et al. The FLARE™ intraoperative near-infrared fluorescence imaging system: a first-in-human clinical trial in breast cancer sentinel lymph node mapping. Ann. Surg. Oncol. 16, 2943–2952 (2009).
19. Choi, H.S. et al. Renal clearance of quantum dots. Nat. Biotechnol. 25, 1165–1170 (2007).
20. Choi, H.S. et al. Tissue- and organ-selective biodistribution of NIR fluorescent quantum dots. Nano Lett. 9, 2354–2359 (2009).
21. Burns, A.A. et al. Fluorescent silica nanoparticles with efficient urinary excretion for nanomedicine. Nano Lett. 9, 442–448 (2009).
22. Zhang, L.W. & Monteiro-Riviere, N.A. Mechanisms of quantum dot nanoparticle cellular uptake. Toxicol. Sci. 110, 138–155 (2009).
23. Brown, D.M., Wilson, M.R., MacNee, W., Stone, V. & Donaldson, K. Size-dependent proinflammatory effects of ultrafine polystyrene particles: a role for surface area and oxidative stress in the enhanced activity of ultrafines. Toxicol. Appl. Pharmacol. 175, 191–199 (2001).
24. Duncan, R. Polymer conjugates as anticancer nanomedicines. Nat. Rev. Cancer 6, 688–701 (2006).
25. Liu, W. et al. Compact cysteine-coated CdSe(ZnCdS) quantum dots for in vivo applications. J. Am. Chem. Soc. 129, 14530–14531 (2007).
26. Choi, H.S. et al. Design considerations for tumour-targeted nanoparticles. Nat. Nanotechnol. 5, 42–47 (2010).
nature biotechnology VOLUME 28 NUMBER 12 DECEMBER 2010
ONLINE METHODS
© 2010 Nature America, Inc. All rights reserved.
Synthesis of inorganic/organic nanoparticles (INPs). CdSe(ZnCdS) nanocrystals were used to produce INP1 and INP2 with cysteine and dihydrolipoic acid (DHLA)-conjugated polyethylene glycol spacer (DHLA-PEG, MW 1 kDa) coating, respectively, via modification of previously reported procedures (Supplementary Methods)19,25. The 800 nm NIR-emitting fluorophore CW800 (LI-COR) was conjugated on the quantum dot surface in PBS, pH 7.8 (labeling ratio ≈1.4). CdTe(ZnS) core(shell) NIR quantum dots were purchased from Invitrogen (QDot 800 ITK) with carboxylated PEG (INP3) or amino PEG (INP4) ligand coating27. To engineer INP5, α-amino-ω-carboxylate PEG 3.4 kDa (Nektar) was conjugated through conventional EDC chemistry. INP6 to INP9 silica nanoparticles were prepared by incorporating CdSe(ZnS) quantum dots in the silica shells and conjugating CW800 fluorophores on the surface of the nanospheres to engineer larger sizes of nanoparticles (Supplementary Methods)28.
Synthesis of organic nanoparticles (ONPs). ONP1 was prepared by a reaction of Cy5.5-NHS (Amersham Biosciences) with human serum albumin (HSA; American Red Cross) in PBS at pH 7.8, followed by purification by gel-filtration chromatography (GFC) using an Econo-Pac P6 cartridge (Bio-Rad). The labeling ratio was 2.6, estimated using the extinction coefficients of HSA (ε280nm = 32,900 M−1cm−1) and Cy5.5 (ε675nm = 19,000 M−1cm−1). ONP2 was obtained by simply mixing two equivalents of Cy5.5-NHS with amino mPEG 20 kDa (Nektar) in PBS at pH 7.8, followed by GFC purification (labeling ratio = 1). A size series of nanospheres based on polystyrene and polyacrylate block copolymers (ONP3–ONP7) was purchased from Phosphorex and conjugated covalently to Cy5.5 (5–10 fluorophores per mole of polyacrylate) to produce 700 nm NIR fluorescence emission (Supplementary Methods).
Double-lumen balloon catheter for nanoparticle administration into lungs.
Administration of nanoparticles into the lung without backflow entering the trachea and obscuring mediastinal lymph node imaging was a major challenge. To solve this problem, we developed a custom-built, double-lumen balloon catheter that blocked mucociliary flow into the trachea while animals were being ventilated (Supplementary Fig. 3). A custom-made 20 cm double-lumen balloon catheter was developed by Pelham Plastics with the following specifications: OD = 1.5 mm; a thru lumen = 0.71 mm for ventilation; a balloon lumen = 0.5 mm; inflation balloon OD = 2.5 mm. The catheter was inserted through a tracheal cannula (OD = 2.1 mm, ID = 1.8 mm) into the right mainstem bronchus and the outer balloon was inflated. The inflated balloon remained in position for 1 h to block backflow of deposited nanoparticles. The thru lumen was used for administration of nanoparticle solutions deep into the lung, followed by a puff of air during an inspiration phase to ensure delivery of a nanoparticle bolus deep into the lung. Both lungs were ventilated for 60 min.
Animal models. All animals were used under the supervision of an approved institutional protocol. Male Sprague-Dawley (SD) rats (500 to 550 g; Charles River Laboratories) were anesthetized with 65 mg/kg intraperitoneal pentobarbital and were mechanically ventilated by tracheostomy using an MRI-1 ventilator (CWE) at 80 breaths per min using room air. A specially designed double-lumen balloon catheter was placed in the right mainstem bronchus as described above, and a minimal volume (50–100 μl) of nanoparticle solution
(a mixture of 800 nm INPs and/or 700 nm ONPs) in PBS was then administered through the thru lumen into the lungs at a dose of 10 pmol/g of animal weight. After each study, animals were euthanized by intraperitoneal injection of 200 mg/kg pentobarbital, a method consistent with the recommendations of the Panel on Euthanasia of the American Veterinary Medical Association.
Real-time intraoperative NIR fluorescence imaging. The dual-channel intraoperative NIR fluorescence imaging system (FLARE) optimized for animal surgery has been described in detail previously17,18. Excitation fluence rates for white light, 700 nm NIR excitation light and 800 nm NIR excitation light were 20,000 lux, 3.5 mW/cm2 and 10 mW/cm2, respectively. The translocation of nanoparticles into lymph nodes and bladder was measured using the following method: the fluorescence (Fl) and background (BG) intensity of a region of interest over each organ was quantified using custom FLARE software at the following time points: 0, 1, 2, 5, 10, 20, 30, 40, 50 and 60 min. Signal-to-background ratio (SBR), defined as SBR = (Fl − BG)/BG, was then plotted as a function of time. For measuring nanoparticle concentration in blood, ~20 μl of blood was collected from the tail vein using glass capillary tubes, and the SBR was measured over time. At least three animals were analyzed at each time point. Statistical analysis was carried out using the unpaired Student's t-test or one-way analysis of variance (ANOVA). Results were presented as mean ± s.d. and curve fitting was performed using Prism version 4.0a software (GraphPad).
99mTc-labeling of nanoparticles and radioscintigraphic imaging. 99mTc-conjugated nanoparticles were prepared using the high-specific-activity N-hydroxysuccinimide (NHS) ester of 99mTc-MAS3, as described previously19,20,26. 50–100 μl of 99mTc-conjugated nanoparticles (100–300 μCi) were administered intratracheally into Sprague-Dawley rats. Sodium pertechnetate (TcO4−) was used as a control. Measurement of blood clearance was performed by intermittent sampling of the tail vein. Rats were euthanized at 1 h after administration. To measure total urinary excretion, the bladder was removed en masse and combined with excreted urine before measurement of radioactivity in a dose calibrator. The radioactivity of each resected major organ was measured on a Wallac Wizard (PerkinElmer) 10-detector gamma counter. Gamma radioscintigraphy was performed with a Research Digital Camera (Isocam Technologies) equipped with a 1/2″ NaI crystal, 86 photomultiplier tubes and a high-resolution (1 mm) lead collimator.
Histological analysis of tissues. Tissues were resected 1 h after administration, fixed in 10% formalin for 3 h, molded with Tissue-Tek OCT compound (Fisher Scientific) and frozen in liquid nitrogen. Frozen sections were cut to a 10 μm thickness and stained with hematoxylin and eosin (Histotec Laboratories). Lung, liver, spleen, kidney, pancreas, intestine, muscle and lymph nodes were examined by a 4-channel fluorescence microscope. The excitation and emission filters used were 650 ± 22 nm and 710 ± 25 nm for NIR 700 nm, and 760 ± 20 nm and 790 nm longpass for NIR 800 nm, respectively.
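The SBR definition above, SBR = (Fl − BG)/BG, can be sketched in a few lines of Python; the intensity values below are hypothetical, not measured data:

```python
def signal_to_background(fl, bg):
    """Signal-to-background ratio as defined in the methods:
    SBR = (Fl - BG) / BG, where Fl is the fluorescence intensity of the
    region of interest and BG the adjacent background intensity."""
    if bg <= 0:
        raise ValueError("background intensity must be positive")
    return (fl - bg) / bg

# Hypothetical intensities at the imaging time points (min) used above
timepoints = [0, 1, 2, 5, 10, 20, 30, 40, 50, 60]
fl_values = [10, 12, 15, 22, 35, 48, 55, 58, 60, 61]
bg = 10.0
sbr_curve = [signal_to_background(fl, bg) for fl in fl_values]
print(sbr_curve[0], sbr_curve[-1])  # → 0.0 5.1
```

In practice BG was measured per region of interest rather than held constant; the fixed `bg` here is a simplification.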
27. Kim, S. et al. Near-infrared fluorescent type II quantum dots for sentinel lymph node mapping. Nat. Biotechnol. 22, 93–97 (2004).
28. Insin, N. et al. Incorporation of iron oxide nanoparticles and quantum dots into silica microspheres. ACS Nano 2, 197–202 (2008).
doi:10.1038/nbt.1696
letters
Suppressing resistance to Bt cotton with sterile insect releases
Bruce E Tabashnik1, Mark S Sisterson2, Peter C Ellsworth3, Timothy J Dennehy1,4, Larry Antilla5, Leighton Liesner5, Mike Whitlow5, Robert T Staten6, Jeffrey A Fabrick7, Gopalan C Unnithan1, Alex J Yelich1, Christa Ellers-Kirk1, Virginia S Harpold1, Xianchun Li1 & Yves Carrière1
Genetically engineered crops that produce insecticidal toxins from Bacillus thuringiensis (Bt) are grown widely for pest control1. However, insect adaptation can reduce the toxins' efficacy2–5. The predominant strategy for delaying pest resistance to Bt crops requires refuges of non-Bt host plants to provide susceptible insects to mate with resistant insects2–7. Variable farmer compliance is one of the limitations of this approach. Here we report the benefits of an alternative strategy where sterile insects are released to mate with resistant insects and refuges are scarce or absent. Computer simulations show that this approach works in principle against pests with recessive or dominant inheritance of resistance. During a large-scale, four-year field deployment of this strategy in Arizona, resistance of pink bollworm (Pectinophora gossypiella) to Bt cotton did not increase. A multitactic eradication program that included the release of sterile moths reduced pink bollworm abundance by >99%, while eliminating insecticide sprays against this key invasive pest.
Transgenic cotton and corn that produce proteins from Bacillus thuringiensis (Bt) for insect control have been planted on a cumulative total of >200 million ha worldwide since their commercial introduction in 1996 (ref. 1). Although Bt crops remain effective against most pest populations, several pests have evolved resistance2–5. The main strategy for delaying pest resistance to Bt crops promotes survival of susceptible insects by providing host plants that do not produce Bt toxins6,7. These are commonly referred to as ‘refuges’.
Ideally, most of the rare, resistant insects emerging from Bt crops will mate with the relatively abundant susceptible insects from nearby refuges. If resistance is inherited as a recessive trait, the Bt crops will kill the hybrid progeny produced by such matings and evolution of resistance will be substantially slowed6,7. Retrospective evaluations of global resistance monitoring data suggest that refuges have delayed pest resistance to Bt crops3,7. In particular, theoretical and empirical analyses imply that refuges have delayed resistance in pink bollworm (Pectinophora gossypiella), one of the world’s most destructive pests of cotton7–10. This invasive
insect, which was first detected in the United States in 1917, feeds almost exclusively on cotton in some parts of the southwestern United States, including Arizona9,10. Field and greenhouse data show that transgenic cotton that produces Bt toxin Cry1Ac (Bt cotton) kills essentially 100% of susceptible pink bollworm larvae11–14. However, laboratory selection with Cry1Ac quickly produced several resistant strains of pink bollworm from Arizona that could survive on Bt cotton plants11,12,15. Furthermore, pink bollworm resistance to Bt cotton has been reported in the field in India, where farmer compliance with the refuge strategy has been low5,16. In contrast, compliance with the refuge strategy is considered a primary reason why pink bollworm susceptibility to Bt cotton did not decrease in the field in Arizona from 1997 to 2005 (refs. 8,17). As part of a coordinated, multitactic effort to eradicate pink bollworm from the southwestern United States and northern Mexico, a new strategy that replaced refuges with season-long releases of sterile pink bollworm moths was initiated in Arizona in 2006 (refs. 18,19) (Online Methods). Under this new strategy, Arizona cotton growers were permitted to plant up to 100% transgenic cotton that produces either one Bt toxin (Cry1Ac) or two Bt toxins (Cry1Ac and Cry2Ab)18,19. The concept underlying this alternative approach is that if enough sterile moths are released, resistant moths will mate mainly with sterile moths, rather than with fertile, wild moths that are either resistant or susceptible to Bt. In principle, this approach has several advantages over the refuge strategy. First, farmers could greatly reduce or eliminate planting of refuges and thus avoid associated complications and yield losses. Moreover, because matings with sterile insects do not produce fertile progeny, this approach could delay resistance that is based on either recessive or dominant inheritance. 
Unlike the refuge strategy, this approach does not require maintenance of pest populations. It is thus compatible with eradication efforts. To test the idea of delaying resistance with sterile insect releases, we conducted computer simulations and analyzed more than a decade of field data from before and after deployment of this strategy statewide in Arizona. In the computer simulations, sterile moth releases suppressed resistance to Bt cotton by decreasing the pest’s population size and
1Department of Entomology, University of Arizona, Tucson, Arizona, USA. 2USDA-ARS, San Joaquin Valley Agricultural Sciences Center, Parlier, California, USA. 3Department of Entomology, University of Arizona, Maricopa Agricultural Center, Maricopa, Arizona, USA. 4Monsanto Company, St. Louis, Missouri, USA. 5Arizona Cotton Research & Protection Council, Phoenix, Arizona, USA. 6USDA-APHIS, retired. 7USDA-ARS, US Arid Land Agricultural Research Center, Maricopa, Arizona, USA. Correspondence should be addressed to B.E.T. ([email protected]).
Received 17 August; accepted 8 October; published online 7 November 2010; doi:10.1038/nbt.1704
[Figure 1 plot: years to resistance (y axis, 0 to >20) versus refuge (%) (x axis, 0–20), for sterile release rates of none, very low and low.]
Figure 1 Computer simulations of effects of sterile moth releases on evolution of resistance to Bt cotton. We used a stochastic, spatially explicit model with a region of 400 cotton fields of 15 ha each (Supplementary Methods). Two alleles (‘r’, resistant; ‘s’, susceptible) at a single locus controlled larval survival, which was 0% for ss and rs (recessive inheritance), and 15% for rr on Bt cotton; 20.8% for ss and rs, and 17.7% for rr on non-Bt cotton. The initial r allele frequency was 0.018. The criteria for resistance were an r allele frequency >0.50 and a population size >10% of the initial population size. Each point represents the median of ten simulations. The simulated sterile moth release rate was 10 times higher in non-Bt cotton fields than in Bt cotton fields to mimic field releases. We simulated three sterile release rates: none, very low and low. Sterile release rates (in moths per ha per week) were 0.16 in Bt cotton and 1.6 in non-Bt cotton for the very low release rate, and 0.62 in Bt cotton and 6.2 in non-Bt cotton for the low release rate. The actual release rates in the field were >600 times higher than the low release rate (see text). With no refuges and the low release rate, the regional population size decreased to 0 after 2 to 4 years (indicated by asterisk). In the cases with refuges where resistance did not evolve in 20 years, the regional population persisted but the r allele frequency remained <0.025 after 20 years.
reducing the probability of mating between resistant moths (Fig. 1, Supplementary Fig. 1 and Supplementary Methods). In the simulations, when sufficient numbers of sterile moths were released, pest populations did not persist and resistance did not occur over a 20-year period (Fig. 1 and Supplementary Fig. 1). Based on experimental evidence of pink bollworm responses to Bt cotton11–13,15,17, we first modeled recessive inheritance of resistance with a fitness cost and incomplete resistance (Fig. 1 and Supplementary Table 1). With no refuges, resistance evolved in 3 years without releases of sterile moths, but populations did not persist and resistance did not occur with weekly ‘low’ releases of 0.62 sterile moths per ha of Bt cotton (Fig. 1). With refuges accounting for 2 to 20% of the total area planted to cotton, resistance evolved more slowly with increases in the number of sterile moths released, increases in the refuge percentage, or both (Fig. 1). With 20% of cotton planted to non-Bt cotton refuges, resistance did not occur in 20 years, even without sterile releases (Fig. 1). Because of fitness costs associated with pink bollworm resistance to Bt cotton, higher refuge percentages not only reduced the proportion of the population exposed to selection for resistance but also increased selection against resistance8,20. In a hypothetical worst-case scenario with dominant inheritance of resistance and no refuges, resistance evolved in 1 year with no sterile moths, but populations did not persist and resistance did not occur with weekly releases of 78 sterile moths per ha of Bt cotton (Supplementary Fig. 1). The mean release rate of sterile pink bollworm moths achieved from 2006 to 2009 in Arizona was >600 times higher than the simulated rate that suppressed recessive resistance to Bt cotton for >20 years without refuges (Fig. 1). 
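A deliberately simplified, deterministic Python sketch conveys the two effects of sterile releases described above. It uses the single-locus survival values from the Figure 1 legend, but the fecundity and the per-generation sterile release size are illustrative assumptions, and it omits the spatial structure, stochasticity and fitness costs of the authors' model:

```python
def simulate(refuge_frac, steriles_per_gen, years=20, p_r=0.018, n=1e6):
    """Track the r allele frequency at a single locus under Bt cotton.
    Larval survival follows the Figure 1 legend: 0% for ss/rs and 15% for
    rr on Bt cotton; 20.8% for ss/rs and 17.7% for rr on refuge (non-Bt)
    cotton. Matings with released sterile moths leave no progeny.
    Returns (years elapsed, r allele frequency, population size)."""
    surv_bt = {"ss": 0.0, "rs": 0.0, "rr": 0.15}
    surv_ref = {"ss": 0.208, "rs": 0.208, "rr": 0.177}
    fecundity = 30.0  # offspring per fertile mating (illustrative value)
    for year in range(1, years + 1):
        q = 1.0 - p_r
        geno = {"rr": p_r ** 2, "rs": 2 * p_r * q, "ss": q ** 2}
        adults = {g: n * geno[g] *
                  ((1 - refuge_frac) * surv_bt[g] + refuge_frac * surv_ref[g])
                  for g in geno}
        total = sum(adults.values())
        if total < 1.0:
            return year, p_r, 0.0  # population eliminated
        p_r = (adults["rr"] + 0.5 * adults["rs"]) / total
        # Probability that a wild moth's mate is fertile rather than sterile
        fertile = total / (total + steriles_per_gen)
        n = total * fertile * fecundity
    return years, p_r, n

# Without refuges or steriles, only rr larvae survive and resistance is
# immediate; a large sterile release instead drives the population to zero.
print(simulate(0.0, 0.0)[1], simulate(0.0, 1e5)[2])  # → 1.0 0.0
```

Because this sketch ignores fitness costs and density dependence, it understates the refuge effects shown in Figure 1; it is meant only to illustrate how sterile matings simultaneously shrink the population and block resistant reproduction.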
For each year from 2006 to 2009, the mean number of sterile moths released per ha per week was 380 (range = 170–830) for Bt cotton and 4,400 (range = 3,900–5,200) for non-Bt cotton, with
a statewide total of 1.7 to 2.1 billion sterile moths released per year. Before sterile releases began in Arizona in 2006, the percentage of cotton planted to non-Bt cotton refuges statewide was >25% in all years, with a mean of 37.4% for 1997 to 2005 (Supplementary Fig. 2). With the onset of sterile releases, the statewide refuge percentage declined to 15.4% in 2006, 8.4% in 2007, 2.3% in 2008 and 3.1% in 2009 (Supplementary Fig. 2). Consistent with the simulation results (Fig. 1), monitoring of pink bollworm field populations showed no net decrease in susceptibility to Cry1Ac from 1997 to 2005, when the refuge percentage was >25% every year, or from 2006 to 2009, when sterile insects were released and the mean refuge percentage was 7.3% (Fig. 2 and Supplementary Fig. 2). DNA screening for the three mutations in the cadherin gene that are linked with pink bollworm resistance to Cry1Ac did not identify any resistant alleles during 2006 to 2009 (n = 2,499) (Online Methods). Based on larval survival on diet treated with Cry1Ac, bioassays detected a single resistant individual during 2006 (n = 3,822), but no resistant individuals were found during 2007 or 2008 (n = 3,602) (Fig. 2) (Online Methods). Bioassays also detected no larvae resistant to Cry2Ab during 2007 or 2008 (n = 2,572). As detailed below, in 2009, this pest was so scarce in Arizona that we could not collect enough individuals to conduct bioassays. Since the eradication program began in 2006, pink bollworm populations have declined dramatically (Fig. 3). In 2009, only two pink bollworm larvae were found in 16,600 bolls of non-Bt cotton screened statewide. This yields an infestation rate of 0.012%, which represents a 99.9% decline from the 15.3% infestation rate in 2005 (Fig. 3a). Likewise, the number of wild male pink bollworm moths caught per trap per week dropped from 26.7 in 2005 to 0.0054 in 2009, a 99.98% decrease (Fig. 3b).
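The reported declines follow directly from the raw rates; a quick Python check:

```python
# Percentage declines quoted in the text, computed from the raw rates
infest_2005, infest_2009 = 15.3, 0.012   # % of non-Bt bolls infested
moths_2005, moths_2009 = 26.7, 0.0054    # wild males per trap per week

decline_infest = (1 - infest_2009 / infest_2005) * 100
decline_moths = (1 - moths_2009 / moths_2005) * 100
print(round(decline_infest, 1), round(decline_moths, 2))  # → 99.9 99.98
```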
The decrease in pink bollworm populations during the eradication program was steeper than the decline observed with the planting of Bt cotton before the eradication program began in Arizona (Fig. 3)21 and the declines in other target pests associated with planting of Bt crops in other regions22–26. Along with declines in pink bollworm populations, insecticide sprays against this pest fell to historic lows (Fig. 4). The mean number of sprays per ha per year targeting pink bollworm in Arizona was 2.7 from 1990 to 1995, which dropped to 0.64 from 1996 to 2005 with use of Bt cotton, before the eradication program (Fig. 4)10,27. Under the eradication program, this mean decreased to 0.14 in 2006, 0.013 in 2007, 0.0029 in 2008 and 0 in 2009 (Fig. 4). The mean yearly cost of pink bollworm to Arizona cotton growers, including
Figure 2 Pink bollworm resistance allele frequency (with 95% confidence intervals) in Arizona from 1997 to 2008, as estimated from laboratory bioassays with Cry1Ac (Online Methods). Data from 1997 to 2004 were reported previously8.
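Under recessive inheritance and Hardy-Weinberg proportions, the frequency of survivors in a diagnostic-dose bioassay equals the square of the r allele frequency, so a point estimate is the square root of the survivor proportion. The Python sketch below is a simplification of the authors' estimation procedure (it omits the confidence intervals plotted in Figure 2):

```python
from math import sqrt

def r_allele_frequency(resistant, tested):
    """Point estimate of the resistance (r) allele frequency from a
    diagnostic-dose bioassay, assuming recessive inheritance and
    Hardy-Weinberg proportions: survivors are rr, so the survivor
    proportion equals p_r**2. Illustrative simplification only."""
    return sqrt(resistant / tested)

# 2006 screen: 1 resistant larva among 3,822 bioassayed (see text)
print(round(r_allele_frequency(1, 3822), 4))  # → 0.0162
```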
Figure 3 Pink bollworm abundance in Arizona before and during the eradication program. (a) Larval infestation of non-Bt cotton bolls from 1997 to 2009. Analysis of covariance (Online Methods) shows that infestation (log [% infested non-Bt cotton bolls]) was significantly affected by year, treatment (before versus during the eradication program), and a year-by-treatment interaction (P ≤ 0.0001 for each factor and their interaction, r2 = 0.97). Linear regression shows that the slope, which indicates the decrease in infestation per year, was 18 times steeper from 2006 to 2009 (−0.81, r2 = 0.97, P = 0.012) than from 1997 to 2005 (−0.044, r2 = 0.42, P = 0.059). Data from 1997 to 2005 were reported previously14. (b) Wild male pink bollworm moths trapped in Bt cotton fields from 1998 to 2009. Analysis of covariance shows that the number of moths caught per trap per week (log transformed) was significantly affected by year, treatment (before versus during the eradication program), and a year-by-treatment interaction (P < 0.0001 for each factor and their interaction, r2 = 0.95). Linear regression shows that the slope, which indicates the change in moths trapped per year, was significantly negative from 2006 to 2009 (−1.0, r2 = 0.92, P = 0.04), but did not differ significantly from 0 from 1998 to 2005 (0.017, r2 = 0.071, P = 0.52).
yield losses and insecticide sprays, was $18 million for 1990 to 1995, $5.4 million from 1996 to 2005 and only $172,000 for 2006 to 2009 (ref. 28). By using Bt cotton as one component of a comprehensive integrated pest management program, Arizona growers also greatly reduced insecticide use against all cotton pests, including those not killed by Bt cotton, saving a cumulative total of $200 million from 1996 to 2009 (refs. 10,27). The estimated yearly cost of pink bollworm to US cotton growers before the eradication program was $32 million per year19, which is $2 million more than the mean annual cost of the eradication program in the United States and northern Mexico from 2006 to 2009 (ref. 29). These estimates do not include the value of the indirect advantages of reduced insecticide use associated with the eradication program, such as conservation of natural enemies that control pests other than pink bollworm, and benefits to the environment and human health27. Although increasing economic gains are expected if pink bollworm remains scarce and program costs decline, eradication of pink bollworm from the United States and northern Mexico remains challenging because this invasive pest is widespread and resilient9,10. Nevertheless, our results show that pink bollworm resistance to
[Figure 4 plot: sprays per year (log scale) versus year, 1996–2009; asterisk marks 0 sprays in 2009.]
Figure 4 Mean number of insecticide sprays per ha per year targeting pink bollworm on cotton in Arizona from 1996 to 2009. The asterisk indicates 0 sprays in 2009. Analysis of covariance (Online Methods) shows that insecticide use (log [sprays + 0.0001]) was significantly affected by year, treatment (before versus during the eradication program), and a year-by-treatment interaction (P < 0.0001 for each factor and their interaction, r2 = 0.96). Linear regression shows that the slope, which indicates the change in sprays per year, was significantly negative from 2006 to 2009 (−1.0, r2 = 0.98, P = 0.01), but did not differ significantly from 0 from 1996 to 2005 (−0.035, r2 = 0.17, P = 0.17). The regression lines are not plotted because the 0 sprays for 2009 cannot be represented on the log scale used here and the regressions were calculated on a different scale (log [sprays + 0.0001]). Data from 1996 to 2005 were reported previously27.
Bt cotton in Arizona did not increase from 2006 to 2009, despite the low abundance of non-Bt cotton refuges. Although simulation results suggest that sterile releases alone can delay resistance to Bt crops (Fig. 1 and Supplementary Fig. 1), the dramatic decline in Arizona’s pink bollworm population probably reflects the combined effects of the sterile releases, high adoption and sustained efficacy of transgenic cotton producing either one or two Bt toxins and other control tactics used in the eradication program18,19. Although the sterile insect technique has been known for decades and used successfully in some cases30,31, the program described here is, to our knowledge, the first large-scale effort using this approach to suppress pest resistance to a transgenic crop. This program has benefitted from a strong grower commitment, public investment in sterile insect technology, a well-developed infrastructure for monitoring pink bollworm resistance and population density, virtually 100% efficacy of Bt cotton against pink bollworm, and this pest’s nearly exclusive dependence on cotton in Arizona9,10,18,19. We do not know whether the success of this program can be replicated with other pests or even with pink bollworm in other parts of the world. Analyses of mathematical models imply that refinements of the sterile insect technique, such as release of transgenic insects carrying a dominant lethal gene, could be more widely applicable for suppression of pests that harm crops or transmit pathogens31–33. Our results suggest that further exploration of such tactics could help to enhance the sustainability of Bt crops. The results reported here also illustrate the idea that Bt crops are likely to be most useful when combined with other tactics for integrated control of pests.
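The slopes quoted in the figure legends are least-squares fits on a log10 scale; this Python sketch reproduces the calculation with hypothetical trap-catch data (the `eps` offset mirrors the +0.0001 used for spray counts that include a zero):

```python
from math import log10

def log_slope(years, values, eps=0.0):
    """Least-squares slope of log10(value + eps) against year, as in the
    figure legends. Illustrative only; the paper used ANCOVA in addition
    to these per-period regressions."""
    x = [float(v) for v in years]
    y = [log10(v + eps) for v in values]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    return num / den

# Hypothetical catches dropping ~10-fold per year, as during eradication;
# a tenfold yearly decline corresponds to a slope of -1 on the log scale.
years = [2006, 2007, 2008, 2009]
catch = [5.0, 0.5, 0.05, 0.005]
print(round(log_slope(years, catch), 2))  # → -1.0
```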
Methods
Methods and any associated references are available in the online version of the paper at http://www.nature.com/naturebiotechnology/.
Note: Supplementary information is available on the Nature Biotechnology website.
Acknowledgments
We thank N. Alphey, D. Crowder, F. Gould, W. Hutchison, S. Naranjo, G. Rosenthal, R.L. Smith and K.M. Wu for their thoughtful comments and assistance; and E. Miller, D. Parker, M. Krueger, N. Manhardt and Arizona Cotton Research and Protection Council (ACRPC) staff members for their contributions to the eradication program. This work was supported by funding from the USDA National Institute of Food and Agriculture program, ACRPC, Arizona Cotton Growers Association, Cotton Foundation, Cotton Inc., National Cotton Council, Western IPM Center, Arizona Pest Management Center, and Monsanto and Dow AgroSciences.
AUTHOR CONTRIBUTIONS
M.S.S. conducted computer simulations; P.C.E. collected and summarized insecticide use data; L.A., L.L., M.W. and R.T.S. directed the eradication program and contributed to its design; J.A.F., G.C.U., A.J.Y., C.E.-K. and V.S.H. collected data; B.E.T., M.S.S., P.C.E., T.J.D., L.A., R.T.S., J.A.F., G.C.U., X.L. and Y.C. contributed to research design; Y.C. analyzed data; B.E.T. wrote the paper. All authors discussed the results and commented on the manuscript.
COMPETING FINANCIAL INTERESTS
The authors declare competing financial interests: details accompany the full-text HTML version of the paper at http://www.nature.com/naturebiotechnology/.
Published online at http://www.nature.com/naturebiotechnology/. Reprints and permissions information is available online at http://npg.nature.com/reprintsandpermissions/.
1. James, C. Global status of commercialized biotech/GM crops: 2009. ISAAA Briefs 41 (ISAAA, Ithaca, New York, USA, 2009).
2. Kruger, M.J., Van Rensburg, J.B.J. & Van den Berg, J. Perspective on the development of stem borer resistance to Bt maize and refuge compliance at the Vaalharts irrigation scheme in South Africa. Crop Prot. 28, 684–689 (2009).
3. Tabashnik, B.E., Van Rensburg, J.B.J. & Carrière, Y. Field-evolved insect resistance to Bt crops: definition, theory, and data. J. Econ. Entomol. 102, 2011–2025 (2009).
4. Storer, N.P. et al. Discovery and characterization of field resistance to Bt maize: Spodoptera frugiperda (Lepidoptera: Noctuidae) in Puerto Rico. J. Econ. Entomol. 103, 1031–1038 (2010).
5. Bagla, P. Hardy cotton-munching pests are latest blow to GM crops. Science 327, 1439 (2010).
6. Gould, F. Sustainability of transgenic insecticidal cultivars: integrating pest genetics and ecology. Annu. Rev. Entomol. 43, 701–726 (1998).
7. Tabashnik, B.E., Gassmann, A.J., Crowder, D.W. & Carrière, Y. Insect resistance to Bt crops: evidence versus theory. Nat. Biotechnol. 26, 199–202 (2008).
8. Tabashnik, B.E., Dennehy, T.J. & Carrière, Y. Delayed resistance to transgenic cotton in pink bollworm. Proc. Natl. Acad. Sci. USA 102, 15,389–15,393 (2005).
9. Henneberry, T.J. & Naranjo, S.E. Integrated management approaches for pink bollworm in the southwestern United States. Integrated Pest Management Rev. 3, 31–52 (1998).
10. Naranjo, S.E. & Ellsworth, P.C. Fourteen years of Bt cotton advances IPM in Arizona. Southwest. Entomol. 35, 439–446 (2010).
11. Tabashnik, B.E. et al. Frequency of resistance to Bacillus thuringiensis in field populations of pink bollworm. Proc. Natl. Acad. Sci. USA 97, 12,980–12,984 (2000).
12. Tabashnik, B.E. et al. Association between resistance to Bt cotton and cadherin genotype in pink bollworm. J. Econ. Entomol. 98, 635–644 (2005).
13. Carrière, Y. et al. Cadherin-based resistance to Bt cotton in hybrid strains of pink bollworm: fitness costs and incomplete resistance. J. Econ. Entomol. 99, 1925–1935 (2006).
14. Dennehy, T.J. et al. Susceptibility of Southwestern Pink Bollworm to Bt toxins Cry1Ac and Cry2Ab2 in 2005. in Arizona Cotton Report (P-151) (Cooperative Extension, Agricultural Experiment Station, The University of Arizona, Tucson, USDA, 2007).
15. Liu, Y.B., Tabashnik, B.E., Dennehy, T.J., Patin, A.L. & Bartlett, A.C. Development time and resistance to Bt crops. Nature 400, 519 (1999).
16. Stone, G.D. Biotechnology and the political ecology of information in India. Hum. Organ. 63, 127–140 (2004).
17. Carrière, Y. et al. Long-term evaluation of compliance with refuge requirements for Bt cotton. Pest Manag. Sci. 61, 327–330 (2005).
18. Antilla, L. & Liesner, L. Program advances in the eradication of pink bollworm, Pectinophora gossypiella, in Arizona cotton. in Proceedings of the 2008 Beltwide Cotton Conferences, Nashville, Tennessee, January 8–11, 2008, 1162–1168 (National Cotton Council of America, Memphis, Tennessee, USA, 2008).
19. Grefenstette, B., El-Lissy, O. & Staten, R.T. Pink Bollworm Eradication Plan in the US. (USDA, 2009).
20. Gassmann, A.J., Carrière, Y. & Tabashnik, B.E. Fitness costs of insect resistance to Bacillus thuringiensis. Annu. Rev. Entomol. 54, 147–163 (2009).
21. Carrière, Y. et al. Long-term regional suppression of pink bollworm by Bacillus thuringiensis cotton. Proc. Natl. Acad. Sci. USA 100, 1519–1523 (2003).
22. Adamczyk, J.J. Jr. & Hubbard, D. Changes in populations of Heliothis virescens (F.) (Lepidoptera: Noctuidae) and Helicoverpa zea (Boddie) (Lepidoptera: Noctuidae) in the Mississippi Delta from 1986 to 2005 as indicated by adult male pheromone traps. J. Cotton Sci. 10, 155–160 (2006).
23. Micinski, S., Blouin, D.C., Waltman, W.F. & Cookson, C. Abundance of Helicoverpa zea and Heliothis virescens in pheromone traps during the past twenty years in northwestern Louisiana. Southwest. Entomol. 33, 139–149 (2008).
24. Storer, N.P., Dively, G.P. & Herman, R.A. Landscape effects of insect-resistant genetically modified crops. in Integration of Insect-Resistant Genetically Modified Crops within IPM Programs (eds. Romeis, J., Shelton, A.M. & Kennedy, G.G.) 273–302 (Springer, The Netherlands, 2008).
25. Wu, K.M., Lu, Y.H., Feng, H.Q., Jiang, Y.Y. & Zhao, J.Z. Suppression of cotton bollworm in multiple crops in China in areas with Bt toxin-containing cotton. Science 321, 1676–1678 (2008).
26. Hutchison, W.D. et al. Areawide suppression of European corn borer with Bt maize reaps savings to non-Bt maize growers. Science 330, 222–225 (2010).
27. Naranjo, S.E. & Ellsworth, P.C. Fifty years of the integrated control concept: moving the model and implementation forward in Arizona. Pest Manag. Sci. 65, 1267–1286 (2009).
28. Ellsworth, P.C., Fournier, A. & Smith, T.D. Cotton Insect Losses (The University of Arizona, Cooperative Extension, 2010).
29. Anonymous. Pink Bollworm Eradication Current Status. (US Department of Agriculture, February, 2009). <http://www.aphis.usda.gov/plant_health/plant_pest_info/cotton_pests/downloads/PBW_prog_update2009.pdf>.
30. Krafsur, E.S. Sterile insect technique for suppressing and eradicating insect populations: 55 years and counting. J. Agric. Entomol. 15, 303–317 (1998).
31. Gould, F. & Schliekelman, P. Population genetics of autocidal control and strain replacement. Annu. Rev. Entomol. 49, 193–217 (2004).
32. Thomas, D.D., Donnelly, C.A., Wood, R.J. & Alphey, L.S. Insect population control using a dominant, repressible, lethal genetic system. Science 287, 2474–2476 (2000).
33. Alphey, N., Bonsall, M.B. & Alphey, L. Combining pest control and resistance management: synergy of engineered insects with Bt crops. J. Econ. Entomol. 102, 717–732 (2009).
nature biotechnology VOLUME 28 NUMBER 12 DECEMBER 2010
© 2010 Nature America, Inc. All rights reserved.
ONLINE METHODS
Pink bollworm eradication program. The goal of this ongoing program is to eradicate pink bollworm from the United States and northern Mexico18,19. The program includes: (i) mapping cotton field locations, sizes and types (Bt or non-Bt); (ii) measuring pink bollworm abundance by checking cotton bolls for larval infestation and by trapping adult males; (iii) monitoring pink bollworm resistance to Bt toxins Cry1Ac and Cry2Ab; and (iv) controlling pink bollworm using Bt cotton, sterile moth releases, cultural practices, mating disruption with pheromone in non-Bt cotton and minimal insecticide applications.

The grower-sponsored Arizona Cotton Research and Protection Council (ACRPC) proposed a plan to allow Arizona cotton growers to plant up to 100% Bt cotton that produces either one or two Bt toxins, with sterile insect releases instead of non-Bt cotton refuges for delaying resistance18. The US Environmental Protection Agency (EPA) convened a scientific advisory panel to review the plan34, and the EPA subsequently approved it. The eradication program began in Texas, New Mexico and Mexico in 2001 and has expanded each year since19. It was initiated in phases in Arizona, starting in 2006 with eastern and central Arizona (Cochise, Graham, Greenlee, Maricopa, Pima and Pinal Counties), which accounted for 83% of Arizona’s cotton acreage that year. The program extended to northwestern Arizona (La Paz and Mohave Counties) and southern California in 2007, and to southwestern Arizona (Yuma County) and Baja California in 2008. Funding for the binational program is provided by growers (80%) and the USDA-Animal Plant Health Inspection Service (APHIS) (20%)29.

Computer simulations. To assess the potential effects of sterile moth releases on evolution of resistance to Bt cotton, we used a previously described stochastic, spatially explicit model of pink bollworm resistance to Bt cotton35 with some modifications.
We modeled scenarios where resistance would evolve with limited refuges and no sterile releases, as reported for pink bollworm resistance to Bt cotton producing Cry1Ac in India5 and for some cases with other pests3. As detailed in the Supplementary Methods, we based assumptions primarily on empirical data for pink bollworm. However, to conservatively test the potential for sterile moth releases to delay resistance, we also used some assumptions that overestimate the rate of resistance evolution. Supplementary Table 1 summarizes the parameter values that we examined.

Sterile moth releases. The sterile moths released were from the APHIS strain of pink bollworm36 maintained at the USDA-APHIS Pink Bollworm Rearing Facility in Phoenix, Arizona. This strain originated from Arizona and was infused yearly with wild insects until 2000. Moths were marked internally by rearing larvae on artificial diet containing a fat-soluble dye37 (oil red dye no. 2114, Passaic Color and Chemical Co.). Moths were irradiated with 200 Gy in a Shepherd 484R Cobalt-60 irradiator and stored in containers in groups of 2 million at 4 °C for 1–2 d until release. Moths were released from small airplanes such as the Cessna model 206. Each plane had a tube underneath for releasing moths and a device that controlled the release rate. Moths were usually released from dawn to 11 a.m. at an altitude of roughly 150 m and a speed of ~180 km per hour.

Throughout each cotton-growing season from 2006 to 2009, 1.7–2.1 billion sterile pink bollworm moths were released over Arizona cotton fields. From May to October, each cotton field received releases two or three times per week. For each year from 2006 to 2009, the mean number of sterile moths released per ha per week was 380 (range = 170–830) for Bt cotton and 4,400 (range = 3,900–5,200) for non-Bt cotton. The release rate was higher for non-Bt cotton than for Bt cotton because larval survival and emergence of wild moths was expected to be higher in non-Bt cotton.
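The intuition behind the simulations can be illustrated with a much simpler model than the authors' stochastic, spatially explicit one (ref. 35). The sketch below is a deterministic single-locus model with recessive resistance, random mating, and entirely made-up parameter values (refuge fraction, growth rate, release numbers); it is not the published model, only an illustration of why sterile releases can substitute for refuges.

```python
# Minimal deterministic sketch of recessive resistance evolution with a
# refuge and sterile-moth releases. Illustrative only; all parameter
# values are hypothetical, not those of the published model.

def simulate(generations=15, p=0.01, n=1_000_000, refuge=0.05,
             sterile=0.0, growth=25.0, capacity=1_000_000):
    """Return (resistance allele frequency, population size) after the run."""
    for _ in range(generations):
        # Only rr homozygotes (frequency p**2) survive on Bt cotton;
        # all genotypes survive in the non-Bt refuge.
        bt_survivors = (1 - refuge) * n * p * p
        refuge_survivors = refuge * n
        survivors = bt_survivors + refuge_survivors
        if survivors < 1:            # population effectively eradicated
            return p, 0.0
        # r-allele frequency among survivors (Bt survivors are all rr).
        p = (bt_survivors + refuge_survivors * p) / survivors
        # A wild survivor's mate is fertile (i.e., wild, not sterile)
        # with probability survivors / (survivors + sterile releases).
        fertile = survivors / (survivors + sterile)
        n = min(capacity, growth * survivors * fertile)
    return p, n

# Small refuge, no sterile releases: resistance spreads while the
# population persists. Heavy sterile releases crash the population
# before resistance can take hold.
p_no_sit, n_no_sit = simulate(sterile=0.0)
p_sit, n_sit = simulate(sterile=5_000_000)
```

In this toy model, sterile moths carry no genes into the next generation; their effect is to block reproduction in proportion to the overflooding ratio, so a constant release collapses the wild population while the resistance allele is still rare.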
Whereas spatial separation between refuges and Bt crops could limit the efficacy of the refuge strategy, sterile insect releases were made directly into Bt and non-Bt cotton fields. Also, although temporal asynchrony in moth emergence between refuges and Bt cotton fields could reduce the effectiveness of the refuge strategy15, sterile releases were made frequently throughout the season, so that sterile moths were available consistently for mating with wild moths.

Refuge percentage. For 1998 to 2009, we determined the total area of cotton (Gossypium hirsutum (upland cotton) and G. barbadense (Pima cotton)) planted and the area planted to non-Bt cotton in Arizona using methods similar to those described previously17. We calculated the refuge percentage
as the area of non-Bt cotton divided by the total area of cotton, multiplied by 100%. Thus, our estimate of the refuge percentage includes non-Bt cotton planted by growers who planted no Bt cotton. In each year, an experienced field crew trained by the ACRPC mapped the position of cotton fields and collected information from producers on cotton type (Bt or non-Bt) throughout Arizona. In each year, data collected by field crews were mapped with Geographic Information System (GIS) software and validated by comparing cotton field locations between field-generated paper maps and computer-generated maps. Enzyme-linked immunosorbent assays for Cry1Ac from a randomly chosen subset of fields showed that all of the fields tested had been correctly identified on the GIS map17. We analyzed the GIS maps using ArcView software to calculate the area planted to each type of cotton in each year. The statewide Bt cotton percentage for 1997 was reported previously38.

For 1997 to 2009, Arizona’s yearly mean total area planted to cotton was 98,000 ha (range = 58,000–123,000 ha). In addition to cotton, pink bollworm larvae in Arizona feed on okra (Abelmoschus esculentus), which typically grew on less than 150 ha per year in Arizona, approximately 1/500th (0.2%) or less of the area planted to cotton. We did not include okra in our calculations of non-Bt cotton refuges, but sterile moths were released on okra at the same rate used on non-Bt cotton.

Resistance monitoring. We monitored pink bollworm resistance to Bt toxins Cry1Ac and Cry2Ab using bioassays8,11. We also used DNA screening to monitor resistance to Cry1Ac8,12,13,39–41. The bioassays can detect resistance caused by any mechanism. Because pink bollworm resistance to the diagnostic concentrations of Cry1Ac and Cry2Ab used in bioassays is recessive11–13,15,17,42, however, the bioassays do not distinguish between homozygous susceptible larvae and heterozygous larvae carrying only one copy of a resistance gene.
The DNA screening detects any of the three mutations in a cadherin gene that are tightly linked with resistance to Cry1Ac in several laboratory-selected strains of pink bollworm from Arizona that survive on Bt cotton plants12,13,39–41. Although the DNA screening can identify single resistance alleles in heterozygous insects, it detects only the three known cadherin resistance alleles.

Bioassays. To monitor pink bollworm resistance to Cry1Ac and Cry2Ab, we used previously described methods for field sampling, laboratory bioassays and data analysis8,11,42. To monitor resistance to Cry1Ac, each year from 1997 to 2008, we tested an average of 2,730 larvae from an average of 11.6 cotton fields in Arizona. The progeny of field-collected pink bollworm from each site were reared and tested separately. Neonates were tested individually for 21 d on artificial diet without toxin (control) or on diet with 10 μg Cry1Ac per ml diet, which kills susceptible homozygotes and heterozygotes but not resistant homozygotes11. Based on recessive inheritance of resistance, the Cry1Ac resistance allele frequency for each site was estimated as the square root of the frequency of survivors after adjustment for control mortality11. We calculated the 95% confidence interval for each yearly statewide mean resistance allele frequency using the bootstrap method with 10,000 repetitions8. Bioassay data from 1997 to 2004 were reported previously8.

To monitor resistance to Cry2Ab, we used methods similar to those described above for Cry1Ac. Neonates were tested individually for 21 d on artificial diet without toxin (control) or on diet with a diagnostic concentration of 10 μg Cry2Ab per ml diet42. The numbers of larvae tested at the diagnostic concentration of Cry2Ab were 2,052 larvae from nine cotton fields in 2007 and 520 larvae from two cotton fields in 2008.
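The bioassay calculation above (allele frequency as the square root of the control-adjusted survivor frequency, with a 10,000-repetition bootstrap for the 95% confidence interval) can be sketched as follows. The per-site survivor counts, sample sizes and control mortality below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Sketch of the resistance-allele-frequency estimate and bootstrap CI.
# All data values are hypothetical.
import math
import random

def allele_freq(survivors, tested, control_mortality):
    """Control-adjusted survivor frequency -> allele frequency (recessive)."""
    adjusted = (survivors / tested) / (1 - control_mortality)
    return math.sqrt(adjusted)

def bootstrap_ci(site_freqs, reps=10_000, seed=1):
    """95% CI for the mean site frequency, bootstrapping over sites."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(site_freqs, k=len(site_freqs))) / len(site_freqs)
        for _ in range(reps)
    )
    return means[int(0.025 * reps)], means[int(0.975 * reps)]

# Hypothetical survivor counts at the diagnostic concentration for ten
# sites, each testing 250 larvae with 5% control mortality:
sites = [allele_freq(s, 250, 0.05) for s in (0, 1, 0, 2, 0, 0, 1, 0, 0, 3)]
lo, hi = bootstrap_ci(sites)
```

Because resistance is recessive, only rr homozygotes survive the diagnostic concentration, so the square root converts survivor (genotype) frequency into allele frequency under Hardy-Weinberg assumptions.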
The sample size was smaller in 2008 because the scarcity of pink bollworm in that year made it difficult to collect enough live individuals to conduct bioassays.

DNA screening. We used previously described field sampling procedures and allele-specific PCR methods to screen for three cadherin mutations linked with pink bollworm resistance to Cry1Ac8,12,13,39–41. Details of the methods and results of DNA screening from 2001 to 2005 were reported previously41. DNA screening was completed here for the following numbers of wild pink bollworm individuals collected from the field in Arizona: 1,033 in 2006, 884 in 2007, 364 in 2008 and 218 in 2009 (total = 2,499).

Pink bollworm abundance. Pink bollworm abundance was measured with two complementary methods: checking bolls of non-Bt cotton for larvae and capturing male moths in Bt cotton fields with pheromone-baited traps.
doi:10.1038/nbt.1704
Bolls. Pink bollworm abundance in bolls of non-Bt cotton plants in Arizona was determined from 1997 to 2009 by sampling bolls from commercial cotton fields during August to November and cutting them open to check for larvae as described previously14. The mean sample size per year was 18,200 bolls (range = 2,900–54,300) from 7 to 44 cotton fields.

Pheromone traps. Male pink bollworm moths were captured in Bt cotton fields from 1998 to 2009 using delta traps baited with septa impregnated with 4 mg of the pink bollworm female sex pheromone gossyplure (1:1 mixture of the Z,E-7,11 and Z,Z-7,11 isomers of hexadecadienyl acetate, Shin-Etsu Corporation)43. Traps were placed near field edges, usually at a height of 0.8 m (range = 0.5–1.5 m). Traps and lures were changed every week. Traps were brought into the laboratory, where moths were identified under magnification. During the eradication program, when sterile moths were released, sterile males in traps were identified visually by the red dye used in their larval diet. When wild pink bollworm males were rare in traps, particularly during 2008 and 2009, pink bollworm males that did not readily show dye were subjected to additional testing as follows: each male was ground individually with a glass rod in a glass shell vial containing several milliliters of acetone. A strip of Whatman no. 4 filter paper trimmed to a point at the top was put in the glass vial. After the acetone rose to the top of the filter paper and evaporated, the male was deemed sterile if red dye appeared at the top of the filter paper and wild if no red dye was visible.

We analyzed data on wild males caught in traps that were collected from the field each year from 15 April to 15 June because data for this period were available for all years from 1998 to 2009. The mean number of traps per year was 10,400 (range = 1,498–29,928), with a seasonal total of up to nine traps (one per week) at each monitoring site.
More than 150 Bt cotton fields were monitored with traps each year. A previous analysis based on data from 1992 to 2001 (before the eradication program) showed significant declines in pink bollworm males captured in pheromone traps in regions of Arizona where abundance of Bt cotton was high, but not in regions where abundance of Bt cotton was low21. Compared with the data reported and analyzed here (Fig. 3b), the previous analysis differs in terms of the years studied (1992 to 2001 before; 1998 to 2009 here), the criteria for standardizing the time period examined within years (accumulation of degree-days before; calendar dates here), spatial scale (15 regions of Arizona analyzed separately before; statewide data pooled here), and the distribution of pheromone traps (traps near all cotton fields before; only traps in Bt cotton fields here). The analysis here shows a steep statewide decline in males captured in pheromone traps in Bt cotton fields during the eradication program (2006 to 2009) but not before the eradication program (1998 to 2005) (Fig. 3b).

Insecticide sprays. Insecticide sprays targeting pink bollworm in Arizona cotton from 2006 to 2009 were calculated by adding the number of sprays made by growers and by the ACRPC as part of the eradication program. The ACRPC sprayed when larval infestation reached or exceeded 5% of bolls. Growers sprayed based on their own criteria. The number of sprays made by growers against pink bollworm was estimated as described previously28. The mean number of sprays per ha per year made by growers against pink bollworm was 0.068 in 2006, 0.0095 in 2007, 0 in 2008 and 0 in 2009 (ref. 10). The mean number of sprays per ha per year made by the ACRPC against pink bollworm was 0.070 in 2006, 0.0035 in 2007, 0.00029 in 2008 and 0 in 2009. Data for 1996 to 2005, which reflect only sprays made by growers, were reported previously27.

Analysis of data on pink bollworm abundance and insecticide sprays.
We calculated the percentage decrease in abundance as: 100% − [(final abundance/initial abundance) × 100%]. For example, infestation of non-Bt cotton bolls was 0.012% in 2009 (final abundance) and 15.3% in 2005 (initial abundance). The percentage decrease in abundance was 100% − [(0.012%/15.3%) × 100%], which equals 99.9%.

We tested for effects of the eradication program on three response variables: (i) infestation of non-Bt cotton bolls, (ii) wild males caught per trap per week in Bt cotton fields, and (iii) insecticide sprays per ha per year targeting pink bollworm. To test for effects of the eradication program on infestation of non-Bt cotton bolls, we compared trends for two time periods: years with Bt cotton before the eradication program (1997–2005) and years during the eradication program (2006–2009). For each period, simple linear regression was used to evaluate the association between the percentage of infested bolls (log transformed) and year. We also used covariance analysis to evaluate the effects of year, treatment (before versus during the eradication program), and their interaction on the percentage of infested bolls (log transformed). In this analysis a significant interaction term indicates that the slope before the eradication program differs from the slope during the eradication program. For the covariance analysis of the boll infestation data, years before the eradication program were coded as 1 (1997) to 9 (2005) and during the eradication program as 1 (2006) to 4 (2009).

As with the boll infestation data, we used simple linear regression and covariance analysis to evaluate the effects of the eradication program on the number of wild males caught per trap per week in Bt cotton fields (log transformed). For this analysis, years with available trap data before the eradication program were 1998–2005 and were coded as 1–8. We used the same approach to analyze the effects of the eradication program on the number of insecticide sprays targeting pink bollworm.
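The percentage-decrease formula and the per-period regression slopes described above can be sketched in Python. The actual analyses were performed in JMP; this illustrative version checks the worked boll-infestation example (15.3% in 2005 to 0.012% in 2009) and shows the least-squares slope used in the simple linear regressions, omitting the covariance analysis.

```python
# Sketch of the percentage-decrease calculation and the least-squares
# slope for the simple linear regressions described above. The actual
# analyses were run in JMP; this is an illustrative re-implementation.

def percent_decrease(initial, final):
    """100% - [(final abundance / initial abundance) x 100%]."""
    return 100.0 - (final / initial) * 100.0

def slope(years, log_values):
    """Ordinary least-squares slope of log-transformed values on year."""
    n = len(years)
    mx = sum(years) / n
    my = sum(log_values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, log_values))
    sxx = sum((x - mx) ** 2 for x in years)
    return sxy / sxx

# Worked example from the text: boll infestation fell from 15.3% (2005)
# to 0.012% (2009), a decrease of about 99.9%.
drop = percent_decrease(initial=15.3, final=0.012)
```

Fitting `slope` separately to the before-eradication and during-eradication year codings reproduces the per-period regressions; comparing the two slopes is what the interaction term in the covariance analysis tests formally.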
Because the spray data included a zero (for 2009), we added 0.0001 to the number of sprays before performing the log transformation44. For sprays, years analyzed before the eradication program were 1996 to 2005 and were coded as 1–10. Analyses were performed in JMP45.

34. U.S. Environmental Protection Agency. Scientific Advisory Panel, October 24–26, 2006: Evaluation of the resistance risks from using 100% Bollgard and Bollgard II cotton as part of a pink bollworm eradication program in the state of Arizona.
35. Sisterson, M.S., Antilla, L., Carrière, Y., Ellers-Kirk, C. & Tabashnik, B.E. Effects of insect population size on evolution of resistance to transgenic crops. J. Econ. Entomol. 97, 1413–1424 (2004).
36. Liu, Y.B. et al. Effects of Bt cotton and Cry1Ac toxin on survival and development of pink bollworm (Lepidoptera: Gelechiidae). J. Econ. Entomol. 94, 1237–1242 (2001).
37. Graham, H.M. & Mangum, C.L. Larval diets containing dye for tagging pink bollworm moths internally. J. Econ. Entomol. 64, 376–379 (1971).
38. Sims, M.A. et al. Arizona’s multi-agency resistance management program for Bt cotton: sustaining the susceptibility of pink bollworm. in Proceedings of the 2001 Beltwide Cotton Conferences, January 9–13, 2001, Anaheim, California, 2, 1175–1179 (National Cotton Council of America, Memphis, Tennessee, USA, 2001).
39. Morin, S. et al. Three cadherin alleles associated with resistance to Bacillus thuringiensis in pink bollworm. Proc. Natl. Acad. Sci. USA 100, 5004–5009 (2003).
40. Morin, S. et al. DNA-based detection of Bt resistance alleles in pink bollworm. Insect Biochem. Mol. Biol. 34, 1225–1233 (2004).
41. Tabashnik, B.E. et al. DNA screening reveals pink bollworm resistance to Bt cotton remains rare after a decade of exposure. J. Econ. Entomol. 99, 1525–1530 (2006).
42. Tabashnik, B.E. et al. Asymmetrical cross-resistance between Bacillus thuringiensis toxins Cry1Ac and Cry2Ab in pink bollworm. Proc. Natl. Acad. Sci. USA 105, 11,889–11,894 (2009).
43. Tabashnik, B.E. et al. Dispersal of pink bollworm (Lepidoptera: Gelechiidae) males in transgenic cotton that produces a Bacillus thuringiensis toxin. J. Econ. Entomol. 92, 772–780 (1999).
44. Sokal, R.R. & Rohlf, F.J. Biometry (W.H. Freeman and Company, San Francisco, 1969).
45. SAS Institute. JMP 8.0, Cary, North Carolina, USA (2008).
errata and corrigenda
Erratum: GM alfalfa—who wins? Jeffrey L Fox Nat. Biotechnol. 28, 770 (2010); published online 9 August 2010; corrected after print 7 December 2010 In the version of this article initially published, Geertson Seed Farms was spelled “Geerston.” The error has been corrected in the HTML and PDF versions of the article.
Erratum: South-South entrepreneurial collaboration in health biotech Halla Thorsteinsdóttir, Christina C Melon, Monali Ray, Sharon Chakkalackal, Michelle Li, Jan E Cooper, Jennifer Chadder, Tirso W Saenz, Maria Carlota de Souza Paula, Wen Ke, Lexuan Li, Magdy A Madkour, Sahar Aly, Nefertiti El-Nikhely, Sachin Chaturvedi, Victor Konde, Abdallah S Daar & Peter A Singer Nat. Biotechnol. 28, 407–416 (2010); published online 7 May 2010; corrected after print 7 December 2010
In the version of this article initially published, a line was missing connecting India and China in Figure 3. The error has been corrected in the HTML and PDF versions of the article.
Corrigendum: The BioPAX community standard for pathway data sharing Emek Demir, Michael P Cary, Suzanne Paley, Ken Fukuda, Christian Lemer, Imre Vastrik, Guanming Wu, Peter D’Eustachio, Carl Schaefer, Joanne Luciano, Frank Schacherer, Irma Martinez-Flores, Zhenjun Hu, Veronica Jimenez-Jacinto, Geeta Joshi-Tope, Kumaran Kandasamy, Alejandra C Lopez-Fuentes, Huaiyu Mi, Elgar Pichler, Igor Rodchenkov, Andrea Splendiani, Sasha Tkachev, Jeremy Zucker, Gopal Gopinath, Harsha Rajasimha, Ranjani Ramakrishnan, Imran Shah, Mustafa Syed, Nadia Anwar, Özgün Babur, Michael Blinov, Erik Brauner, Dan Corwin, Sylva Donaldson, Frank Gibbons, Robert Goldberg, Peter Hornbeck, Augustin Luna, Peter Murray-Rust, Eric Neumann, Oliver Reubenacker, Matthias Samwald, Martijn van Iersel, Sarala Wimalaratne, Keith Allen, Burk Braun, Michelle Whirl-Carrillo, Kei-Hoi Cheung, Kam Dahlquist, Andrew Finney, Marc Gillespie, Elizabeth Glass, Li Gong, Robin Haw, Michael Honig, Olivier Hubaut, David Kane, Shiva Krupa, Martina Kutmon, Julie Leonard, Debbie Marks, David Merberg, Victoria Petri, Alex Pico, Dean Ravenscroft, Liya Ren, Nigam Shah, Margot Sunshine, Rebecca Tang, Ryan Whaley, Stan Letovksy, Kenneth H Buetow, Andrey Rzhetsky, Vincent Schachter, Bruno S Sobral, Ugur Dogrusoz, Shannon McWeeney, Mirit Aladjem, Ewan Birney, Julio Collado-Vides, Susumu Goto, Michael Hucka, Nicolas Le Novère, Natalia Maltsev, Akhilesh Pandey, Paul Thomas, Edgar Wingender, Peter D Karp, Chris Sander & Gary D Bader. Nat. Biotechnol. 28, 935–942 (2010); published online 09 September 2010; corrected after print 7 December 2010 In the version of this article initially published, the affiliation for Ken Fukuda was incorrect. The correct affiliation is Computational Biology Research Center, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan. The error has been corrected in the HTML and PDF versions of the article.
Corrigendum: Analysis of a genome-wide set of gene deletions in the fission yeast Schizosaccharomyces pombe Dong-Uk Kim, Jacqueline Hayles, Dongsup Kim, Valerie Wood, Han-Oh Park, Misun Won, Hyang-Sook Yoo, Trevor Duhig, Miyoung Nam, Georgia Palmer, Sangjo Han, Linda Jeffery, Seung-Tae Baek, Hyemi Lee, Young Sam Shim, Minho Lee, Lila Kim, Kyung-Sun Heo, Eun Joo Noh, Ah-Reum Lee, Young-Joo Jang, Kyung-Sook Chung, Shin-Jung Choi, Jo-Young Park, Youngwoo Park, Hwan Mook Kim, Song-Kyu Park, Hae-Joon Park, Eun-Jung Kang, Hyong Bai Kim, Hyun-Sam Kang, Hee-Moon Park, Kyunghoon Kim, Kiwon Song, Kyung Bin Song, Paul Nurse & Kwang-Lae Hoe Nat. Biotechnol. 28, 617–623 (2010); published online 16 May 2010; corrected after print 7 December 2010 In the version of this article initially published, the address of one of the authors, Young-Joo Jang, was incorrect. The correct address is Laboratory of Cell Cycle & Signal Transduction, WCU Department of NanoBioMedical Science, Institute of Tissue Regeneration Engineering, Dankook University, Cheonan, Korea. The error has been corrected in the HTML and PDF versions of the article.
careers and recruitment
Catching the wave in China Connie Johnson Hambley
China is presenting the pharmaceutical world with unprecedented opportunity. Here is what is happening there and how you can be a part of it.
Connie Johnson Hambley is at Steele Executive Search, Boston, Massachusetts, USA. e-mail: [email protected]

Catching a trend at its earliest stage and reaping its rewards as it advances is easy only in hindsight. It takes more than being able to identify a trend to ensure job security: it takes understanding the factors behind the trend to gauge its strength, longevity, direction and sustainability, as well as your place in its potential opportunities. Armed with this information, you can best assess your career options.

Seeds of a boom
In hindsight, it is easy to see the factors that created the biotech boom in the United States. The confluence of world-class academic institutions, human talent, innovative ideas, abundant investment capital, economic incentives and enjoyment of profits combined to create a magnet for the world’s brightest who wanted to be educated and work in the United States. Sporadic layoffs in big pharma were not inherently destabilizing because biotech companies were being created fast enough to absorb the talent. Many say there was no better place to be during the boom years than in Cambridge, San Diego or Research Triangle Park.

During the boom, having a PhD or hands-on experience was a guarantee of lifelong employment. It was easy to manage a career when you were in demand and had a pick of jobs. But now, those who can find jobs are often relocating families, taking on less desirable or part-time positions or taking pay cuts. This comes as a surprise to many who feel that they have finally arrived at the station only to find that their train has already left. In addition, venture capital investment in biotech startups has declined, with some
venture capitalists citing ‘investor fatigue’— characterized as a reluctance to ‘re-up’ financing of laboratories and talent as a drug moves closer to the clinic. As a result, companies cut research staff and run virtually, meaning they consist of management and project managers who coordinate with contract research organizations (CROs). Drug development is increasingly moving offshore through outsourcing, and big pharmas such as GlaxoSmithKline and Merck are creating R&D outposts in China, taking jobs with them. In the new reality, a global industry must now include Asia, not just the United States and Europe.
As a result, opportunities for pharmaceutical professionals are exploding in places like India and China. However, it is China that is poised to enjoy the greatest growth. The ZRG Partners Global Life Science Hiring Index1 states that in the third quarter of 2010 the Asia Pacific region enjoyed a 20% increase in hiring over the second quarter, whereas the Americas saw a 2% decline in hiring in the same period. Even before the recent decline, the seeds for the boom in China were being sown.

From ‘Made in China’ to ‘Created in China’
“A tidal wave of change is coming to China, and the goal for many is to ride the wave rather than be buried by it,” says Jason Mann, who advises biopharmaceutical firms on emerging market strategy with a focus on China. “As Americans we often view the
Chinese through the viewpoint of our own lifetime, as the sudden rise of a new power, but many Chinese view themselves through a historical lens of thousands of years.” According to Mann, we are now seeing the impact of reforms in China over the last 30 years. He notes that some observe an unspoken contract at work between the people and the government in post-Mao China: “Give the people an ever-rising standard of living through economic reform and they will focus less on political reform.” This emphasis on economic reform combined with China’s scale means that the government can mobilize tremendous resources behind favored sectors, with an increasing focus on life science investment and growth.

The Chinese government created a complex series of reforms and initiatives designed to move China from an economy based on manual labor and manufacturing to one of innovation and creation, as articulated in its most recent five-year plan. Biotechnology is a key focus, and the biotech industry in China has grown to nearly $9 billion from $3 billion barely five years ago, with much of that growth heavily supported by government investment. An estimated additional $1.5 billion investment in drug development has been announced. Barriers to entrepreneurial growth have been removed. Intellectual property laws were revised, and high-profile enforcement of those laws has given investors and scientists greater confidence. And with the reemergence of the stock exchange barely 20 years ago, the longstanding Chinese tradition of private enterprise has been enhanced by strengthening the mechanisms for distributing ownership over a larger number of people. Chinese companies are now mastering that truly entrepreneurial holy grail—the initial public offering. All of these factors show that China is poised for a prolonged period
of sustainable growth, which can translate into job longevity and career advancement.

The human factor

Restructuring laws and funneling money into infrastructure are helpful supports, but it is human talent that gets the work done. In drug discovery and development, Western-trained pharmaceutical professionals have an edge. Wang Huiyao, visiting fellow at the Brookings Institution and director general of the Center for China and Globalization, states that 1.62 million Chinese students have gone overseas for their education since 1978. Only 497,000 have returned to China, and only 8% of science and engineering PhD graduates have returned. As China builds its industry, the lack of seasoned professionals is strongly felt.

A hallmark of Western R&D culture is to question what is being presented rather than merely accept it. Scientific rigor drives innovation and creativity through a dynamic dialog in which peers and superiors question and critique findings, requiring scientists to push back with rebuttals to support them. Weaknesses are identified, unusual relationships are discovered and creative problem-solving results. This contrasts with traditional Chinese education, which takes a more top-down approach in which challenging an authority can be uncomfortable. The Western ethic produces high-quality skills uniquely suited to drug discovery.

In 2008, China launched its 'Thousand Talents' program, designed to shift China's economy from an investment-driven to a more human resources–driven one. With its large population viewed as a valuable natural resource, the program is designed to invest in and cultivate human capital. Business entrepreneurs, technical professionals and highly skilled workers make up three of the six targeted categories, and each is essential for creating and
sustaining a biotech and life science boom. The plan lures highly trained professionals and researchers to China's shores with financial incentives and then uses those professionals to further train local talent. China is also pouring money and resources into revising and deepening its educational system, both to address the needs of its population as a whole and to supply the human capital that its targeted industries require. The Thousand Talents program acknowledges that young minds need not only book training but also experienced professionals to guide them and impart knowledge. And these professionals need not be 'sea turtles', or Chinese returnees; the plan specifically allows for the recruitment of foreign nationals. Western-trained pharmaceutical talent receives the most interest.

"The young scientists in China are hungry for knowledge. They want to learn and they want to learn from the best," says Yun He, chief scientific officer of BioDuro, the Beijing subsidiary of US-based CRO giant PPD. "The most important factor for success is to have the right talent. The people part is hard. [The] money part is easier." Focusing on cultivating talent in-house, BioDuro launched the BioDuro Learning Institute in 2009, using professional instructors and its own senior management to provide instruction on everything from management skills to good laboratory practices to English.

Limited time of opportunity

Language is not a barrier to working in China. "Most scientists are surprised when they walk into a lab in Beijing and sit down and interact with a team," states John Oyler, president and CEO of BeiGene, a startup cancer research company in Beijing. English is the predominant language in most laboratories because senior and middle managers received essential scientific training in the United States. But Oyler cautions there could be "a short window of time" for a non-Chinese
speaking employee to reap all of the rewards of the current climate. Both He and Oyler estimate this window of unequalled opportunity at roughly five years. "The quality that already exists in China makes it almost too late for some synthetic chemistry and perhaps structural biology professionals," continues Oyler, citing the more systematic and formulaic disciplines better suited to current Chinese educational standards. "Medicinal chemistry scientists have a shorter window. Folks in development or other more specialized areas of biology may have ten years, and translational research may be even longer given the sheer number of patients and the importance of proximity to the clinic." These time horizons are consistent with the goals of both the five-year plan and the Thousand Talents program.

Certainly stellar talent knows no geographical bounds: if you are preeminent in your field, you will have opportunities regardless of country. So what does this mean for you? In China, Western-trained scientists have an edge most are not aware of, and it comes from the very essence of their training. The pharmaceutical industry has always shifted workers around the globe to meet its needs, and China is the new frontier. Its growth is not a passing trend but a major force in its early stages. Individual employees may be powerless to avoid being laid off, but they are not powerless to take advantage of China's opportunities.

COMPETING FINANCIAL INTERESTS
The author declares competing financial interests: details accompany the full-text HTML version of the paper at http://www.nature.com/naturebiotechnology/.

ACKNOWLEDGMENTS
C.J.H. would like to thank Michael Russo at the Bruckner Group for his help in the preparation of this article.

1. ZRG Partners. Global Life Science Hiring Index, http://www.zrgpartners.com/Documents/lifescienceindexq32010.pdf
people
PolyTherics (London) has announced the appointment of John Burt as chief business officer. He was most recently CEO and cofounder of Thiakis. His previous posts include positions at Imperial Innovations, GlaxoSmithKline and Vernalis. "We are delighted that John has joined us and are sure that his experience, notably with Thiakis and GSK, will be invaluable," says PolyTherics CEO Keith Powell. "We now have a management team which, together with our chairman, Ken Cunningham, has the breadth of experience with both biotech and large pharma companies to take PolyTherics to the next stage in its evolution."
Vical (San Diego) has announced the appointment of Igor P. Bilinsky as senior vice president, corporate development. Bilinsky was previously at Halozyme Therapeutics, where he served most recently as vice president, business development and special operations.

Aastrom Biosciences (Ann Arbor, MI, USA) has named Ronald Cresswell to its board of directors. Cresswell is the former senior vice president and CSO of Warner-Lambert and has served on the boards of directors of Esperion Therapeutics, Allergan, Curagen and Vasogen.

Milind S. Deshpande has been promoted to president of R&D and retains his position as CSO, and Mary Kay Fenton has been promoted to senior vice president and retains her position as CFO at Achillion Pharmaceuticals (New Haven, CT, USA). Deshpande previously served as the company's executive vice president of R&D, and Fenton served as vice president.

Jeffrey D. Edelson has been appointed chief medical officer of Palatin Technologies (Cranbury, NJ, USA). Edelson has more than 15 years of clinical development experience in the pharma and biotech industries, most recently serving as executive vice president of R&D and chief medical officer of Ikano Therapeutics. Before that, he was vice president and therapeutic area head of novel therapeutics for Johnson & Johnson Pharmaceutical Research and Development. In addition, Palatin announced the resignation of Trevor Hallam as executive vice president of R&D, effective December 31. His departure results from the company's decision last quarter to eliminate research and discovery activities and focus on advancing its phase 2 clinical candidates.
Ambit Biosciences (San Diego) has named biotech industry veteran Faheem Hasnain as chairman of the board and Alan Fuhrman as CFO. From December 2008 until its acquisition by Abbott Laboratories in April, Hasnain was president, CEO and a director of Facet Biotech. Previously he was with PDL BioPharma, Biogen Idec, Bristol-Myers Squibb and GlaxoSmithKline. Fuhrman joins Ambit from Naviscan, where he served as vice president and CFO. He also held positions at Sonus Pharmaceuticals and Integrex.

James Patrick Kelly has been appointed senior vice president, CFO, treasurer and secretary of Vanda Pharmaceuticals (Rockville, MD, USA). Kelly has more than 18 years of financial and operating experience in the healthcare industry, most recently as vice president, controller at MedImmune.

The board of directors of Athersys (Cleveland, OH, USA) has elected Ismail Kola as a director of the company. Kola serves as executive vice president of Belgium-based pharma company UCB and, since November 2009, as president of UCB New Medicines, UCB's discovery research through proof-of-concept organization. He also serves on the boards of directors of Astex Pharmaceuticals, Synosia and Ondek, and has previously served on the boards of companies such as Eragen Biotech, Promega and Ingene.

Regeneron Pharmaceuticals (Tarrytown, NY, USA) has elected Christine A. Poon to fill a new seat on its expanded board of directors. Poon currently serves as dean of Ohio State University's Fisher College of Business. She was formerly vice chairman of the board of directors of Johnson & Johnson and
worldwide chairman of the Johnson & Johnson Pharmaceuticals Group.

Vestaron (Kalamazoo, MI, USA) has announced the election of John Sorenson as chairman and Elin Miller as vice chairman of the company's board of directors. Both were previously members of the board, and former chairman Eli Thomssen remains on the board. Sorenson was formerly president of Syngenta Biotechnology and, before that, president of Syngenta Seeds. Miller previously held senior management positions with Dow, Dow AgroSciences and Arysta LifeScience.

Amarin (Dublin, Ireland, and Mystic, CT, USA) has announced the appointment of two replacements for president, CEO and director Colin Stewart, who resigned in November to address personal matters. Joseph S. Zakrzewski, Amarin's chairman of the board since January 2010, has been named CEO while continuing to serve as chairman. He most recently served as CEO of Xcellerex. Additionally, John F. Thero has been appointed president of Amarin. Since November 2009, Thero had been the company's CFO.

Gentris (Morrisville, NC, USA) has appointed Sam C. Tetlow executive chairman of the company's board of directors. He currently serves as vice president of business development for Calvert Research and as a director of Immunologix. In addition, Gentris has named Howard McLeod as the company's first chief scientific advisor. McLeod is the Fred N. Eshelman Distinguished Professor and director of the Institute for Pharmacogenomics and Individualized Therapy at the University of North Carolina at Chapel Hill. He serves as principal investigator for the CREATE Pharmacogenomics Research Network and is a member of the FDA Committee on Clinical Pharmacology.

Timothy P. Walbert has been elected to the board of directors of antibody therapeutics developer XOMA (Berkeley, CA, USA).
A seasoned executive with experience launching therapeutic products including Abbott Laboratories’ Humira (adalimumab), Walbert currently serves as chairman, president and CEO of privately held Horizon Pharma.